3 Lie Groups
Quite possibly the most beautiful marriage of concepts in all of mathematics, a Lie
group is an object endowed with rich structure and great symmetry. It is also of powerful
use to physicists in all fields, due to its intimate connection with physical symmetries. Now
that we finally have the tools and the language with which to express Lie groups, we are
ready to explore them in some detail. We have studied group theory, and we've also done
some work with manifolds. We now meld these two important mathematical concepts
together.
A Lie group is a smooth manifold endowed with a group structure. Call this manifold
G. Points on G can be multiplied:
There exists a smooth multiplication map
M: G × G → G
(g₁, g₂) ↦ g₁ · g₂
There also exists a smooth inversion map
I: G → G
g ↦ g⁻¹
In other words, if g₁ is near g₂ on the manifold, we expect h · g₁ to be near h · g₂, for any h.
Similarly, we expect g₁⁻¹ to be near g₂⁻¹.
Endowing a Manifold with Group Structure
It is sometimes the case that we can take a manifold with which we are already
familiar, and add a group structure, by providing smooth multiplication and inversion maps.
For instance, R (under addition) and R* (the nonzero reals under multiplication) provide
immediate examples. A more interesting example is the circle S¹, where each point p is
given by an angle θ(p). Points can be “multiplied” in an intuitive fashion: θ(p · q) = θ(p) +
θ(q) (mod 2π). We could also think of this group as the set of complex phases, p ↔ e^{iθ(p)},
q ↔ e^{iθ(q)}. Literally multiplying these phases together gives us the same group
multiplication law. Note that this is an abelian group. It is called U₁, for reasons
that will be explained soon.
We cannot endow a general manifold with a group structure. For example, the sphere, S²,
cannot be given a group structure.
Figure 3.1 The circle can be given the group structure of rotations specified by a single angle, θ.
3.1 Matrix Groups
The most potent examples of Lie groups are matrix groups, i.e. subgroups of GLnR or
GLnC. We can readily see that GLnR is a manifold by choosing our coordinates to be the
n² matrix entries. The only requirement is that the determinant of the matrix be nonzero.
We can simply remove the locus of coordinates which combine to give zero determinant,
and we will be left with an open set in Rⁿ², because the determinant is continuous when
considered as a map from Rⁿ² → R. Similarly, any closed subgroup of GLnR or GLnC can
be expressed as a manifold in the local coordinates of matrix entries, subject to the given
conditions which specify the subgroup.
It will be useful to review several examples of matrix groups. We start with subgroups of
GLnR:
1. The special linear group, SLnR: The subgroup of n × n matrices with unit
determinant. This is closed, since determinants multiply (Problem 2.13). SLnR is (n²
– 1) dimensional, since the determinant condition places a one-dimensional
restriction on the coordinates.
2. The group of rotations and reflections in n dimensions can be represented by On, the
group of orthogonal matrices, Aᵀ = A⁻¹, or Aᵀ·A = 1. This is easily shown to be a
closed subgroup, because (A·B)ᵀ = Bᵀ·Aᵀ = B⁻¹·A⁻¹ = (A·B)⁻¹.
3. The group of rotations only (omitting reflections) can be represented by SOn, the
special orthogonal group, which is the group of orthogonal matrices restricted to the
unit-determinant condition. This is not a very strong restriction, as orthogonal
matrices can only have determinant ±1.
We shall also be interested in some subgroups of GLnC:
4. Un = unitary n × n matrices. A unitary matrix satisfies U†·U = 1, where U† = (U*)ᵀ is the
hermitian conjugate of U (the complex conjugate of U's transpose). As a special case,
if we set n = 1, U₁ = 1 × 1 complex matrices, which is just a fancy way of expressing
complex numbers, but subject to the condition that u*·u = |u|² = 1. These are just
complex phases, e^{iθ}, like those used earlier to describe the circle, S¹.
5. SUn = special unitary matrices, i.e. unitary matrices with unit determinant.
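As a quick numeric sketch of the defining conditions above (the sample elements and the use of numpy are illustrative assumptions, not from the text):

```python
import numpy as np

theta = 0.3

# SO(2): a rotation matrix satisfies R^T R = 1 and det R = +1.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# U(1): a 1x1 complex "matrix" u = e^{i theta} satisfies u* u = 1.
u = np.exp(1j * theta)
assert np.isclose(abs(u), 1.0)

# U(2): a sample unitary built from a phase times a rotation.
U = np.exp(1j * theta) * R.astype(complex)
assert np.allclose(U.conj().T @ U, np.eye(2))
```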
A distinction should be made before we continue. A Lie group like SO₃ can act on a
manifold like S², in that SO₃ can be thought of as the group of rotations of the points on the
sphere. However, the manifold SO₃ is not the same as the manifold S². It has a very
different shape, which we will explore eventually, but you might quickly note that SO₃ has
dimension 3 when viewed as a set of nine coordinates subject to orthogonality conditions,
while S² is only two-dimensional. It is important to make a clear distinction between the Lie
group itself and any manifold on which it may act.
Problem 3.1 Determine the manifold dimensionality of On, SOn, Un, and SUn.
One of the reasons that the structure of a Lie group is so useful is that we can get
from any point on G to any other point via a smooth invertible map Lg: G → G. This map is
simply left multiplication in G, Lg(h) = g·h. This provides us with a smoothly varying family
of maps, and as stated before, we can use this map to get from any point h₁ to any other
point h₂, by simply setting g = h₂·h₁⁻¹.
3.2 The Tangent Space of a Lie Group
Any manifold has a tangent space TpM at each point p. This, of course, is a vector
space, so we can always add elements of TpM together. In the special case that the manifold
is also a group, the tangent space has a way of inheriting a multiplication operation from the
multiplicative structure of the group, meaning we can multiply two tangent vectors together
to get a new tangent vector. This multiplication of vectors will be made explicit shortly.
Since TpG is a group under addition, and also has a compatible multiplication operation, we
see that it is an algebra (Section 2.6). Specifically, TpG is called the Lie algebra of G, often
denoted Ꮭ[G] or g. (The latter notation predictably runs into the issue of ambiguity, as we
also use a lowercase “g” to denote a generic group element. However, it will be convenient,
for example, to write the Lie algebra of SO₃ as so₃. To deal with the ambiguity, we use gothic
lettering for the Lie algebra. For the reader without the penmanship skills to follow suit, we
suggest underlining the lowercase letters: so₃.) We will find that the Lie algebra of G
encodes much of the information about G itself. This provides a connection between the
local structure of TpG and the global structure of G.
Let's make things somewhat more explicit. First, we choose our coordinates on G to
be the matrix entries xij. Recall that we can think of any vector as a directional derivative,
V = Vⁱʲ ∂/∂xⁱʲ    (3.1)
If you are confused by the notation, just think of “ij” as a single index, specified by two
numbers. We could have written this out the usual way, like
V = Vᵏ ∂/∂xᵏ    (3.2)
where k runs from 1 to n², but we prefer a notation which encodes the multiplicative
structure of matrices.
Intuitively, we will reconstruct G from its Lie algebra by pushing forward our tangent
space from the identity element to every other point g ∈ G. This is accomplished by using
the pushforward of the Lg map of left-multiplication by group elements. Recall that one way
to view the pushforward map is to note that Lg smoothly and invertibly maps points near h
to points near g·h, and therefore it also maps curves through h to curves through g·h. Since
vector fields are the velocities of curves, the map Lg has a natural manifestation as a map on
vectors, which is exactly the pushforward map, Lg⁎: ThG → TghG. We saw in section 1.2 how
to express this in coordinate-dependent terms.
Problem 3.2 Starting with the coordinate-dependent form of Lg,

[Lg(h)]ⁱʲ = xⁱᵏ(g)·xᵏʲ(h)    (3.3)

show that the coordinate-dependent form of the pushforward map is

[Lg⁎V]ⁱʲ|ₕ = (g·V)ⁱʲ    (3.4)

matrix multiplication by g, viewing the vector components of V as matrix elements.
From problem 3.2 we determine that the pushforward of the left-multiplication map
is just given by the operation of g on the matrix-coordinate components of V. The fact that
the pushforward map has such a simple form in this coordinate representation will be of
great utility to us.
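The pushforward formula can be illustrated numerically. This is a hypothetical sketch (numpy and the random matrices are assumptions for illustration): the velocity of the mapped curve t ↦ g·(h + tV) at t = 0 is just g·V.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 3))
h = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 3))   # components V^{ij} arranged as a matrix

# Pushforward of left multiplication in matrix coordinates: g . V
pushforward = g @ V

# Finite-difference check: L_g maps the curve h + t V to g @ (h + t V),
# whose velocity at t = 0 is g @ V.
t = 1e-6
velocity = (g @ (h + t * V) - g @ h) / t
assert np.allclose(velocity, pushforward, atol=1e-4)
```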
Left-Invariant Fields
Now that we've introduced the Lie algebra as the tangent space of the Lie group, let us
introduce another useful manifestation of the Lie algebra: the set of left-invariant vector
fields of G.
Since we have a smoothly varying family of smooth maps Lg: G → G, we can push a
vector forward from any point on G to any other. Specifically, we can transport a vector
from the identity to every point g ∈ G simply by acting with Lg⁎. This generates a vector
field on G, known as a left-invariant vector field. It is called “left-invariant” because it is
invariant under the pushforward map associated with left-multiplication. The vector field X
being left-invariant just means that pushing the vector X|h at h to Lg⁎ X|h at g·h gives exactly
the same result as simply evaluating the vector field X|g·h at g·h.
As mentioned in section 1.2, there is a natural one-to-one linear correspondence
between vectors in the tangent space TeG at the identity, and left-invariant vector fields. We
can generate a unique left-invariant vector field from a vector Ve by pushing Ve forward via
V(g) = Lg⁎ Ve    (3.5)
for all g ∈ G. Going back the other direction is even easier; we can recover a vector Ve from
a left-invariant vector field V(g) simply by evaluating V(g) at the identity, Ve = V(e). Thus, the
left-invariant vector fields give us another manifestation of the tangent space TeG.
Figure 3.2 A left-invariant field can be found by pushing forward tangent vectors from the identity.
Additionally, the left-invariant fields on G will give us another powerful tool. We can
now define the multiplication law required for our Lie algebra:

V ∗ W = Ꮭ_V[W] = [V, W]    (3.6)
the Lie derivative of W with respect to V, where V and W are the left-invariant vector fields
corresponding to tangent vectors Ve and We. Recall from section 1.3 that the Lie derivative
can be expressed in the following coordinate-dependent form:
[V(g), W(g)]ⁱʲ = Vᵏˡ(g) ∂Wⁱʲ(g)/∂xᵏˡ − Wᵏˡ(g) ∂Vⁱʲ(g)/∂xᵏˡ    (3.7)
Problem 3.3 Show that this multiplication law for the Lie algebra is closed, by showing that
the Lie bracket (3.7) of two left-invariant vector fields is given by the following:

[V(g), W(g)] = Lg⁎(Ve·We − We·Ve)    (3.8)

where “V·W” signifies matrix multiplication on the coordinate indices. This is manifestly
left-invariant, showing that the Lie bracket of two left-invariant vector fields gives us another
left-invariant vector field.
From problem 3.3, we see that the Lie derivative has provided us with a multiplication
law between two left-invariant vector fields, which produces another left-invariant vector
field. Note, however, that (3.8) tells us about much more than just the closure of the Lie
algebra. On the left side of the equation is the Lie bracket, or vector commutator, of the two
vector fields, V and W. On the right side is the matrix commutator of the two sets of
components, expressed in matrix form, Vⁱʲ and Wⁱʲ. So, on a Lie group, the vector
commutator of two left-invariant vector fields is exactly the matrix commutator of the
corresponding matrix-coordinate components! Symbolically,

[V, W]vector = [V, W]matrix    (3.9)
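Since the right side of (3.9) is an ordinary matrix commutator, it is easy to check numerically that it has the algebraic properties expected of a Lie bracket. A small sketch (generic random matrices and numpy are assumptions for illustration):

```python
import numpy as np

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Antisymmetry of the bracket
assert np.allclose(comm(A, B), -comm(B, A))

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
jacobi = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(jacobi, np.zeros((3, 3)))
```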
3.3 From the Group to the Algebra and Back Again
It should come as no surprise that it is possible to construct the Lie algebra from
knowledge of the group structure. What may come as a surprise is that we can construct
most of the structure of a Lie group out of our knowledge of its Lie algebra. That is, if we
have a set of matrices {Xᵃ} satisfying

[Xᵃ, Xᵇ] = cabc Xᶜ    (3.10)

we can reconstruct a group manifold corresponding to the Lie algebra given by the
coefficients cabc, which are known as structure constants. In order to get there from here, we
must first study one-parameter subgroups and revisit the exponential map.
One-Parameter Subgroups
A one-parameter subgroup of G is a connected curve in G whose elements form a
subgroup. In the example of rotations SO₃, a one-parameter subgroup might be rotations
about a single axis. This set of transformations forms a curve parameterized by the angle θ
being rotated about the axis. There is a one-parameter subgroup in SO₃ for every possible
direction of the rotation axis. Each one-parameter subgroup must, of course, contain the
identity, meaning these curves all intersect at this point on the manifold.
We search for a natural way of constructing all the one-parameter subgroups H(t) of a
group G. Since all the one-parameter subgroups intersect at the identity, we should be able
to find them by looking in a neighborhood of the identity. Each one-parameter subgroup
H(t) has an associated velocity vector Ve = dH/dt at the identity (the identity will correspond
to H(t = 0) for convenience). We can generate the one-parameter subgroup HV(t) from the
velocity vector Ve, by making infinitesimal translations in the manifold. As ε tends to zero,

HV(ε)ⁱʲ = xⁱʲ(e) + ε Veⁱʲ = δⁱʲ + ε Veⁱʲ
Now, we've only moved an infinitesimal distance away from the identity, so we haven't yet
constructed much of the subgroup at all. However, we know we can always get new group
elements by multiplying old group elements together. In other words, we can translate a
macroscopic distance by making a large number of microscopic translations in the
subgroup, via matrix multiplication:
HV(N·ε) = [HV(ε)]ᴺ = [1 + ε Ve]ᴺ    (3.11)

Now, take t = N·ε:

HV(t) = [1 + t Ve/N]ᴺ    (3.12)

Then take the limit as N → ∞, and we find that our result is simply the matrix exponential:

HV(t) = exp_matrix(t Ve) = 1 + t Ve + ½ t² Ve·Ve + ...    (3.13)
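This limit can be checked numerically. In the sketch below (numpy assumed; the sample velocity vector is illustrative), the large product of infinitesimal translations converges to the matrix exponential, which for this rotation generator sums to a rotation matrix in closed form:

```python
import numpy as np

# Sample velocity vector at the identity; it generates rotations in the plane.
V_e = np.array([[0., -1.],
                [1.,  0.]])
t, N = 0.7, 200_000

# Large product of infinitesimal translations, as in (3.12)
H_approx = np.linalg.matrix_power(np.eye(2) + (t / N) * V_e, N)

# exp_matrix(t V_e), summed in closed form for this generator
H_exact = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])

assert np.allclose(H_approx, H_exact, atol=1e-4)
```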
Figure 3.3 A one-parameter subgroup is a subgroup which forms a curve in G. All one-parameter subgroups intersect at the identity.
This procedure shows that there exists only one possible one-parameter subgroup
corresponding to each velocity vector. Moreover, we can be sure this exhausts all connected
one-parameter subgroups, because they must all pass through the identity, and hence be
generated by the above. Since TeG = Ꮭ[G] is just the Lie algebra of G, we see that there is a
correspondence between one-parameter subgroups and elements of the Lie algebra. The
relationship becomes more apparent upon realizing that the velocity of a one-parameter
subgroup is always left-invariant.
Problem 3.4 Show that the velocity of a one-parameter subgroup is left-invariant:

LH(t)⁎ Ve = V(H(t)) = dH/dt    (3.14)
We can also go the other direction. Given a left-invariant vector field V(g), we can use
the exponential map to produce a one-parameter subgroup H(t) whose tangent vector is
V(H(t)) at every point H(t) along the curve. By doing this, we will show that this relationship
between the Lie algebra and one-parameter subgroups is one-to-one.
The Exponential Map Revisited
Given a vector field V(p) on a manifold M, we have seen how we can (at least locally)
produce integral curves of V using the vector exponential map. Each point p in the
neighborhood of interest lies on exactly one such curve, and the velocity of the curve at that
point is exactly the vector given by V(p) at that point. To clarify any potential confusion, we
note that the transition from vectors to curves in M is carried out by the vector exponential
map, not to be confused with the matrix exponential used in (3.13), though we shall soon see
that they conveniently produce the same curve in a Lie group.
Computationally, we know that exp_vector{tV}, translating points along curves through g,
can be expressed in the following manner (written down in analogy with (1.13)):

[exp_vector(V)] xⁱʲ(g) = (1 + Vᵏˡ ∂/∂xᵏˡ + ½ Vᵏˡ ∂/∂xᵏˡ Vᵐⁿ ∂/∂xᵐⁿ + ...) xⁱʲ(g)    (3.15)
Problem 3.5 Show that when V is a left-invariant vector field and g is evaluated at the
identity,

[exp_vector(V)] xⁱʲ(g)|g=e = (1 + Ve + ½ Ve·Ve + ...)ⁱʲ    (3.16)

In other words,

exp_vector(V) = exp_matrix(Ve)    (3.17)
When V is a left-invariant field, the abstract exponential map from tangent vectors in
TeG to the manifold G is just the matrix exponential acting on the matrix-valued Lie algebra
element. Conceptually, the manifold structure of left-invariant fields mirrors the group
structure; the vector commutator is the same as the matrix commutator, and the vector
exponential map is just given by matrix exponentiation.
Figure 3.4 A left-invariant vector field on G produces a set of integral curves. The one that
passes through the identity is a one-parameter subgroup of G.
Given a left-invariant vector field V(p), we look at the integral curve passing through
the identity. Since the vector exponential map is the same as the matrix exponential map
given above, the curve generated in this fashion is always a one-parameter subgroup. Thus,
we have a natural correspondence between the Lie algebra and one-parameter subgroups.
We are almost ready to use this to generate the group, G.
First, we must appeal to a mathematical theorem known as the
Baker-Campbell-Hausdorff formula. We do not derive it entirely; we merely state it
conceptually. If A and B are matrices, then

exp(A)·exp(B) = exp{A + B + ½[A, B] + (1/12)[A, [A, B]] − (1/12)[B, [A, B]] + ...}    (3.18)
where “...” represents a series of more complicated commutators involving A and B. In
other words, if we can express two group elements as exponentials of Lie algebra elements,
then we can express their product as an exponential of a Lie algebra element (because the
algebra is closed under commutation), and furthermore it is possible to derive the group
multiplication structure from the commutation relations.
Problem 3.6 Explain why the “flavor” of (3.18) holds true. That is, expand both sides of the
following first-order approximation,

exp(A)·exp(B) ≈ exp(A + B)    (3.19)

and show that the second-order correction is exactly half the commutator of A with B. See
what happens at third order, and give a brief explanation of why commutator terms like
[A, [A, [...[A, B]]...]] will always provide the proper correction at any order.
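The second-order claim in problem 3.6 can also be checked numerically. A sketch (the shift matrices are illustrative, and the exponential is a hand-rolled truncated series so the block stays self-contained):

```python
import numpy as np

def expm_series(M, terms=30):
    """Truncated power series for the matrix exponential."""
    out, P = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

eps = 1e-2
A = eps * np.array([[0., 1, 0], [0, 0, 1], [0, 0, 0]])   # upper shift
B = eps * np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])   # lower shift
comm = A @ B - B @ A                                     # nonzero: A, B don't commute

lhs = expm_series(A) @ expm_series(B)
first = expm_series(A + B)                 # misses the second-order piece
second = expm_series(A + B + 0.5 * comm)   # BCH through second order

# The half-commutator correction improves the match by orders of magnitude.
assert np.linalg.norm(lhs - second) < np.linalg.norm(lhs - first)
assert np.linalg.norm(lhs - second) < 1e-6
```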
Imagine starting at the identity and generating one-parameter subgroups
corresponding to every element of the tangent space TeG. Extending these curves as far as
they will go, we will produce a connected subgroup of G. If an element h of G does not lie
in a one-parameter subgroup, we can use the exponential map to generate curves starting
from this point of G to produce another connected piece of G. These two pieces must be
disjoint, because otherwise it would be possible to get to h by two consecutive exponential
maps from the identity, which the Baker-Campbell-Hausdorff formula says can be written as
a single exponential map of an element of the Lie algebra. In other words, h does not lie in
the connected component of G. Thus, every group element in the connected component of a
Lie group lies in some one-parameter subgroup, and can therefore be expressed as g = exp{Ag}.
Problem 3.7 Use the steps in the above paragraph to prove the last statement, that every
element in the connected component of a Lie group lies in some one-parameter subgroup, and
hence can be directly associated with an element of the Lie algebra.
We should therefore be able to derive the group structure of G from the algebraic
structure of its Lie algebra. We can't quite get back the whole group, for reasons which will
be explained in a moment, but we've come fairly close.
Examples
We now uncover the Lie algebras of the matrix groups we've been referencing, by
expanding the exponential map to linear order. Why should we only be interested in
linear-order terms? Recall that when we developed the exponential map, we thought of
exp(A) as a large product of infinitesimal translations, [1 + ϵA]ᴺ. If we enforce the subgroup
conditions on A for a single infinitesimal translation, those conditions should then be
satisfied for a large product of translations.
1. G = GLnR. We can represent an element close to the identity, g(ϵ), like so:

g(ϵ) = 1 + ϵA + O(ϵ²), where A ∈ Ꮭ[G]    (3.20)

For sufficiently small ϵ, g(ϵ) is invertible no matter what A is, so A can be any n × n
matrix (including matrices that are not themselves invertible). Thus, glnR is the set of
all n × n matrices, with no restrictions.
2. G = SLnR. We have one restriction this time: det(g) = 1. We look to (3.20) to
determine how this manifests itself as a restriction on A. Since most of the matrix
elements are very small, the only term contributing to the determinant at this order is
the product along the diagonal:

det[g(ϵ)] = ∏ₖ₌₁ⁿ (1 + ϵAᵏᵏ) + O(ϵ²)    (3.21)

det[g(ϵ)] = 1 + ϵ ∑ₖ₌₁ⁿ Aᵏᵏ + O(ϵ²)    (3.22)

The second term is ϵ times the trace of A. Thus, the requirement that det(g) = 1 is equivalent
to tr(A) = 0. We find that slnR is the set of all traceless n × n matrices.
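The trace condition can be illustrated numerically via the standard identity det(exp A) = exp(tr A): a traceless matrix exponentiates to a unit-determinant one. A sketch (numpy and a hand-rolled series exponential are assumptions for illustration):

```python
import numpy as np

def expm_series(M, terms=40):
    """Truncated power series for the matrix exponential."""
    out, P = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
A = A - (np.trace(A) / 3) * np.eye(3)    # project onto traceless matrices

assert np.isclose(np.trace(A), 0.0)
assert np.isclose(np.linalg.det(expm_series(A)), 1.0)   # lands in SL(3, R)
```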
3. G = SOn: gᵀ·g = 1.

gᵀ·g = (1 + ϵAᵀ)·(1 + ϵA) + O(ϵ²)    (3.23)

gᵀ·g = 1 + ϵ(Aᵀ + A) + O(ϵ²)    (3.24)

In other words, the subgroup restriction on the Lie algebra is Aᵀ + A = 0; son is the set
of antisymmetric n × n matrices.
4. G = On
It should come as no surprise that this group has the same Lie algebra as SOn, since
they locally look the same. SOn is just the component of On which is
connected to the identity. Since the exponential map can only map to points which
can be reached in infinitesimal increments, it can only reach points in SOn. This is
one of the ways that two distinct Lie groups can have the same Lie algebra. Note that
antisymmetry implies tracelessness of the Lie algebra elements; no additional
restriction is made at this level.
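A numeric sketch of items 3 and 4 (numpy and the random matrix are assumptions for illustration): exponentiating an antisymmetric matrix yields an orthogonal matrix, and it always has determinant +1, i.e. it lands in SOn rather than the other component of On.

```python
import numpy as np

def expm_series(M, terms=40):
    """Truncated power series for the matrix exponential."""
    out, P = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
A = 0.5 * (A - A.T)                 # antisymmetric part: A^T = -A

g = expm_series(A)
assert np.allclose(g.T @ g, np.eye(3))          # orthogonal
assert np.isclose(np.linalg.det(g), 1.0)        # connected component: SO(3)
```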
5. G = GLnC
It should be clear that glnC is the set of all n × n complex matrices, with no
restrictions.
6. G = Un
We can find this Lie algebra in analogy with our treatment of SOn, by replacing all
transposes with hermitian conjugates. We find that un is the set of all n × n
anti-hermitian matrices, A† = −A. Physicists often relabel these as A = iB, where B is a
hermitian matrix, B† = B.
7. G = SUn
The additional unit-determinant requirement means that the Lie algebra matrices are
traceless, so that sun is the set of all traceless anti-hermitian matrices (or, if you're a
physicist, the set of traceless hermitian matrices).
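These last two examples can be checked the same way. A sketch (numpy and the random matrix are illustrative assumptions): a traceless anti-hermitian A exponentiates to a special unitary matrix.

```python
import numpy as np

def expm_series(M, terms=40):
    """Truncated power series for the matrix exponential (complex)."""
    out = np.eye(M.shape[0], dtype=complex)
    P = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

rng = np.random.default_rng(5)
Z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A = 0.5 * (Z - Z.conj().T)               # anti-hermitian: A^dagger = -A
A = A - (np.trace(A) / 2) * np.eye(2)    # make it traceless as well

U = expm_series(A)
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(U), 1.0)        # unit determinant: SU(2)
```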
The Special Case G = SU₂
The set of all 2 × 2 traceless, anti-hermitian matrices is a three-dimensional vector
space, which can be spanned by the basis {ek}:

e₁ = ½ [0 i; i 0],  e₂ = ½ [0 −1; 1 0],  e₃ = ½ [i 0; 0 −i]    (3.25)

These are related (up to signs) to the Pauli matrices {σk} by ek = iσk/2 (the Pauli matrices are
more commonly used by physicists).
Problem 3.8 Check that the Lie algebra of SU₂ is summarized by the commutation relation:

[eᵢ, eⱼ] = ϵᵢⱼₖ eₖ    (3.26)

Thus, the structure constants of this Lie algebra are given by the totally antisymmetric epsilon
tensor, cabc = ϵabc. Also check the following useful relations:

e₁² = e₂² = e₃² = −¼    (3.27)

eᵢ·eⱼ = −eⱼ·eᵢ when i ≠ j    (3.28)
Note that the multiplicative structure of this Lie algebra is very similar to that of the
quaternions.
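The relations of problem 3.8 are easy to verify numerically. A sketch using the basis (3.25) (numpy is an assumption for illustration):

```python
import numpy as np

# The basis (3.25) of su(2)
e = [0.5 * np.array([[0, 1j], [1j, 0]]),
     0.5 * np.array([[0, -1], [1, 0]], dtype=complex),
     0.5 * np.array([[1j, 0], [0, -1j]])]

def comm(a, b):
    return a @ b - b @ a

# Totally antisymmetric epsilon tensor
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    assert np.allclose(e[i] @ e[i], -0.25 * np.eye(2))        # (3.27)
    for j in range(3):
        expected = sum(eps[i, j, k] * e[k] for k in range(3))
        assert np.allclose(comm(e[i], e[j]), expected)        # (3.26)
        if i != j:
            assert np.allclose(e[i] @ e[j], -(e[j] @ e[i]))   # (3.28)
```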
We can use the {ek} to discover the manifold structure of SU₂. Look at the set of
matrices that can be expressed as a real linear combination of the {ek} and the identity:

A = 2x·e₁ + 2y·e₂ + 2z·e₃ + w·1    (3.29)

where the factors of two were added for later convenience. This set is a group if we omit
matrices with zero determinant. This group is larger than SU₂, as is readily seen by noting it
is four-dimensional, while SU₂ is three-dimensional. We can restrict ourselves to the
subgroup SU₂ by setting up a proper relationship between the coefficients (x, y, z, w). The
relationship can be determined by requiring A†·A = 1.
Problem 3.9 Using (3.29) along with the relations (3.26)-(3.28), show that the unitarity of A
implies

x² + y² + z² + w² = 1    (3.30)
We restrict ourselves from what is essentially R⁴ with coordinates (x,y,z,w) to the
three-dimensional subgroup SU₂, using the condition (3.30). Therefore, the manifold SU₂ is
topologically a three-sphere, S³.
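A numeric sketch of this correspondence (numpy and the random point are illustrative assumptions): any (x, y, z, w) on the unit three-sphere yields, via (3.29), a matrix satisfying the SU₂ conditions.

```python
import numpy as np

# The basis (3.25) of su(2)
e1 = 0.5 * np.array([[0, 1j], [1j, 0]])
e2 = 0.5 * np.array([[0, -1], [1, 0]], dtype=complex)
e3 = 0.5 * np.array([[1j, 0], [0, -1j]])

rng = np.random.default_rng(6)
v = rng.standard_normal(4)
x, y, z, w = v / np.linalg.norm(v)       # enforce x^2 + y^2 + z^2 + w^2 = 1

# A = 2x e1 + 2y e2 + 2z e3 + w 1, as in (3.29)
A = 2*x*e1 + 2*y*e2 + 2*z*e3 + w*np.eye(2)

assert np.allclose(A.conj().T @ A, np.eye(2))   # unitarity, as in (3.30)
assert np.isclose(np.linalg.det(A), 1.0)        # det A = x^2+y^2+z^2+w^2 = 1
```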
3.4 The Rest of the Lie Algebra Story
We've seen that a Lie algebra can be produced from a Lie group. Can distinct Lie
groups give rise to the same Lie algebra? The answer is yes, for a couple of reasons. First,
there is the case that G might be disconnected, as with On (example 4). The Lie
algebra of SOn is identical to that of On, half of which is unreachable through the
exponential map. There are more interesting examples, summed up with the following
claim:
If the connected Lie groups G₁, G₂, ..., Gᵢ all have the same Lie algebra, then amongst
all possible {Gk} there is one that is simply connected. Call this group G. Then all the other
{Gk} can be written in terms of G as a quotient group Gk = G/Dk, where Dk is a discrete
normal subgroup of G. As we saw in section 2.4, the homomorphism

φk: G → Gk = G/Dk    (3.31)

tells us how to construct Gk: take G and create an equivalence relation by identifying g ~
h·g, for all h ∈ Dk and all g ∈ G. This should at least make intuitive sense; if G and Gk have
the same multiplicative structure locally, but different global structure, they should be related
by a homomorphism. Since the manifold dimensionality is equal to the dimensionality of the
Lie algebra, the groups must be of the same manifold dimension, and thus the kernel of the
homomorphism must be a discrete group (of dimension zero). An example of this follows.
The Manifold Structure of SO₃
Recall that the Lie algebra of SO₃ is the set of 3 × 3 antisymmetric matrices (which are
automatically traceless). This set has dimension three (it had better, since that's the
dimensionality of the group). We can write down a basis for so₃:

L₁ = [0 0 0; 0 0 −1; 0 1 0],  L₂ = [0 0 1; 0 0 0; −1 0 0],  L₃ = [0 −1 0; 1 0 0; 0 0 0]    (3.32)
Problem 3.10 Check that these basis elements satisfy the commutation relation:

[Lᵢ, Lⱼ] = ϵᵢⱼₖ Lₖ    (3.33)

These are exactly the same commutation relations (3.26) we found for the Lie algebra of SU₂.
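A quick numeric check of (3.33), using the basis (3.32) (numpy is an assumption for illustration):

```python
import numpy as np

# The basis (3.32) of so(3)
L = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)]

# Totally antisymmetric epsilon tensor
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    for j in range(3):
        lhs = L[i] @ L[j] - L[j] @ L[i]
        rhs = sum(eps[i, j, k] * L[k] for k in range(3))
        assert np.allclose(lhs, rhs)        # [L_i, L_j] = eps_ijk L_k
```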
The implication of problem 3.10 is that the group structure of SO₃ is locally identical
to that of SU₂. To put it yet another way, there is an isomorphism between a neighborhood
of the identity in SO₃ and a neighborhood of the identity in SU₂. We now seek to find a
relationship between their global structures.
We can use the exponential map to find group elements in SO₃ corresponding to
group elements in SU₂. We look at the one-parameter subgroups generated by e₁ and L₁.
Problem 3.11 Show that the following formulas hold for the one-parameter subgroups
generated by the two equivalent Lie algebras:

exp{t e₁} = [cos(t/2) i sin(t/2); i sin(t/2) cos(t/2)]    (3.34)

exp{t L₁} = [1 0 0; 0 cos(t) −sin(t); 0 sin(t) cos(t)]    (3.35)
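Both formulas can be verified numerically with a truncated series exponential (a self-contained sketch; the parameter value is arbitrary):

```python
import numpy as np

def expm_series(M, terms=40):
    """Truncated power series for the matrix exponential (complex)."""
    out = np.eye(M.shape[0], dtype=complex)
    P = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

e1 = 0.5 * np.array([[0, 1j], [1j, 0]])                       # from (3.25)
L1 = np.array([[0, 0, 0], [0, 0, -1.], [0, 1., 0]])           # from (3.32)

t = 1.1
U = expm_series(t * e1)
expected_U = np.array([[np.cos(t/2), 1j*np.sin(t/2)],
                       [1j*np.sin(t/2), np.cos(t/2)]])        # (3.34)
assert np.allclose(U, expected_U)

R = expm_series(t * L1)
expected_R = np.array([[1, 0, 0],
                       [0, np.cos(t), -np.sin(t)],
                       [0, np.sin(t),  np.cos(t)]])           # (3.35)
assert np.allclose(R, expected_R)
```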
We know that the exponential map gives us the same group structure when it acts on
the same Lie algebra. Thus, we can write down a homomorphism between these groups,
based on equating the two exponential maps. Symbolically,

[1 0 0; 0 cos(t) −sin(t); 0 sin(t) cos(t)] ⇔ [cos(t/2) i sin(t/2); i sin(t/2) cos(t/2)]    (3.36)
Defining the homomorphism in a more rigorous manner would entail finding the Lie
algebra element corresponding to a given group element in SU₂, then using the
isomorphism ek ↔ Lk, then exponentiating again. Using the relationship (3.36), we can finally
compare the global structures of these two groups, when viewed as manifolds. Notice what
happens when we send t → t + 2π. For the group element R ∈ SO₃, R → R. However, for
the group element U ∈ SU₂, U → −U. In other words, this mapping is two-to-one. You need
to travel twice the parameter distance in SU₂ to get back to the same point. This
equivalence is a two-to-one quotient map. To put it another way,

SO₃ ≅ SU₂/Z₂    (3.37)
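The two-to-one behavior can be seen directly in code. A sketch (numpy and a hand-rolled series exponential are assumptions for illustration): advancing the parameter by 2π returns the SO₃ element to itself but flips the sign of the SU₂ element.

```python
import numpy as np

def expm_series(M, terms=60):
    """Truncated power series for the matrix exponential (complex)."""
    out = np.eye(M.shape[0], dtype=complex)
    P = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

e1 = 0.5 * np.array([[0, 1j], [1j, 0]])                       # from (3.25)
L1 = np.array([[0, 0, 0], [0, 0, -1.], [0, 1., 0]])           # from (3.32)

t = 0.4
U_t, U_shift = expm_series(t * e1), expm_series((t + 2*np.pi) * e1)
R_t, R_shift = expm_series(t * L1), expm_series((t + 2*np.pi) * L1)

assert np.allclose(R_shift, R_t)     # SO(3): back to the same element
assert np.allclose(U_shift, -U_t)    # SU(2): picks up a sign
```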
Since SU₂ ≅ S³ is the three-sphere, SO₃ is diffeomorphic to the three-sphere after
identifying antipodal points. You may recognize this space as RP³, three-dimensional
projective space:

SO₃ ≅ RP³    (3.38)
Figure 3.5 From a single Lie algebra, we can derive two separate Lie groups. They are related by a
two-to-one homomorphism.
We start with a single Lie algebra, and from this, we can produce two separate groups:
SU₂ and SO₃. Since the entirety of each group can be expressed in terms of the exponential
map, they are implicitly related by a two-to-one homomorphism. Note that the quotient
map works simultaneously in terms of group structure and topological structure.
Problem 3.12 Show that the Lie algebra of U₁ is the same as that of the additive real
numbers. Given this, there must be a homomorphism between these two groups (since R is
simply connected). Find this map.
3.5 Realizations and Representations
Throughout this discussion, we've interchangeably viewed Lie groups from two
different perspectives:
• On the abstract level, where Lie groups are manifolds with a given multiplication law
• Via concrete parameterizations of points in a Lie group, using matrices (for which the
group multiplication law becomes matrix multiplication)
We should briefly flesh out the second of these perspectives, as we've been using this
description without really identifying it. To do so, we need to revisit a few group theory
concepts:
First, a realization of a Lie group G is given by the action of G on a manifold M. When
we speak specifically of Lie groups, the action of G on M is given by a differentiable map
G × M → M which can be symbolically written (as in section 2.5) as “g·p”, where g ∈ G,
p ∈ M. As we saw in chapter 2, a group action on a set must satisfy the following axioms (in
addition to the differentiability conditions we have just imposed):

e·p = p (where e is the identity in G)    (3.39)

(g₁·g₂)·p = g₁·(g₂·p)    (3.40)

Each element of G is “realized” as a particular transformation of M. For example, take G =
SO₃, M = S². Each element in SO₃ can be realized as a rotation, which maps points on the
sphere to other points on the sphere. Recall from chapter 2 that an action is faithful if it
can be thought of as a one-to-one mapping from G to a set of smooth transformations of the
points of M. That is, it is faithful if

g₁·p ≠ g₂·p for some p ∈ M whenever g₁ ≠ g₂    (3.41)

When this is the case, we also say that the realization is faithful.
When the manifold is a vector space, and the realization is a linear action, it is known
as a representation. If the vector space has dimension n, then it is said to be an ndimensional representation. There is a concrete and useful way of thinking about
representations. We can always choose a basis for V, {ei}. Then since the action of g on a
basis vector gives us another vector expressible as a linear combination of basis vectors, the
action of g on the {ei} can be summarized by a matrix:
j
(3.42)
g⋅ei =M i  g  e j
i
We can use this basis to express any vector X = X ei. Then the action of g G on X V is
given by
j
(3.43)
g⋅X =Y =Y e j
i
j
and the relationship between the {X } and the {Y } can also be written matrix form:
j
j
i
(3.44)
Y =M i  g  X
We have an explicit representation of every g G via an n × n matrix. More formally,
when a lie group G acts on an n-dimensional vector space V, we get a homomorphism from
G into a collection of n × n matrices. We will often informally refer to the homomorphism
itself as the “representation”, since it goes hand in hand with linear action on a vector space.
Note that although we defined On, SOn, Un, and SUn in terms of n × n matrices, we can have
m-dimensional representations of these groups.
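The recipe in (3.42)–(3.44) can be checked numerically. The following is a minimal sketch in Python with NumPy (not from the text; the action `act` and the angle `theta` are hypothetical stand-ins for a linear group action), recovering the representation matrix of a group element column by column from its action on the basis vectors:

```python
import numpy as np

# A hypothetical linear action of a group element on R^2:
# rotation of the plane by a fixed angle theta.
theta = 0.3

def act(g_theta, v):
    """The linear action g . v, here a rotation by g_theta radians."""
    c, s = np.cos(g_theta), np.sin(g_theta)
    return np.array([c * v[0] - s * v[1],
                     s * v[0] + c * v[1]])

# Recover the matrix M(g) column by column from the action on the basis
# vectors {e_i}, exactly as in g . e_i = M_i^j(g) e_j.
basis = np.eye(2)
M = np.column_stack([act(theta, e) for e in basis])

# The matrix now reproduces the action on any vector X = X^i e_i.
X = np.array([1.0, 2.0])
assert np.allclose(M @ X, act(theta, X))
```

The same column-by-column extraction works for any linear action, in any dimension.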
Examples
We could easily write down a few trivial examples of representations. For instance,
there is the trivial n-dimensional representation, where G acts by not changing anything. Of
course, for a nontrivial group, this is not a faithful representation. It can be viewed as a
homomorphism from G to the trivial group of the n × n identity matrix.
For another simple example, take G = a collection of n × n matrices. We can
construct a 2n-dimensional representation by mapping each n × n matrix to a block-diagonal 2n × 2n matrix:
M ⇢ ⎛ M  0 ⎞
    ⎝ 0  M ⎠    (3.45)
We could also increase the dimensionality of the representation by augmenting the matrix
with additional rows and columns in a trivial sense:
M ⇢ ⎛ M  0  0 ⎞
    ⎜ 0  1  0 ⎟
    ⎝ 0  0  1 ⎠    (3.46)
but these are all rather silly examples.
For a more interesting example, take G = SU₂. It has a three-dimensional
representation, in the form of SO₃, which we found earlier. We've already shown that there
is a two-to-one homomorphism from SU₂ to SO₃. Thus, unit-determinant orthogonal
matrices can be considered either an unfaithful representation of SU₂ or the defining
representation of SO₃.
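The two-to-one homomorphism can be exhibited numerically. Here is a sketch in Python with NumPy (an illustration, not from the text), assuming the standard Pauli-matrix construction Rᵢⱼ = ½ tr(σᵢ U σⱼ U†) for the SO₃ image of U ∈ SU₂:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def su2_to_so3(U):
    """Map U in SU(2) to R in SO(3) via R_ij = (1/2) tr(sigma_i U sigma_j U^dagger)."""
    R = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = 0.5 * np.trace(paulis[i] @ U @ paulis[j] @ U.conj().T).real
    return R

# A rotation by angle theta about the z-axis, as an SU(2) element
theta = 0.7
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sz

R = su2_to_so3(U)
# U and -U land on the same rotation: the homomorphism is two-to-one
assert np.allclose(su2_to_so3(-U), R)
# and R is a genuine rotation matrix: orthogonal with determinant +1
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```

Since U appears twice in the formula for R, the overall sign of U cancels, which is exactly why SO₃ is an unfaithful representation of SU₂.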
Problem 3.13 Show that the matrix homomorphism associated with a faithful representation
is an isomorphism. In other words, one way to check that a representation is faithful is to see
whether any nontrivial group elements are mapped to the identity matrix.
At the Level of Lie Algebras
Representations can also be described in the lie algebra of a group. A d-dimensional
representation of a lie algebra g is a map Γ from elements Ai of g to d × d matrices, Γ(Ai).
Γ : ℊ → {d × d matrices}  (3.47)
This map must preserve the vector structure of g by being a linear operator, but it must also
preserve the algebraic structure, meaning
[Γ(Aᵢ), Γ(Aⱼ)] = Γ([Aᵢ, Aⱼ]) = cᵢⱼᵏ Γ(Aₖ)  (3.48)
The n-dimensional matrices we derived earlier for un, sun, son, etc. can be considered the
defining representations of these lie algebras, but the basic vector and algebraic structure is
independent of representation.
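The algebraic condition (3.48) can be verified directly for a concrete case. Below is a sketch in Python with NumPy (an illustration, not from the text), using the standard 2-dimensional representation of su₂ with basis Aₖ = −(i/2)σₖ and structure constants cᵢⱼₖ = εᵢⱼₖ:

```python
import numpy as np

# A 2-dimensional representation of su(2): A_k = -(i/2) sigma_k
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = [-0.5j * s for s in (sx, sy, sz)]

def comm(X, Y):
    return X @ Y - Y @ X

# Structure constants of su(2): c_ijk = epsilon_ijk, so [A_i, A_j] = eps_ijk A_k
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Check the representation condition (3.48) for every pair of basis elements
for i in range(3):
    for j in range(3):
        rhs = sum(eps[i, j, k] * A[k] for k in range(3))
        assert np.allclose(comm(A[i], A[j]), rhs)
```

Any other representation of the same algebra (say, the 3-dimensional one built from so₃ matrices) must satisfy the identical commutation relations, with the identical structure constants.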
It may be possible to derive a group representation from a lie algebra representation,
and vice versa, in the same way we moved between the group elements and lie algebra
elements before: using the exponential map. Let Φ(g) be a representation for G. Then Γ(A)
is its manifestation in the lie algebra if:
Φ(exp{A}) = exp{Γ(A)}  (3.49)
for every A in the lie algebra.
Equivalent Representations
Let Φ(g) be a d-dimensional representation for G. Consider a new representation,
Φʹ(g), given by conjugating every Φ(g) by a nonsingular d × d matrix:
Φʹ(g) = S · Φ(g) · S⁻¹  (3.50)
Since conjugation is an automorphism, Φʹ(g) has the same group structure as Φ, and since
the conjugation operation is one-to-one, Φʹ is faithful whenever Φ is. Φʹ is said to be
“equivalent” to Φ. We don't really want to think of this as a “different” representation, as it
is, in essence, just a change of basis from the original representation.
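That conjugation preserves the group structure is easy to confirm numerically. A small sketch in Python with NumPy (an illustration, not from the text; the change-of-basis matrix `S` is an arbitrary invertible choice):

```python
import numpy as np

def rot(theta):
    """The defining 2-dimensional representation of SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# An arbitrary nonsingular change-of-basis matrix S (any invertible S works)
S = np.array([[2.0, 1.0], [1.0, 1.0]])
S_inv = np.linalg.inv(S)

def rot_equiv(theta):
    """The equivalent representation: conjugation of rot by S."""
    return S @ rot(theta) @ S_inv

a, b = 0.4, 1.1
# The conjugated matrices multiply exactly like the originals,
# so rot_equiv is again a representation of the same group.
assert np.allclose(rot_equiv(a) @ rot_equiv(b), rot_equiv(a + b))
```

The inner factors S⁻¹·S cancel in every product, which is the whole content of equivalence: same representation, different basis.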
A representation Φ of G is said to be completely reducible if it is equivalent to a
block-diagonal representation,
S · Φ · S⁻¹ = ⎛ Φ₁  0  ⋯  0 ⎞
              ⎜ 0   Φ₂ ⋯  0 ⎟
              ⎜ ⋮   ⋮  ⋱   ⋮ ⎟
              ⎝ 0   0  ⋯  Φₙ ⎠    (3.51)
where each of the {Φk} is a representation of G. In other words, Φ can be built from
smaller-dimensional representations, using the direct sum operation (which we are about to
describe). If a representation cannot be put into block-diagonal form, it is irreducible. It is
often an interesting and useful question in group theory to exhaust the list of irreducible
representations of a given group.
Building Representations from Other Representations
There are two basic operations we can use to produce a representation Φ from two
given representations Φ₁ and Φ₂:
1. The Direct Sum Representation
Given two representations of a group, Φ₁(g) and Φ₂(g), we can create another
representation by writing the two representations in block-diagonal form:
Φ₁ ⊕ Φ₂ = ⎛ Φ₁  0 ⎞
           ⎝ 0  Φ₂ ⎠    (3.52)
This is a faithful representation of both Φ₁ and Φ₂, since we are not changing any of
the matrix multiplication. This gives us a (d₁ + d₂)-dimensional representation acting
on the vector space V₁ ⊕ V₂ (where now “⊕” refers to a direct sum of vector spaces).
2. Direct Product Representations
We can also define an action on the product vector space, V₁ ⊗ V₂. Given a basis {eᵢ}
and {fⱼ} for V₁ and V₂, respectively, we can write a given element of V₁ ⊗ V₂ as
aⁱʲ(eᵢ ⊗ fⱼ). In other words, {eᵢ ⊗ fⱼ} forms a basis for the direct product space.
If V₁ has a representation Φ₁, and V₂ has a representation Φ₂, we can act on the direct
product space by acting on the first basis vector with Φ₁ and the second with Φ₂:
ΦV₁⊗V₂[Wⁱʲ eᵢ ⊗ fⱼ] = Wⁱʲ ΦV₁⊗V₂[eᵢ ⊗ fⱼ] = Wⁱʲ Φ₁[eᵢ] ⊗ Φ₂[fⱼ]  (3.53)
The result is a d₁ × d₁ matrix times a d₂ × d₂ matrix. This can be thought of as a
single large matrix, whose rows and columns are specified by two indices each: the
(ij),(kl)th component can be calculated by multiplying the (ik)th component of Φ₁ with the
(jl)th component of Φ₂. This matrix representation will have dimensions d₁d₂ ×
d₁d₂.
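The “single large matrix” described above is exactly the Kronecker product. A short sketch in Python with NumPy (an illustration, not from the text; np.kron uses precisely the (ij),(kl) index ordering just described):

```python
import numpy as np

def rot(theta):
    """The defining 2-dimensional representation of SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.5, 0.9
P1, P2 = rot(a), rot(2 * a)   # two representations of the same group element

# The direct product representation is the Kronecker product: its (ij),(kl)
# entry is P1[i, k] * P2[j, l], giving a (d1*d2) x (d1*d2) matrix.
big = np.kron(P1, P2)

# It acts on V1 (x) V2: on a product vector x (x) y it gives (P1 x) (x) (P2 y)
x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.allclose(big @ np.kron(x, y), np.kron(P1 @ x, P2 @ y))

# And it is again a representation: group multiplication is preserved
assert np.allclose(np.kron(rot(a), rot(2 * a)) @ np.kron(rot(b), rot(2 * b)),
                   np.kron(rot(a + b), rot(2 * (a + b))))
```

The last assertion is the mixed-product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), which is what makes the direct product of representations a representation.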
The Adjoint Representation
As stated before, we can define an action of G on itself by conjugation. This specifies
a realization, which we will call “Ad”.
Ad g :G  G
(3.54)
−1
h ⇢ g⋅h⋅g
We can extract from this realization an action of G on its tangent space, given by the
pushforward map:
Adg∗ : Th G → T(g·h·g⁻¹) G  (3.55)
Of course, this isn't a well-defined action on a set unless it maps the same space to itself.
Fortunately, conjugation always maps the identity to itself, so if we take h = e, the result is a
bona fide realization.
Adg∗ : Te G → Te G  (3.56)
In fact, since TeG is a vector space, Adg* is a representation of G. It is fairly easy to show
that the pushforward map also manifests itself via conjugation, but now acting on lie algebra
elements.
Problem 3.14 Show that Adg* conjugates the lie algebra:
Adg∗ A = g · A · g⁻¹  (3.57)
Conjugation in this sense is now a representation, not just a realization. In other words, it is
theoretically possible to construct matrices {Mij(g)} which act on a lie algebra basis {Ak}, via:
g · Aᵢ · g⁻¹ = Mᵢⱼ(g) Aⱼ
This representation {Mij(g)} is known as the adjoint representation.
Problem 3.15 The adjoint representation has a manifestation in the lie algebra of G. To
determine this manifestation, first set g = exp{Aa} and show:
Ad g ∗ Ab =Ab[ Aa , Ab ]½[ Aa ,[ Aa , Ab ]]...
(3.58)
Since the commutation relations are completely specified by the structure constants, show that
this implies
[Adg∗]bc = [exp{Tᵃ}]bc  (3.59)
where the {Tᵃ} are matrices built out of the structure constants,
[Tᵃ]bc = cᵃbc  (3.60)
The {Tᵃ} form the adjoint representation, at the level of the lie algebra. We use the lowercase
notation “ad” to describe the lie algebra representation:
Ad exp{A} = exp{ad A}  (3.61)
The representation at this level is still a set of d × d matrices, which act on the d-dimensional
lie algebra. Show that its action is given by commutation:
ad X Y =[ X , Y ]
(3.62)
3.6 Summary and Application
The two mathematical concepts of a group and a manifold have merged to form a lie
group. Useful examples of lie groups can be formed as subgroups of GLnR or GLnC, in
specific matrix representations. When we look at the tangent space of a lie group, we find
that the group structure of the manifold endows a multiplication law onto its tangent space,
compatible with its additive group structure as a vector space. Thus, the tangent space of a
lie group forms an algebra. The multiplication law is given specifically by the lie bracket of
left-invariant vector fields, which are in linear one-to-one correspondence with tangent
vectors at the identity.
The lie group structure therefore fixes the algebra structure, given by the structure
constants cijk. The lie algebra can be used to reconstruct the group, or at least part of it. The
exponential map reconstructs the manifold at the level of points, and the multiplication law
of the group can be derived from the commutation relations, whenever group elements can
be expressed as exponentials of lie algebra elements:
exp{A} · exp{B} = exp{A + B + ½[A, B] + ⋯}  (3.63)
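Formula (3.63) can be checked exactly in a case where the series terminates. For strictly upper-triangular 3 × 3 matrices (the Heisenberg algebra), [A, B] commutes with both A and B, so all higher terms vanish. A sketch in Python with NumPy (an illustration, not from the text; `expm` is a hand-rolled power-series exponential):

```python
import numpy as np
from math import factorial

def expm(X, terms=20):
    """Matrix exponential via its power series."""
    return sum(np.linalg.matrix_power(X, k) / factorial(k) for k in range(terms))

def comm(X, Y):
    return X @ Y - Y @ X

# Strictly upper-triangular (Heisenberg) matrices: [A, B] commutes with both,
# so the Baker-Campbell-Hausdorff series stops after the (1/2)[A, B] term.
A = np.array([[0.0, 1.3, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -0.7],
              [0.0, 0.0, 0.0]])

# exp{A} exp{B} = exp{A + B + (1/2)[A, B]}, exactly, for this algebra
lhs = expm(A) @ expm(B)
rhs = expm(A + B + 0.5 * comm(A, B))
assert np.allclose(lhs, rhs)
```

For a general lie algebra the series does not terminate, but the same commutator data still determines the product of exponentials near the identity.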
Instead of considering a lie group as an abstract mathematical object, one can
realize it literally by its action on a manifold. In the case that this manifold is a vector
space and the action is linear, the realization becomes a representation. Representations are
usually specified by a homomorphism from group elements to matrices. One choice of
representation that appears often in physics is the adjoint representation, where the lie
group acts on its own lie algebra, by conjugation. At the level of the lie algebra, the adjoint
representation can be viewed as the action of the lie algebra on itself by commutation.
In physics, lie groups can manifest themselves as symmetries. For example, three-dimensional space exhibits the symmetry group SO₃ as the set of rotations in space.
Quantum Mechanics tells us that these symmetries also manifest themselves at the level of
particles. For example, electrons transform as two-component spinors under rotations. In
fact, it is the group SU₂ under which electrons transform. It is possible (and, in fact, more
mathematically natural) to think of the SO₃ transformations with which we are so familiar as
a mere representation of the fundamental transformation that is being performed; that of
SU₂. This is because the transformations are an unfaithful representation of SU₂; SO₃ only
captures half of the transformation group. This is most apparent in the fact that the electron
wavefunction picks up a minus sign when it undergoes a rotation of 2π – something which
appears quite peculiar, unless we can believe that physical rotations are fundamentally
represented by SU₂.
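The 2π sign flip is immediate to verify. A final sketch in Python with NumPy (an illustration, not from the text), using the standard spin-½ rotation exp{−iθσz/2} about the z-axis:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_half_rotation(theta):
    """SU(2) rotation of a two-component spinor about the z-axis."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sz

# A rotation by 2*pi is not the identity on spinors -- it is minus the identity
assert np.allclose(spin_half_rotation(2 * np.pi), -np.eye(2))
# Only after a rotation by 4*pi does the spinor return to itself
assert np.allclose(spin_half_rotation(4 * np.pi), np.eye(2))
```

Under the two-to-one map to SO₃, both ±1 in SU₂ project to the identity rotation, which is exactly the "half of the transformation group" that SO₃ fails to capture.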