M3/4/5P12 Group Representation Theory
Rebecca Bellovin
April 10, 2017
1 Administrivia
Comments, complaints, and corrections:
E-mail me at [email protected]. The course website is wwwf.imperial.ac.uk/
~rbellovi/teaching/m3p12.html; there will be no blackboard page.
Office hours:
Thursdays 4:30-5:30, starting January 19.
Problem sessions:
Problem sessions will be on the Tuesday session of odd-numbered weeks, starting on January
24.
Other reading:
Previous versions of this course have been taught by James Newton (https://nms.kcl.
ac.uk/james.newton/M3P12.html), Ed Segal (http://wwwf.imperial.ac.uk/~epsegal/
repthy.html), and Matthew Towers (https://sites.google.com/site/matthewtowers/
m3p12). Lecture notes and sample problem sheets are available from their websites; we will
cover similar material, though somewhat rearranged.
Recommended reference books include:
• G. James and M. Liebeck, Representations and Characters of Groups.
• J.-P. Serre, Linear Representations of Finite Groups.
• J.L. Alperin, Local Representation Theory.
2 Introduction to representations
2.1 Motivation
Our ultimate goal is to study groups (and in this course, we will only study finite groups).
Recall:
Definition 2.1. A group G is a set equipped with an associative multiplication map G×G →
G such that there is an identity element e (i.e., e·g = g·e = g for all g ∈ G) and every element
has an inverse (i.e. for every g ∈ G, there is some g −1 ∈ G so that g · g −1 = g −1 · g = e).
Groups generally show up as symmetries of objects we are interested in, but it is difficult
to study them directly. For example, it is much more fruitful to view D8 as the symmetry
group of the square than to write down its multiplication table.
Theorem 2.2 (Burnside). Let G be a group with |G| = p^a q^b for p, q prime numbers and
a + b ≥ 2. Then G has a non-trivial proper normal subgroup.
The easiest proof is via techniques from representation theory, but it is not obvious how to
attack it by “bare hands”. The proof requires a bit of algebraic number theory, so we may
not be able to cover it, but the proof is in James and Liebeck’s book.
There are also applications to physics and chemistry: J.-P. Serre wrote his book because his
wife was a chemist.
The structure of the course will be:
1. Representations: definitions and basic structure theory
2. Character theory
3. Group algebras
Since we understand linear algebra much better than abstract group theory, we will attempt
to turn groups into linear algebra.
Informally, a representation will be a way of writing elements of a group as matrices. For
example, let G = C4 = {e, g, g², g³}, with g⁴ = e. Consider also the 2 × 2 matrices
I = ( 1 0 ; 0 1 ),  M = ( 0 −1 ; 1 0 ),  M² = ( −1 0 ; 0 −1 ),  M³ = ( 0 1 ; −1 0 )
Since M⁴ = I, these matrices form a finite cyclic group of order 4, and g ↦ M defines an
isomorphism with C4. Thus, we have embedded C4 as a subgroup of GL2 (C).
We further observe that M can be diagonalized: its characteristic polynomial is det(λI −
M) = λ² + 1, which has two distinct roots, ±i. The eigenspaces are generated by ( 1 ; −i )
and ( 1 ; i ), so after changing basis, we see that our embedding is actually built out of two
embeddings C4 → C× given by g ↦ i and g ↦ −i.
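These computations are easy to verify numerically; here is a minimal sketch in Python with NumPy (our own illustration, not part of the original notes):

```python
import numpy as np

# The matrix M representing the generator g of C4 from the example above.
M = np.array([[0, -1],
              [1, 0]])

# g^4 = e forces M^4 = I.
assert np.array_equal(np.linalg.matrix_power(M, 4), np.eye(2, dtype=int))

# det(lambda*I - M) = lambda^2 + 1 has the two roots +/- i ...
assert np.allclose(sorted(np.linalg.eigvals(M), key=lambda z: z.imag), [-1j, 1j])

# ... and ( 1 ; -i ), ( 1 ; i ) are eigenvectors, with eigenvalues i and -i.
v_plus, v_minus = np.array([1, -1j]), np.array([1, 1j])
assert np.allclose(M @ v_plus, 1j * v_plus)
assert np.allclose(M @ v_minus, -1j * v_minus)
```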
2.2 Definitions
2.2.1 First definitions
Recall from last time: We defined a group homomorphism C4 → GL2 (C) by sending a
generator g ↦ M = ( 0 −1 ; 1 0 ). If P = ( i −1 ; i 1 ), then P M P⁻¹ = ( i 0 ; 0 −i ).
Definition 2.3 (Preliminary version). Let G be a group. A representation of G is a homomorphism ρ : G → GLd (C) (so ρ(gh) = ρ(g)ρ(h)). That is, for every g ∈ G we have a
matrix ρ(g), and ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G. We call d the dimension of ρ.
Two representations ρ, ρ′ : G → GL(Cd ) are equivalent if there is some matrix P ∈ GL(Cd )
such that ρ′(g) = P ρ(g)P⁻¹ for all g ∈ G.
Observe that our definition implies that if e ∈ G is the identity, ρ(e) = Id . For any g ∈ G,
we further have that ρ(g −1 ) = ρ(g)−1 .
Example 2.4.
1. Suppose that ρ(g) = Id for all g ∈ G. This is called the trivial representation.
2. Let d = |G| and let the elements of the standard basis correspond to elements of
G. That is, choose some ordering of the elements of G and let {eg1 , . . . , egd } be the
standard basis of Cd . Define a representation by setting ρ(g)(eh ) = egh for all g, h ∈ G.
This is called the regular representation.
3. Let G = Cn , the cyclic group of order n with generator g, and let ρ : G → GL1 (C) be
defined by sending g to some nth root of unity ζ. For example, we could set ζ = e2πi/n .
But any choice of e2aπi/n works, as well. To be more concrete, if n = 4, we could send
g to i, −1, −i, or 1; the last case is the trivial 1-dimensional representation.
4. Let G = Sn , the group of permutations of n elements. Let {e1 , . . . , en } be the standard
basis of Cn ; we define a representation by setting ρ(g)(ei ) = eg(i) . This is called the
permutation representation.
5. More generally, let X be a set with a left-action of G; recall that this means there is an
“action map” G × X → X (which we write (g, x) 7→ g · x) such that (gh) · x = g · (h · x).
Let d = |X| and let elements of the standard basis of Cd correspond to elements of X;
write the basis {ex }x∈X . Then we define a representation ρ : G → GLd (C) by setting
ρ(g)(ex ) = eg·x .
If X = G, this construction recovers the regular representation. If G = Sn and
X = {1, . . . , n}, this recovers the permutation representation.
Note that we do not require a representation to be an injective homomorphism! In fact, for
the trivial representation, the kernel is all of G. For the 1-dimensional representations of Cn
we wrote down earlier, ρ is injective if and only if ζ is a primitive nth root of 1, that is,
ζ n = 1 and n is the smallest positive integer with this property. If ρ is injective, we say that
it is a faithful representation.
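The regular representation of Example 2.4 can be checked concretely on a computer. Below is a minimal sketch (Python with NumPy; the encoding of C3 by exponents is our own illustration, not part of the notes) realizing the regular representation of C3 by permutation matrices and verifying the homomorphism property:

```python
import numpy as np

# Encode C3 = {e, g, g^2} by exponents 0, 1, 2; multiplication is
# addition mod 3.
n = 3

def rho(k):
    """Regular representation: rho(g^k) sends the basis vector e_{g^j}
    to e_{g^{k+j}}."""
    mat = np.zeros((n, n), dtype=int)
    for j in range(n):
        mat[(k + j) % n, j] = 1  # column j has its single 1 in row k+j mod n
    return mat

# rho is a homomorphism: rho(g^a g^b) = rho(g^a) rho(g^b).
for a in range(n):
    for b in range(n):
        assert np.array_equal(rho((a + b) % n), rho(a) @ rho(b))

# The identity goes to the identity matrix, and this representation is
# faithful: distinct exponents give distinct matrices.
assert np.array_equal(rho(0), np.eye(n, dtype=int))
assert not np.array_equal(rho(1), rho(2))
```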
2.2.2 Cleaner definitions
Recall from linear algebra that every finite dimensional vector space has a basis, but a basis
is not unique. Moreover, if g ∈ G and ρ : G → GLd (C) is a representation, we can view ρ(g)
as an invertible linear transformation V → V , where V is a d-dimensional complex vector
space. Thus, it is cleaner to rewrite our definitions as follows:
Definition 2.5. A representation of G is a pair (V, ρ) where V is a finite-dimensional complex
vector space, and ρ : G → GL(V ) is a homomorphism.
Recall that GL(V ) := {f : V → V : f is an invertible linear transformation}.
If we pick a basis of V , we may write ρ in terms of matrices as before:
Choose a basis B = (b1 , . . . , bd ) of V . This choice of basis gives us an isomorphism Cd ≅ V ,
and therefore an isomorphism GLd (C) ≅ GL(V ). We therefore have a representation ρB :
G → GLd (C), in the sense of the previous lecture.
If we need to distinguish the two notions of representations, we will refer to homomorphisms
ρ : G → GLd (C) as matrix representations.
Lemma 2.6. Let ρ : G → GL(V ) be a representation, and let B = (b1 , . . . , bd ) and B′ =
(b′1 , . . . , b′d ) be bases of V . Then ρB and ρB′ are equivalent. Conversely, if ρ1 , ρ2 : G →
GLd (C) are equivalent matrix representations, there is a representation (V, ρ) such that ρ1
and ρ2 can be obtained by choosing bases B1 and B2 of V .
Proof. Let P denote the change-of-basis matrix taking B to B′, that is, P bi = b′i . Then for
every g ∈ G, ρB′ (g) = P ρB (g)P⁻¹, so ρB and ρB′ are equivalent.
For the converse, suppose that ρ1 and ρ2 are equivalent, and let P ∈ GLd (C) be a matrix
such that ρ2 (g) = P ρ1 (g)P⁻¹ for all g ∈ G. Let V be the underlying vector space of Cd .
Then we can think of ρ1 as a representation ρ : G → GL(V ) by forgetting about the standard
basis on Cd . If we let B1 denote the standard basis of Cd , then ρ1 = ρB1 (because we forgot
the standard basis and then remembered it again).
Now let B2 denote the basis of Cd given by the columns of P −1 . Then for every g ∈ G, the
matrix of ρ(g) with respect to B2 is given by P ρ1 (g)P −1 , as desired.
2.3 Homomorphisms of representations
Let (V, ρV ) and (W, ρW ) be representations of G.
Definition 2.7. A linear map f : V → W is G-linear if f ◦ ρV (g) = ρW (g) ◦ f for every
g ∈ G.
Recall that a map f : V → W is linear if f (av1 + bv2 ) = af (v1 ) + bf (v2 ) for a, b ∈ C and
v1 , v2 ∈ V .
In other words, for every element g ∈ G, we have a commutative square:

          ρV (g)
      V --------> V
      |           |
    f |           | f
      ↓           ↓
      W --------> W
          ρW (g)
Definition 2.8. Let (V, ρV ) and (W, ρW ) be representations of G. A map f : V → W is said
to be a homomorphism of representations if it is G-linear, and it is said to be an isomorphism
of representations if it is G-linear and invertible.
Exercise 2.9. Check that if f is an isomorphism of representations, so is f −1 .
Proposition 2.10. Let (V, ρV ) and (W, ρW ) be representations of G, and let BV and BW
be bases of V and W , respectively. If f : V → W is a linear map and [f ]BV ,BW denotes
the matrix representing f with respect to the chosen bases, then f is G-linear if and only if
[f ]BV ,BW ρV (g) = ρW (g)[f ]BV ,BW for all g ∈ G.
Proof. This follows by unwinding the definition; trace through the commutative diagram
above, and write down the matrix associated to each map.
Corollary 2.11. Let (V, ρV ) and (W, ρW ) be d-dimensional representations of G, and let BV
and BW be bases of V and W , respectively. Then (V, ρV ) and (W, ρW ) are isomorphic if and
only if ρV,BV and ρW,BW are equivalent matrix representations.
When we discussed the regular representation, there was some question about how to enumerate elements of G. It doesn’t matter, and the reason it doesn’t matter is because re-ordering
the elements of G is just changing the basis of the underlying vector space. So different
orderings for the elements of G give different matrix representations but isomorphic abstract
representations.
2.4 Direct sums and indecomposable representations
Let us begin with an example, namely the regular representation of C2 = {e, g}. This is a
2-dimensional representation (Vreg , ρreg ), and we choose the basis (be , bg ); the action of C2 is
given by ρreg (g)(be ) = bg and ρreg (g)(bg ) = be .
We also have two 1-dimensional representations, (V0 , ρ0 ) and (V1 , ρ1 ), given in coordinates
by ρi (g) = (−1)i ∈ GL1 (C).
It turns out we can define homomorphisms (Vi , ρi ) → (Vreg , ρreg ):
• Choose a basis element v0 ∈ V0 , and define a linear transformation V0 → Vreg via
v0 7→ be + bg . Since ρreg (g)(be + bg ) = be + bg (exercise!), this map is C2 -linear.
• Choose a basis element v1 ∈ V1 , and define a linear transformation V1 → Vreg via
v1 ↦ be − bg . Since ρreg (g)(be − bg ) = bg − be = −(be − bg ) (exercise!), this map is C2 -linear.
Thus, we have a linear transformation V0 ⊕ V1 → Vreg , and since ⟨be + bg ⟩ ∩ ⟨be − bg ⟩ = {0},
it is an isomorphism.
We will see that we can define a representation (V0 ⊕ V1 , ρ0 ⊕ ρ1 ) so that this is actually an
isomorphism of representations.
Recall that if V, W are finite-dimensional complex vector spaces, their direct sum V ⊕ W
consists of ordered pairs (v, w) with v ∈ V, w ∈ W . Defining addition by (v1 , w1 )+(v2 , w2 ) :=
(v1 + v2 , w1 + w2 ) and scaling by a · (v, w) := (a · v, a · w) for a ∈ C makes V ⊕ W into another
finite-dimensional complex vector space.
Given finite-dimensional complex vector spaces V1 and V2 , we may define the direct sum
V1 ⊕ V2 as above. If we have linear transformations f1 : V1 → W and f2 : V2 → W , where W is
another finite-dimensional complex vector space, we may define a new linear transformation
f1 ⊕ f2 : V1 ⊕ V2 → W via (f1 ⊕ f2 )(v1 , v2 ) := f1 (v1 ) + f2 (v2 ).
If we are given linear transformations TV : V → V and TW : W → W , we may also define
a new linear transformation TV ⊕ TW : V ⊕ W → V ⊕ W by setting (TV ⊕ TW )(v, w) :=
(TV (v), TW (w)). This permits us to define the direct sum of two group representations:
Definition 2.12. Let (V, ρV ) and (W, ρW ) be representations of a finite group G. The direct
sum (V ⊕ W, ρV ⊕ ρW ) is defined by setting (ρV ⊕ ρW )(g) := ρV (g) ⊕ ρW (g) ∈ GL(V ⊕ W ).
Suppose that BV = (v1 , . . . , vdV ) and BW = (w1 , . . . , wdW ) are bases for V and W , respectively, and let ρV,BV and ρW,BW be the associated matrix representations. Then V ⊕ W is
(dV + dW )-dimensional, and
((v1 , 0), . . . , (vdV , 0), (0, w1 ), . . . , (0, wdW ))
is a basis. The matrix representation associated to ρV ⊕ ρW with respect to this basis is
given by
( ρV,BV (g)  0 ; 0  ρW,BW (g) )
for each g ∈ G.
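The block-diagonal description of ρV ⊕ ρW is easy to verify numerically. The following sketch (Python with NumPy; the helper name `direct_sum` is ours, not from the notes) builds the block matrix and checks compatibility with multiplication, which is exactly what makes ρV ⊕ ρW a homomorphism:

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal matrix ( A 0 ; 0 B ), as in the text above."""
    dA, dB = A.shape[0], B.shape[0]
    out = np.zeros((dA + dB, dA + dB), dtype=A.dtype)
    out[:dA, :dA] = A
    out[dA:, dA:] = B
    return out

# The two 1-dimensional representations of C2 at the generator g:
rho0_g = np.array([[1]])    # trivial: rho_0(g) = (1)
rho1_g = np.array([[-1]])   # sign:    rho_1(g) = (-1)
print(direct_sum(rho0_g, rho1_g))  # the 2x2 matrix diag(1, -1)

# Block-diagonal matrices multiply blockwise, which is why the direct sum
# of two matrix representations is again a homomorphism.
A1, A2 = np.random.rand(2, 2), np.random.rand(2, 2)
B1, B2 = np.random.rand(3, 3), np.random.rand(3, 3)
assert np.allclose(direct_sum(A1 @ A2, B1 @ B2),
                   direct_sum(A1, B1) @ direct_sum(A2, B2))
```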
Proposition 2.13. The map V0 ⊕ V1 → Vreg we defined earlier is an isomorphism of representations of C2 .
Proof. It is an isomorphism of vector spaces, so it is enough to prove that it is C2 -linear.
But this follows from the definition of ρ0 ⊕ ρ1 .
We can also see this on the level of matrices: The matrix representation associated to ρ0 ⊕ ρ1
with respect to ((v0 , 0), (0, v1 )) is given by
(ρ0 ⊕ ρ1 )(e) = ( 1 0 ; 0 1 ),  (ρ0 ⊕ ρ1 )(g) = ( 1 0 ; 0 −1 )
On the other hand, the matrix representation associated to ρreg with respect to the basis
(be , bg ) is given by
ρreg (e) = ( 1 0 ; 0 1 ),  ρreg (g) = ( 0 1 ; 1 0 )
Since the underlying representations are isomorphic, their matrix representations are equivalent, so there is some P ∈ GL2 (C) such that ( 1 0 ; 0 −1 ) = P ( 0 1 ; 1 0 ) P⁻¹. Taking
P = ( 1 1 ; 1 −1 ) suffices.
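The claimed change of basis can be checked directly; a minimal sketch in Python with NumPy (not part of the original notes):

```python
import numpy as np

# Matrix of the regular representation of C2 at g, in the basis (b_e, b_g).
reg_g = np.array([[0, 1],
                  [1, 0]])

# Change of basis taking rho_reg to the direct sum rho_0, rho_1.
P = np.array([[1, 1],
              [1, -1]])

# P ( 0 1 ; 1 0 ) P^(-1) = ( 1 0 ; 0 -1 ), as claimed in the text.
assert np.allclose(P @ reg_g @ np.linalg.inv(P), np.diag([1, -1]))
```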
Definition 2.14. A representation (V, ρV ) of G is decomposable if it is isomorphic (as a
representation) to the direct sum of smaller representations, i.e., if there are non-zero representations (V1 , ρ1 ) and (V2 , ρ2 ) such that (V, ρV ) ≅ (V1 ⊕ V2 , ρ1 ⊕ ρ2 ). A representation is
indecomposable if no such decomposition exists.
We also make the following observation: Suppose that (V, ρV ) and (W, ρW ) are representations of G. We have a linear map V → V ⊕W which is injective, and it is G-linear. So V ⊕W
actually contains subspaces which are preserved by the linear transformations (ρV ⊕ ρW )(g).
2.5 Subrepresentations and Irreducible Representations
Before we define subrepresentations, recall the notion of a vector subspace:
Definition 2.15. If V is a finite-dimensional complex vector space, a subspace of V is a
subset W ⊂ V which is itself a complex vector space.
In particular, W must be a group under addition (so 0 ∈ W and if w ∈ W , so is −w), and
W is preserved under scaling (i.e., if w ∈ W and λ ∈ C, then λw ∈ W ).
For example, {0} ⊂ V is a subspace. As another (more interesting) example, suppose we
have a linear transformation f : V1 → V2 . Then we can define the kernel ker(f ) := {v ∈ V1 :
f (v) = 0} and the image im(f ) := {f (v) ∈ V2 : v ∈ V1 }. The kernel is a vector subspace of
V1 and the image is a vector subspace of V2 .
We can specify a subspace of V by giving a set of generators: Let {v1 , . . . , vn } be a subset
of V . The span of {v1 , . . . , vn }, or the subspace generated by {v1 , . . . , vn }, is the set
W := ⟨v1 , . . . , vn ⟩ := {a1 v1 + . . . + an vn : ai ∈ C}
Then W is a subset of V and a vector space, so it is a subspace of V . Note that we do not
assume that the vi are linearly independent! They generate W but need not form a basis of
W.
Let T : V → V be a linear transformation, and let W ⊂ V be a subspace. We say that T
stabilizes or preserves W if T (w) ∈ W for every w ∈ W . In other words, T carries W to
itself. Thus, it also defines a linear transformation W → W , which we denote T |W . We call
this the restriction of T . If W is 1-dimensional, then T preserves W if and only if W consists
of eigenvectors for T , i.e., if there is some λ ∈ C such that T (w) = λw for every w ∈ W .
We can describe this in terms of matrices: Suppose that BV := (w1 , . . . , wdW , vdW +1 , . . . , vdV )
is a basis of V such that BW := (w1 , . . . , wdW ) is a basis of W (exercise: check that we can
always find such a basis!). Let T : V → V be a linear transformation which preserves W .
Then the matrix [T ]BV is of the form ( [T |W ]BW  ∗ ; 0  ∗ ).
Now we can define subrepresentations:
Definition 2.16. Let (V, ρV ) be a representation of G. A subrepresentation is a vector
subspace W ⊂ V such that ρV (g) : V → V preserves W for each g ∈ G.
Thus, (W, {ρV (g)|W }) is another representation of G (since ρV (g) is invertible on V , ρV (g)|W
is invertible on W ).
The zero subspace {0} ⊂ V is a subrepresentation of V , but not a very interesting one.
Suppose that (V1 , ρ1 ) and (V2 , ρ2 ) are representations of a finite group G, and suppose that
we have a homomorphism of representations f : V1 → V2 . Then we may again define its
kernel and image:
ker(f ) := {v ∈ V1 : f (v) = 0}
im(f ) := {f (v) ∈ V2 : v ∈ V1 }
Then ker(f ) is a subrepresentation of V1 and im(f ) is a subrepresentation of V2 . Since both
of these are vector subspaces, it is enough to prove that they are preserved by ρ1 (g) and
ρ2 (g), respectively. But that follows from G-linearity of f .
We considered representations of C2 earlier, and we defined a C2 -linear homomorphism V0 →
Vreg (where V0 is equipped with the trivial representation). This homomorphism is injective,
and its image is the vector subspace ⟨be + bg ⟩ ⊂ Vreg . Thus, this is a subrepresentation of
Vreg .
More generally, if (V, ρV ) is a representation of a finite group G, a 1-dimensional subrepresentation of V is a 1-dimensional subspace W = ⟨w⟩ ⊂ V such that w is an eigenvector for
every ρV (g).
Definition 2.17. A representation (V, ρV ) of G is reducible if there is a non-zero proper
subrepresentation W ⊂ V , i.e., if 0 ≠ W ≠ V . We say that V is irreducible if there is no
such subrepresentation.
Example 2.18. Any 1-dimensional representation is irreducible.
Lemma 2.19. If (V, ρV ) is an irreducible representation of a finite group G, then it is
indecomposable.
Proof. Suppose that V is decomposable. Then V ≅ V1 ⊕ V2 with V1 , V2 ≠ 0 and there are
representations ρi : G → GL(Vi ) such that (V, ρV ) ≅ (V1 ⊕ V2 , ρ1 ⊕ ρ2 ). But this implies
that there are injective G-linear maps fi : Vi → V ≅ V1 ⊕ V2 . Then im(f1 ) and im(f2 ) are
subrepresentations of V , and they are non-zero because V1 and V2 were assumed non-zero.
But since dim V = dim V1 + dim V2 , dim Vi ≠ 0 implies dim Vi < dim V , so Vi ≠ V . Thus,
im(f1 ) and im(f2 ) are non-zero proper subrepresentations of V .
In terms of matrices, we think of being decomposable as being able to choose a basis so that
the matrix representation will have blocks down the diagonal and zeros elsewhere. We think
of being reducible as being able to choose a basis so that the matrix representation is
upper-triangular except for blocks on the diagonal. For example, if (V, ρV ) is a 3-dimensional
representation with a 2-dimensional subrepresentation, we can find a basis of V so that
ρV (g) = ( ∗ ∗ ∗ ; ∗ ∗ ∗ ; 0 0 ∗ ) for every g ∈ G.
Example 2.20. We give an example of an irreducible 2-dimensional representation of
S3 ≅ D6 . Embed an equilateral triangle into R² with vertices at (1, 0), (−1/2, √3/2), and
(−1/2, −√3/2). Then “counterclockwise rotation” and “reflection over the x-axis” generate
D6 , so using the standard basis of R² ⊂ C², we get a homomorphism ρ : D6 → GL2 (R) ⊂
GL2 (C). The matrix for “counterclockwise rotation” is ( −1/2 −√3/2 ; √3/2 −1/2 ), and
the matrix for “reflection over the x-axis” is ( 1 0 ; 0 −1 ).
To see that this representation is irreducible, we need to check that these two matrices do not
have a common eigenvector. But the eigenspaces of ( 1 0 ; 0 −1 ) are generated by ( 1 ; 0 )
and ( 0 ; 1 ), and they have different eigenvalues. Since neither of these is an eigenvector of
( −1/2 −√3/2 ; √3/2 −1/2 ), we are done.
2.6 Maschke’s theorem
We can now start studying the structure of representations. Previously, we have defined
direct sums of representations and decomposable representations, and we have defined subrepresentations and irreducible representations. We showed that irreducible representations
are indecomposable. The goal for today is to show the converse:
Theorem 2.21 (Maschke’s Theorem). If (V, ρV ) is a reducible representation of a finite
group G, then it is decomposable.
The finiteness hypothesis on G is crucial here! If G = Z, we may define a representation
ρ : G → GL2 (C),  ρ(n) := ( 1 n ; 0 1 )
Then the only non-zero proper subrepresentation is ⟨( 1 ; 0 )⟩. Thus, this
representation is reducible but indecomposable.
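This example is also easy to verify numerically. The sketch below (Python with NumPy, our own illustration) checks that ρ is a homomorphism, that the line spanned by ( 1 ; 0 ) is invariant, and that the eigenvalue 1 of ρ(1) has only a 1-dimensional eigenspace, so no invariant complement exists:

```python
import numpy as np

def rho(n):
    """The representation Z -> GL_2(C) with rho(n) = ( 1 n ; 0 1 )."""
    return np.array([[1, n],
                     [0, 1]])

# Homomorphism property: rho(m + n) = rho(m) rho(n).
for m in range(-3, 4):
    for n in range(-3, 4):
        assert np.array_equal(rho(m + n), rho(m) @ rho(n))

# The line spanned by e1 = (1, 0) is invariant: rho(n) e1 = e1 for all n.
e1 = np.array([1, 0])
assert all(np.array_equal(rho(n) @ e1, e1) for n in range(-3, 4))

# rho(1) has the single eigenvalue 1, with a 1-dimensional eigenspace,
# so the invariant line has no invariant complement.
assert np.allclose(np.linalg.eigvals(rho(1)), [1, 1])
assert np.linalg.matrix_rank(rho(1) - np.eye(2)) == 1  # geometric multiplicity 1
```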
We formalize this argument somewhat. Recall the following definition:
Definition 2.22. Let V be a finite-dimensional complex vector space, and let W ⊂ V
be a subspace. A complement of W is a subspace W′ ⊂ V such that the natural map
W ⊕ W′ → V , i.e., the map defined by (w, w′) ↦ w + w′, is an isomorphism.
Note that complements are not unique. In fact, any subspace W′ ⊂ V with W ∩ W′ = {0}
and dim W′ = dim V − dim W is a complement of W .
We make a similar definition for representations:
Definition 2.23. Let (V, ρV ) be a representation of a finite group G and let W ⊂ V be
a subrepresentation. A subrepresentation W′ ⊂ V is complementary to W if the natural
map W ⊕ W′ → V given by (w, w′) ↦ w + w′ induces an isomorphism of representations
ρW ⊕ ρW′ ≅ ρV . That is, W′ is complementary as a subspace, and is also a subrepresentation.
Thus, we can rephrase Maschke’s theorem as the statement that if G is finite, every subrepresentation has a complementary representation. Before we prove it, we state a corollary:
Corollary 2.24. Every finite-dimensional complex representation (V, ρV ) of a finite group
G is isomorphic to a direct sum
V ≅ V1 ⊕ V2 ⊕ · · · ⊕ Vr
where each (Vi , ρi ) is an irreducible representation of G.
Proof. This follows by induction on the dimension of V . If V is 1-dimensional, we have
already seen that V is irreducible. Now let V be an arbitrary representation of G and suppose
we know the result for all representations of dimension less than dim V . Then V is either
an irreducible representation, or V ≅ W ⊕ W′ for {0} ≠ W, W′ ⊊ V (as representations) by
Theorem 2.21. But dim W, dim W′ < dim V , so W and W′ are isomorphic to direct sums of
irreducible representations. Therefore, V is also isomorphic to a direct sum of irreducible
representations.
We will prove Maschke’s theorem by constructing complementary subrepresentations.
Lemma 2.25. Let V be a finite-dimensional complex vector space and let f : V → V
be a linear transformation such that f ◦ f = f . Then ker(f ) ⊂ V and im(f ) ⊂ V are
complementary subspaces.
Let (V, ρV ) be a representation of a finite group G and let f : V → V be a homomorphism
of representations such that f ◦ f = f . Then ker(f ) ⊂ V and im(f ) are complementary
subrepresentations of V .
Proof. We have seen that ker(f ) and im(f ) are subspaces of V , and if V is a representation and f is G-linear, that they are subrepresentations. By the rank-nullity theorem,
dim ker(f ) + dim im(f ) = dim V , so it suffices to show that ker(f ) ∩ im(f ) = {0}. So choose
some v ∈ ker(f ) ∩ im(f ), so that v = f (v′) and f (v) = 0. But we assumed f ◦ f = f , so
0 = f (v) = f (f (v′)) = f (v′) = v
Thus, v = 0, as desired.
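The lemma can be illustrated with a concrete idempotent. The matrix below is a hypothetical example of ours (projection of C³ onto the plane z = 0 along the line spanned by (1, 1, 1)), not taken from the notes:

```python
import numpy as np

# f(x, y, z) = (x - z, y - z, 0): an idempotent map C^3 -> C^3 projecting
# onto the plane z = 0 along the line spanned by (1, 1, 1).
f = np.array([[1, 0, -1],
              [0, 1, -1],
              [0, 0, 0]])

# f o f = f, as required by the lemma.
assert np.array_equal(f @ f, f)

# im(f) is the plane z = 0 (dimension 2), ker(f) is spanned by (1, 1, 1).
assert np.linalg.matrix_rank(f) == 2
assert np.array_equal(f @ np.array([1, 1, 1]), np.zeros(3, dtype=int))

# Every v decomposes as v = f(v) + (v - f(v)), with the first term in im(f)
# and the second in ker(f), since f(v - f(v)) = f(v) - f(f(v)) = 0.
v = np.array([2, -1, 5])
assert np.array_equal(f @ (v - f @ v), np.zeros(3, dtype=int))
```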
Given a finite-dimensional vector space V and a subspace W ⊂ V , any linear transformation
f : V → W with f |W = id is an example of a map as in the lemma. We call this a
projection from V to W . We can build a projection explicitly as follows: Choose a basis
BV = (w1 , . . . , wdW , vdW +1 , . . . , vdV ) with BW = (w1 , . . . , wdW ) a basis for W . Then we define
f : V → W to be the linear transformation associated to the matrix
( IddW  0 ; 0  0 )
where the upper-left block is the dW × dW identity matrix, and there are zeros everywhere else.
We see that it is not hard to construct complementary subspaces, but we need to construct
complementary subspaces that are also preserved by G. For this, we need a way to construct
G-linear projections.
Definition 2.26. Let V, W be finite-dimensional complex vector spaces. We define
Hom(V, W ) := {f : V → W : f is a linear transformation}
Since we can add and subtract linear transformations and scale them by complex numbers,
Hom(V, W ) is again a finite-dimensional vector space, of dimension dim V · dim W .
If V and W are additionally representations of a finite group G, we can make Hom(V, W )
into a representation of G:
Definition 2.27. Let (V, ρV ) and (W, ρW ) be finite-dimensional complex representations of
a finite group G. We define an action of G on Hom(V, W ) by setting g · f : V → W to be
(g · f )(v) = (ρW (g) ◦ f ◦ ρV (g −1 ))(v) for all g ∈ G, v ∈ V .
It is straightforward to check that for each g, this is a linear transformation Hom(V, W ) →
Hom(V, W ).
Note that we do not require elements of Hom(V, W ) to be G-linear, even when V and W are
representations of G. In fact,
Lemma 2.28. A linear transformation f : V → W is G-linear if and only if g · f = f for all g ∈ G.
Proof. Exercise.
Given a representation (V, ρV ), the subset V G := {v ∈ V : ρV (g)(v) = v for all g ∈ G} is a
trivial subrepresentation of V . Thus, the inclusion V G → V is a G-linear map. We can also
define a G-linear homomorphism the other way:
Lemma 2.29. The map e : V → V G defined by e(v) := (1/|G|) Σ_{g∈G} ρV (g)(v) is G-linear. In
addition, it satisfies e ◦ e = e.
Proof. We first check that for any v ∈ V , e(v) ∈ V G . Let g′ ∈ G. Then
ρV (g′)(e(v)) = (1/|G|) Σ_{g∈G} ρV (g′)ρV (g)(v) = (1/|G|) Σ_{g∈G} ρV (g′g)(v) = (1/|G|) Σ_{g∈G} ρV (g)(v) = e(v)
where the last equality follows because the set {g′g : g ∈ G} = G.
Furthermore, if v ∈ V G , then
e(v) = (1/|G|) Σ_{g∈G} ρV (g)(v) = (1/|G|) Σ_{g∈G} v = (1/|G|) · |G| · v = v
Thus, e : V → V G is the identity on V G , so e ◦ e = e.
It remains to check that e is G-linear. The first two parts of this proof imply that ρV (g′)(e(v)) =
e(v) for all g′ ∈ G and all v ∈ V , so we need to show that e(ρV (g′)v) = e(v) for all g′ ∈ G
and all v ∈ V . But
e(ρV (g′)v) = (1/|G|) Σ_{g∈G} ρV (g)(ρV (g′)v) = (1/|G|) Σ_{g∈G} ρV (gg′)(v) = (1/|G|) Σ_{g∈G} ρV (g)(v) = e(v)
where we use again that {gg′ : g ∈ G} = G.
Proof of Theorem 2.21. Let (V, ρV ) be a representation of a finite group G, and let W be a subrepresentation.
Choose a projection (of vector spaces, not representations) f̃ : V → W . There is no reason
for f̃ to be G-linear, so we modify it. Define f : V → W by setting
f := e(f̃) = (1/|G|) Σ_{g∈G} g · f̃
We can write f down more explicitly, by unwinding the definition of the action of G on
Hom(V, W ):
f (v) = (1/|G|) Σ_{g∈G} (ρV (g) ◦ f̃ ◦ ρV (g⁻¹))(v)
By construction, f ∈ Hom(V, W )G , so f is G-linear.
We need to check further that f |W : W → W is the identity map. But if w ∈ W , then
ρV (g⁻¹)(w) ∈ W since the action of G preserves W . We also chose f̃ : V → W to be the
identity on W , so
f (w) = (1/|G|) Σ_{g∈G} (ρV (g) ◦ f̃ ◦ ρV (g⁻¹))(w) = (1/|G|) Σ_{g∈G} ρV (g)(ρV (g⁻¹)(w)) = (1/|G|) Σ_{g∈G} w = w
as desired.
Finally, we set W′ := ker(f ). Since f is G-linear, W′ is a subrepresentation of V . Since
f ◦ f = f and im(f ) = W , Lemma 2.25 shows that W and W′ are complementary
subrepresentations of V and we are done.
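The averaging trick in this proof can be carried out explicitly for the regular representation of C2 from Section 2.4. The sketch below (Python with NumPy; variable names are ours) starts from a projection onto W = ⟨be + bg⟩ that is not G-linear and averages it:

```python
import numpy as np

# Regular representation of C2 = {e, g} in the basis (b_e, b_g):
# exponent 0 acts as the identity, exponent 1 as the swap.
rho = {0: np.eye(2), 1: np.array([[0., 1.], [1., 0.]])}

# W = span(b_e + b_g) is a subrepresentation. Start from a projection
# f_tilde : V -> W with f_tilde|_W = id that is NOT G-linear:
# it sends b_e to b_e + b_g and b_g to 0.
f_tilde = np.array([[1., 0.],
                    [1., 0.]])

# Average over the group, as in the proof of Maschke's theorem:
# f = (1/|G|) sum_g rho(g) f_tilde rho(g)^(-1).
f = sum(rho[g] @ f_tilde @ np.linalg.inv(rho[g]) for g in rho) / len(rho)

assert all(np.allclose(f @ rho[g], rho[g] @ f) for g in rho)  # f is G-linear
assert np.allclose(f @ np.array([1., 1.]), np.array([1., 1.]))  # f|_W = id
assert np.allclose(f @ f, f)  # idempotent, so ker(f) is a complement of W
```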
2.7 Schur’s lemma
Now that we have shown that representations of finite groups can be decomposed as the direct
sum of irreducible representations, we wish to study homomorphisms between irreducible
representations in more detail. The key result is the following:
Theorem 2.30 (Schur’s Lemma). Let (V, ρV ) and (W, ρW ) be irreducible finite-dimensional
complex representations of a finite group G.
1. If f : V → W is G-linear, then f is either 0 or an isomorphism.
2. Let f : V → V be a G-linear map. Then f = λ1V for some λ ∈ C. That is, f is
multiplication by a scalar.
Proof.
1. Recall that if f is G-linear, both ker(f ) ⊂ V and im(f ) ⊂ W are subrepresentations. Since V and W are assumed irreducible, we must have either ker(f ) = {0}
(in which case f is injective) or ker(f ) = V (in which case f = 0), and we must have
either im(f ) = 0 (in which case f = 0) or im(f ) = W (in which case f is surjective).
So either f = 0 or f is both injective and surjective, i.e., an isomorphism.
2. Certainly for any λ ∈ C, λ1V : V → V is G-linear. Now for any λ ∈ C and any G-linear
f : V → V , consider the linear transformation λ1V − f : V → V . This is clearly
G-linear as well, so by part 1 either λ1V − f = 0 or λ1V − f is an isomorphism. But λ1V − f
fails to be an isomorphism if and only if λ is a root of the characteristic polynomial
of the matrix representing f (with respect to some basis of V ). Since any polynomial
over C of positive degree has a root, there is some λ ∈ C for which λ1V − f is not an
isomorphism. For this λ we must have λ1V − f = 0, i.e., f = λ1V .
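Part 2 of Schur's lemma can be tested numerically on the irreducible 2-dimensional representation of S3 from Example 2.20: the space of matrices commuting with both generators should be exactly the scalars. A sketch in Python with NumPy (the `commutator_system` helper is our own construction, using the standard identity vec(AX − XA) = (I ⊗ A − Aᵀ ⊗ I) vec(X)):

```python
import numpy as np

# Generators of the 2-dimensional representation of S3 = D6 from Example
# 2.20: rotation by 120 degrees and reflection over the x-axis.
R = np.array([[-0.5, -np.sqrt(3) / 2],
              [np.sqrt(3) / 2, -0.5]])
S = np.array([[1., 0.],
              [0., -1.]])

def commutator_system(mats):
    """Matrix of the linear map vec(X) -> (vec(AX - XA) for each A).
    Uses vec(AX - XA) = (I kron A - A^T kron I) vec(X), vec column-major."""
    return np.vstack([np.kron(np.eye(2), A) - np.kron(A.T, np.eye(2))
                      for A in mats])

# Schur's lemma predicts the solution space {X : XR = RX and XS = SX}
# is exactly the scalar matrices, i.e., 1-dimensional.
nullity = 4 - np.linalg.matrix_rank(commutator_system([R, S]))
assert nullity == 1
```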
We can use Schur’s lemma to classify representations of finite abelian groups:
Lemma 2.31. Let G be a finite abelian group and let (V, ρV ) be a finite-dimensional complex
representation. For any g ∈ G, ρV (g) : V → V is a G-linear homomorphism.
Proof. We need to check that for any g 0 ∈ G, ρV (g 0 ) ◦ ρV (g) = ρV (g) ◦ ρV (g 0 ). But this
follows because G is abelian.
Corollary 2.32. Let (V, ρV ) be a non-zero irreducible finite-dimensional complex representation of a finite abelian group G. Then V is 1-dimensional.
Proof. For each element g ∈ G, ρV (g) : V → V is G-linear. Since V is irreducible, Theorem 2.30 implies that there is some λg ∈ C such that ρV (g) = λg 1V . But this implies
that every element of V is an eigenvector for every ρV (g), and therefore every non-zero v ∈ V
generates a 1-dimensional subrepresentation of V . But this contradicts the irreducibility of V
unless V is itself 1-dimensional.
3 Uniqueness of decomposition
Now that we have Schur’s lemma, we are almost ready to prove that the decomposition of
representations into irreducibles (“irreps”) is unique. That is, we would like to prove
Theorem 3.1. Let (V, ρV ) be a finite-dimensional complex representation of a finite group
G, and let V ≅ V1 ⊕ · · · ⊕ Vr and V ≅ V′1 ⊕ · · · ⊕ V′r′ be two decompositions of V into irreducible
subrepresentations. Then r = r′ and the decompositions are the same, up to reordering the
irreducible subrepresentations.
Proof. We first show that for each i, one of the V′j is isomorphic to Vi . This shows that the
sets of irreducible subrepresentations are the same. Let ej : V → V′j denote projection to the
jth factor; this is G-linear by construction, and e1 + . . . + er′ : V → V is the identity map.
Now by assumption, there is a non-zero G-linear map fi : Vi → V , so (e1 + · · · + er′ ) ◦ fi is non-zero
and G-linear. If each ej ◦ fi were zero, their sum would be, as well (since the images of
the ej are linearly independent). Thus, there is some non-zero G-linear map ej ◦ fi : Vi → V′j ,
and since Vi and V′j are both irreducible, Schur’s lemma implies it must be an isomorphism.
We rewrite our two decompositions as V ≅ ⊕i Vi^ri and V ≅ ⊕i Vi^si , where the Vi are distinct
irreducible representations of G. We need to prove that ri = si for all i. But for any j, Schur’s lemma gives
Hom(Vj , ⊕i Vi^ri )G ≅ C^rj
and similarly
Hom(Vj , ⊕i Vi^si )G ≅ C^sj
Thus, rj = sj for all j.
To justify the computation of these Hom-spaces, we need to say a bit more about homomorphisms of representations.
Lemma 3.2. Let V , V′, and W be vector spaces. Then we have natural isomorphisms
1. Hom(W, V ⊕ V′) ≅ Hom(W, V ) ⊕ Hom(W, V′)
2. Hom(V ⊕ V′, W ) ≅ Hom(V, W ) ⊕ Hom(V′, W )
If V , V′, and W carry representations of a finite group G, these are isomorphisms of representations.
Proof. First of all, note that all of these spaces are complex vector spaces with dimension
dim W · (dim V + dim V′). Second, recall that we have homomorphisms
iV : V → V ⊕ V′,  eV : V ⊕ V′ → V,  iV′ : V′ → V ⊕ V′,  eV′ : V ⊕ V′ → V′
given by
iV (v) = (v, 0);  eV (v, v′) = v;  iV′ (v′) = (0, v′);  eV′ (v, v′) = v′
It follows that iV ◦ eV + iV′ ◦ eV′ = 1 : V ⊕ V′ → V ⊕ V′. Moreover, if V , V′, and W are
representations, all four of these maps are G-linear.
1. To define a linear transformation P : Hom(W, V ⊕ V') → Hom(W, V) ⊕ Hom(W, V'), we choose f ∈ Hom(W, V ⊕ V'). Then e_V ∘ f ∈ Hom(W, V) and e_{V'} ∘ f ∈ Hom(W, V'), and we send f to P(f) := (e_V ∘ f, e_{V'} ∘ f) ∈ Hom(W, V) ⊕ Hom(W, V'). On the other hand, given (g, g') ∈ Hom(W, V) ⊕ Hom(W, V'), we may define Q(g, g') ∈ Hom(W, V ⊕ V') via Q(g, g') := i_V ∘ g + i_{V'} ∘ g'. Composing these two linear transformations, we see

(Q ∘ P)(f) = i_V ∘ (e_V ∘ f) + i_{V'} ∘ (e_{V'} ∘ f) = 1_{V⊕V'} ∘ f = f

Therefore, our map Hom(W, V ⊕ V') → Hom(W, V) ⊕ Hom(W, V') is injective. Since both sides are vector spaces of the same dimension, this implies that our map is an isomorphism.
We need to check that if V, V', and W are representations of G, then this isomorphism is G-linear. Recall that if g ∈ G, then ρ_{Hom(W,V⊕V')}(g)(f) = ρ_{V⊕V'}(g) ∘ f ∘ ρ_W(g^{-1}). Then

P(ρ_{Hom(W,V⊕V')}(g)(f)) = P(ρ_{V⊕V'}(g) ∘ f ∘ ρ_W(g^{-1}))
= (e_V ∘ (ρ_{V⊕V'}(g) ∘ f ∘ ρ_W(g^{-1})), e_{V'} ∘ (ρ_{V⊕V'}(g) ∘ f ∘ ρ_W(g^{-1})))
= (ρ_V(g) ∘ e_V ∘ f ∘ ρ_W(g^{-1}), ρ_{V'}(g) ∘ e_{V'} ∘ f ∘ ρ_W(g^{-1}))
= (ρ_{Hom(W,V)}(g)(e_V ∘ f), ρ_{Hom(W,V')}(g)(e_{V'} ∘ f))
= ρ_{Hom(W,V)⊕Hom(W,V')}(g)(e_V ∘ f, e_{V'} ∘ f)
= (ρ_{Hom(W,V)⊕Hom(W,V')}(g) ∘ P)(f)

so P is G-linear.
2. To define a linear transformation S : Hom(V ⊕ V', W) → Hom(V, W) ⊕ Hom(V', W), we choose f ∈ Hom(V ⊕ V', W). Then f ∘ i_V ∈ Hom(V, W) and f ∘ i_{V'} ∈ Hom(V', W), and we send f to S(f) := (f ∘ i_V, f ∘ i_{V'}) ∈ Hom(V, W) ⊕ Hom(V', W). On the other hand, given (g, g') ∈ Hom(V, W) ⊕ Hom(V', W), we have g ∘ e_V, g' ∘ e_{V'} ∈ Hom(V ⊕ V', W), so we may define a linear transformation T : Hom(V, W) ⊕ Hom(V', W) → Hom(V ⊕ V', W) by sending (g, g') ↦ g ∘ e_V + g' ∘ e_{V'}. Composing these two maps, we see

(T ∘ S)(f) = (f ∘ i_V) ∘ e_V + (f ∘ i_{V'}) ∘ e_{V'} = f ∘ 1_{V⊕V'} = f

Therefore, our map Hom(V ⊕ V', W) → Hom(V, W) ⊕ Hom(V', W) is injective, and since the two vector spaces have the same dimension, it is an isomorphism.
We need to check that S is G-linear when our vector spaces are representations. Indeed,

S(ρ_{Hom(V⊕V',W)}(g)(f)) = S(ρ_W(g) ∘ f ∘ ρ_{V⊕V'}(g^{-1}))
= (ρ_W(g) ∘ f ∘ ρ_{V⊕V'}(g^{-1}) ∘ i_V, ρ_W(g) ∘ f ∘ ρ_{V⊕V'}(g^{-1}) ∘ i_{V'})
= (ρ_W(g) ∘ f ∘ i_V ∘ ρ_V(g^{-1}), ρ_W(g) ∘ f ∘ i_{V'} ∘ ρ_{V'}(g^{-1}))
= (ρ_{Hom(V,W)}(g)(f ∘ i_V), ρ_{Hom(V',W)}(g)(f ∘ i_{V'}))
= ρ_{Hom(V,W)⊕Hom(V',W)}(g)(f ∘ i_V, f ∘ i_{V'})
= (ρ_{Hom(V,W)⊕Hom(V',W)}(g) ∘ S)(f)

so S is G-linear, as desired.
3.1 The Regular Representation
Let G be a finite group. Recall the definition of the regular representation of G: V_reg has a basis {b_g}_{g∈G} indexed by elements of G, and the action of G is given by ρ_reg(g')(b_g) = b_{g'g}. We wish to study the decomposition of V_reg into irreducible representations. We can say more:
Theorem 3.3. Let V_reg ≅ V_1 ⊕ ··· ⊕ V_r be the decomposition of the regular representation into irreducible subrepresentations. Then if W is an irreducible representation of G, the number of V_i isomorphic to W is dim W.
This has an important consequence:
Corollary 3.4. A finite group G has only finitely many irreducible representations, up to
isomorphism, and they all have dimension at most |G|.
It also has a numerical consequence:

Corollary 3.5. |G| = Σ_W (dim W)^2, where the sum runs over the irreducible representations of G.

This lets us sharpen the previous corollary, because it implies that the irreducible representations have dimension at most √|G|.
Example 3.6. Consider G = S3. We can write down all of its irreducible representations. We saw on the first problem sheet that the 3-dimensional permutation representation is the direct sum of the trivial representation and a 2-dimensional representation. Since the permutation representation does not contain any non-trivial 1-dimensional representation, this 2-dimensional representation is irreducible. There is another 1-dimensional representation of S3, namely the sign representation, which sends each transposition to −1. So we have two 1-dimensional representations and one irreducible 2-dimensional representation. Since |S3| = 6 = 1^2 + 1^2 + 2^2, we have found every irreducible representation of S3.
Example 3.7. Let G = D8. On the first problem sheet, you found that there are four 1-dimensional representations of D8. If V_reg ≅ V_1^{⊕d_1} ⊕ ··· ⊕ V_r^{⊕d_r} with d_i = dim V_i, then we have 8 = d_1^2 + ... + d_r^2. The only solutions to this equation (with d_i ≥ 1 and d_i ∈ N) are r = 8 with all d_i = 1; r = 2 with both d_i = 2; and r = 5 with one d_i = 2 and the rest equal to 1. Since there are exactly four 1-dimensional representations, the last case is the one that occurs.
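The dimension count in this example is a finite search, so it can be checked by brute force. The following snippet (an illustrative aside, not part of the original notes) enumerates all multisets of dimensions d_i ≥ 1 with Σ d_i^2 = 8:

```python
from itertools import combinations_with_replacement

# Enumerate multisets (d_1 <= ... <= d_r) of positive integers with
# d_1^2 + ... + d_r^2 = 8; since 3^2 > 8, each d_i is 1 or 2.
solutions = []
for r in range(1, 9):
    for dims in combinations_with_replacement([1, 2], r):
        if sum(d * d for d in dims) == 8:
            solutions.append(dims)

print(solutions)  # [(2, 2), (1, 1, 1, 1, 2), (1, 1, 1, 1, 1, 1, 1, 1)]
```

The three solutions found are exactly the three cases listed above.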
Lemma 3.8. Let W be an irreducible representation of G. Define the evaluation map ev : Hom(V_reg, W)^G → W via f ↦ f(b_e). This map is an isomorphism.

Proof. First we prove that ev is injective. So suppose that ev(f) = f(b_e) = 0 for some f ∈ Hom(V_reg, W)^G. But then f : V_reg → W is G-linear, so f(b_g) = f(ρ_reg(g)b_e) = ρ_W(g)f(b_e) = 0 for all g ∈ G, so f = 0.

Now we prove that this map is surjective. Choose some w ∈ W; we will construct a G-linear f : V_reg → W with f(b_e) = w. But it is enough to specify f(b_g) for all g ∈ G, so we set f(b_g) := ρ_W(g)(w) and extend by linearity. We check that f is G-linear: for g' ∈ G,

(f ∘ ρ_reg(g'))(b_g) = f(b_{g'g}) = ρ_W(g'g)(w)
(ρ_W(g') ∘ f)(b_g) = ρ_W(g')(ρ_W(g)(w)) = ρ_W(g'g)(w)

Since the two maps agree on basis vectors, they are the same.
4 Duals and Tensor Products
Recall the following definition:

Definition 4.1. Let V be a vector space. The dual vector space is V* := Hom(V, C).

This is a special case of the construction Hom(V, W), with W = C. It is clear that dim V* = dim V; in fact, if we choose a basis (v_1, ..., v_d) for V, we may define a dual basis (f_1, ..., f_d), where

f_i(v_j) = 1 if i = j, and f_i(v_j) = 0 if i ≠ j

Suppose that (V, ρ_V) is a representation of G. Then we have defined a representation (V*, ρ_{V*}) by setting

ρ_{V*}(g) := ρ_{Hom(V,C)}(g) : f ↦ f ∘ ρ_V(g^{-1})

We call this the dual representation.
Lemma 4.2. Let V be a finite-dimensional vector space. Then there is an isomorphism h : V → (V*)*, defined by letting h(v) be the linear map h(v) : V* → C with h(v)(f) = f(v). If V is a representation of a finite group G, then this isomorphism is G-linear.
Example 4.3. Let ρ : G → GL1(C) be a 1-dimensional matrix representation. If we write down ρ*, we get

ρ*(g)(f) = f ∘ ρ(g^{-1}) = ρ(g^{-1}) · f

since ρ(g^{-1}) is just multiplication by a scalar. So the matrix for ρ*(g) (with respect to any basis of C*) is ρ(g)^{-1}.
We wish to consider dual matrix representations more generally. We may choose a basis for
V and view it as the set of column vectors of length d; then V ∗ is the set of row vectors of
length d.
Proposition 4.4. Let (V, ρ_V) be a representation of G, let B be a basis of V, and let B* be the dual basis for V*. Then the matrix representation ρ*_{V,B*} : G → GL_d(C) is given by ρ*_{V,B*}(g) = (ρ_{V,B}(g)^{-1})^t, where t refers to taking the transpose.

Proof. Let B = (v_1, ..., v_d) and let B* = (f_1, ..., f_d). Then for any g ∈ G, if ρ_{V,B}(g^{-1}) = (g_{kj}),

ρ*_{V,B*}(g)(f_i)(v_j) = f_i(ρ_V(g^{-1})(v_j)) = f_i(Σ_k g_{kj} v_k) = g_{ij}

It follows that ρ*_{V,B*}(g)(f_i) = Σ_j g_{ij} f_j, i.e., the matrix of ρ*_{V,B*}(g) is the transpose of ρ_{V,B}(g^{-1}).
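The key point in the proposition — that g ↦ (ρ(g)^{-1})^t is again a homomorphism, because inverse and transpose each reverse the order of products — can be confirmed numerically. Here is a small sketch (illustrative Python, not part of the original notes), using a rotation and a reflection that generate a 2-dimensional representation of S3:

```python
import math

# 2x2 matrices as tuples of rows: a rotation by 2*pi/3 and a reflection,
# generating the 2-dimensional representation of S3.
R = ((-0.5, -math.sqrt(3) / 2), (math.sqrt(3) / 2, -0.5))
S = ((1.0, 0.0), (0.0, -1.0))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def inv(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def dual(A):
    # Matrix of the dual representation: (A^{-1})^t
    return transpose(inv(A))

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

# dual is again a homomorphism: ((AB)^{-1})^t = (A^{-1})^t (B^{-1})^t
assert close(dual(mul(R, S)), mul(dual(R), dual(S)))
# R and S happen to be orthogonal, so each equals its own inverse transpose,
# and this representation is isomorphic to its dual.
assert close(dual(R), R) and close(dual(S), S)
print("ok")
```

The second pair of assertions anticipates the example below: for this particular representation, V ≅ V*.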
Proposition 4.5. A representation (V, ρ_V) is irreducible if and only if (V*, ρ_{V*}) is irreducible.

Proof. Suppose V ≅ W ⊕ W', where W, W' are also representations. Then V* = Hom(V, C) ≅ Hom(W, C) ⊕ Hom(W', C), so V* is also decomposable (since dim W = dim W* and dim W' = dim W'*). Since (V*)* ≅ V via a G-linear isomorphism, the same argument shows that if V* is reducible, so is V.
Thus, given an irreducible representation of G, we may take its dual to obtain another one. However, V* may very well be isomorphic to V. For example, consider the 2-dimensional irreducible representation (V_2, ρ_2) of S3. Explicitly,

ρ_2(123) = [ −1/2 −√3/2 ; √3/2 −1/2 ]   and   ρ_2(23) = [ 1 0 ; 0 −1 ]

But these matrices are equal to their own inverse transposes, so V ≅ V*.
Now we turn to tensor products. They can be fairly abstract, but we will give a more hands-on discussion.

Definition 4.6. Let V and W be finite-dimensional complex vector spaces with bases (v_1, ..., v_{d_V}) and (w_1, ..., w_{d_W}), respectively. Then we define V ⊗ W to be the vector space with basis given by the symbols {v_i ⊗ w_j}. It has dimension d_V · d_W.

If v = Σ_i a_i v_i and w = Σ_j b_j w_j, we define v ⊗ w := Σ_{i,j} a_i b_j (v_i ⊗ w_j). Be careful: not every element of V ⊗ W is of the form v ⊗ w.
If (V, ρ_V) and (W, ρ_W) are representations of G, we can also define a representation (V ⊗ W, ρ_{V⊗W}):

Definition 4.7. For g ∈ G, we define ρ_{V⊗W}(g) : V ⊗ W → V ⊗ W by setting ρ_{V⊗W}(g)(v_i ⊗ w_t) := ρ_V(g)(v_i) ⊗ ρ_W(g)(w_t).
If ρ_V(g) has matrix M and ρ_W(g) has matrix N (with respect to the chosen bases), then ρ_{V⊗W}(g) sends

v_i ⊗ w_t ↦ (Σ_{j=1}^{d_V} M_{ji} v_j) ⊗ (Σ_{s=1}^{d_W} N_{st} w_s) = Σ_{j,s} M_{ji} N_{st} (v_j ⊗ w_s)

So in terms of matrices, ρ_{V⊗W}(g) is given by a d_V d_W × d_V d_W matrix we call M ⊗ N, whose entries are [M ⊗ N]_{(j,s),(i,t)} = M_{ji} N_{st}.
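The matrix M ⊗ N is the Kronecker product, and the recipe above can be checked to be compatible with composition, which is what makes ρ_{V⊗W} a homomorphism. A quick numerical sketch (illustrative Python, not part of the original notes):

```python
def kron(M, N):
    # [M ⊗ N]_{(j,s),(i,t)} = M_{ji} N_{st}, with rows indexed by pairs (j, s)
    # in lexicographic order and columns by pairs (i, t).
    return [[M[j][i] * N[s][t] for i in range(len(M[0])) for t in range(len(N[0]))]
            for j in range(len(M)) for s in range(len(N))]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

M = [[1, 2], [3, 4]]
A = [[0, 1], [1, 1]]
N = [[2, 0, 1], [0, 1, 0], [1, 0, 1]]
B = [[1, 1, 0], [0, 2, 1], [1, 0, 1]]

# (MA) ⊗ (NB) = (M ⊗ N)(A ⊗ B): the construction is compatible with products,
# so g -> rho_V(g) ⊗ rho_W(g) really is a homomorphism.
assert kron(mul(M, A), mul(N, B)) == mul(kron(M, N), kron(A, B))
# Tr(M ⊗ N) = Tr(M) Tr(N), which will reappear later as chi_{V⊗W} = chi_V * chi_W.
assert trace(kron(M, N)) == trace(M) * trace(N)
print("ok")
```

The trace identity checked at the end is proved in general in Proposition 5.9 below.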
We have given a construction in terms of bases, but we would like to know that if we change
our choice of bases of V and W , we get an equivalent representation. Fortunately, we can
give another description of V ⊗ W :
Proposition 4.8. The vector space V ⊗ W is isomorphic to Hom(V ∗ , W ). If V and W are
representations of G, there is a G-linear isomorphism.
Proof. Let (f_1, ..., f_{d_V}) be the dual basis for V*. Then we define h_{ti} : V* → W by setting h_{ti}(f_i) = w_t, and h_{ti}(f_j) = 0 if i ≠ j; the set {h_{ti}} forms a basis for Hom(V*, W). Concretely, Hom(V*, W) can be viewed as the vector space of d_W × d_V matrices, and h_{ti} corresponds to the linear transformation with a 1 in the (t, i)-entry and 0s elsewhere.

Now we define a linear transformation Hom(V*, W) → V ⊗ W via h_{ti} ↦ v_i ⊗ w_t. This is clearly an isomorphism, because Hom(V*, W) and V ⊗ W have the same dimension, and we are simply sending basis vectors to distinct basis vectors.

It remains to check that this linear transformation is G-linear. It is enough to compute the matrix representing ρ_{Hom(V*,W)}(g) with respect to {h_{ti}}. Suppose that ρ_V(g) is represented by the matrix M (with respect to the chosen basis). Recall also that ρ_{Hom(V*,W)}(g) acts via h_{ti} ↦ ρ_W(g) ∘ h_{ti} ∘ ρ_{V*}(g^{-1}). Moreover, the matrix for ρ_{V*}(g^{-1}) with respect to {f_i} is given by M^t, so ρ_{V*}(g^{-1}) sends f_k ↦ Σ_j M_{kj} f_j. So h_{ti} ∘ ρ_{V*}(g^{-1}) sends f_k ↦ M_{ki} w_t. If ρ_W(g) is represented by the matrix N (with respect to {w_s}), then ρ_W(g) ∘ h_{ti} ∘ ρ_{V*}(g^{-1}) sends f_k ↦ M_{ki} Σ_s N_{st} w_s.

It follows that ρ_{Hom(V*,W)}(g) sends h_{ti} to Σ_{j,s} M_{ji} N_{st} h_{sj}, which is the formula we wrote down for ρ_{V⊗W}(g).
Example 4.9. Suppose that V is 1-dimensional, so that ρV (g) is just multiplication by a
scalar λg , for every g ∈ G. Then if the matrix representing ρW (g) with respect to some basis
is Ng , the matrix representing ρV ⊗W (g) is λg Ng .
In particular, if V and W are both 1-dimensional, tensoring the representations simply
multiplies them.
Suppose G = Sn , and ρV and ρW are both the sign representation. Then ρV ⊗W is the trivial
representation.
5 Character theory

5.1 Definitions and basic properties
There are two definitions of the word "character" you might come across in representation theory. The first is that a character is a 1-dimensional representation. We will not use this terminology in this course, to avoid confusion.

Definition 5.1. Let M = (M_{ij}) be a d × d matrix. Then the trace of M is Tr(M) = Σ_i M_{ii} (the sum of the diagonal entries).

If f : V → V is a linear transformation, we choose a basis B of V, and define Tr(f) := Tr([f]_B).
Recall that if M, N are d×d matrices, then Tr(M N ) = Tr(N M ). As a result, if P ∈ GLd (C)
is invertible, then Tr(P M P −1 ) = Tr(M ). Thus, the trace of a linear transformation is
independent of the chosen basis.
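This basis-independence is easy to confirm on a concrete example. A minimal sketch with exact integer arithmetic (illustrative Python, not part of the original notes):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

M = [[1, 2], [3, 4]]
P = [[2, 1], [1, 1]]       # det = 1, so P is invertible over the integers
Pinv = [[1, -1], [-1, 2]]  # P^{-1}, computed by hand

assert mul(P, Pinv) == [[1, 0], [0, 1]]
# Tr(MN) = Tr(NM) gives Tr(P M P^{-1}) = Tr(M): the trace is basis-independent.
assert trace(mul(mul(P, M), Pinv)) == trace(M)
print("ok")
```

Here P plays the role of a change-of-basis matrix; any invertible P would do.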
Definition 5.2. Let (V, ρV ) be a finite-dimensional complex representation of a finite group
G. The character associated to (V, ρV ) is the function χV : G → C given by χV (g) =
Tr(ρV (g)).
This is generally not a homomorphism, because traces are not multiplicative.
Example 5.3. If (V, ρV ) is a 1-dimensional representation of G, then χV (g) = ρV (g) for all
g ∈ G.
Example 5.4. Let (V, ρV ) be the irreducible 2-dimensional representation of S3 . Then
χV (1) = 2, χV (123) = −1, and χV (23) = 0.
Lemma 5.5. Let (V, ρV ) be a representation of G. Then for all g, h ∈ G, χV (hgh−1 ) =
χV (g).
Proof. Since ρ_V(h) is invertible,

χ_V(hgh^{-1}) = Tr(ρ_V(h)ρ_V(g)ρ_V(h)^{-1}) = Tr(ρ_V(g)) = χ_V(g)
Since (12) and (13) are conjugate to (23), and (132) is conjugate to (123), in the previous
example we actually know all values of χV .
Lemma 5.6. Suppose (V, ρV ) and (W, ρW ) are isomorphic representations of a finite group
G. Then χV = χW .
Proof. We choose bases B_V and B_W for V and W, respectively. This choice of bases gives us corresponding matrix representations ρ_{V,B_V}, ρ_{W,B_W} : G → GL_d(C). Since our representations are isomorphic, there is some matrix P ∈ GL_d(C) such that ρ_{V,B_V}(g) = P ρ_{W,B_W}(g) P^{-1} for all g ∈ G. Since conjugate matrices have the same trace, χ_V(g) = Tr(ρ_{V,B_V}(g)) = Tr(ρ_{W,B_W}(g)) = χ_W(g) for all g ∈ G.

We are going to prove the converse, that is, that if (V, ρ_V) and (W, ρ_W) are representations of G such that χ_V = χ_W, then (V, ρ_V) ≅ (W, ρ_W).
Proposition 5.7. Let (V, ρ_V) be a representation of G.

1. χ_V(e) = dim V
2. χ_V(g^{-1}) = conj(χ_V(g)) (where conj denotes complex conjugation)
3. For all g ∈ G, |χ_V(g)| ≤ dim V, with equality if and only if ρ_V(g) = λ·1 for some λ ∈ C.
Proof.

1. Since ρ_V(e) = 1, χ_V(e) = Tr(1) = dim V.

2. Since ρ_V(g) is an invertible matrix with finite order (if g^n = e, then ρ_V(g)^n = 1), we can choose a basis B of V so that the associated matrix ρ_{V,B}(g) is a diagonal matrix, with diagonal entries λ_1, ..., λ_d. The λ_i must be roots of unity, so λ_i^{-1} = conj(λ_i). Thus, ρ_{V,B}(g^{-1}) is also a diagonal matrix, with entries λ_1^{-1}, ..., λ_d^{-1}. It follows that

χ_V(g^{-1}) = λ_1^{-1} + ... + λ_d^{-1} = conj(λ_1) + ... + conj(λ_d) = conj(χ_V(g))

3. We may again write χ_V(g) = λ_1 + ... + λ_d, where the λ_i are roots of unity. By the triangle inequality,

|χ_V(g)| ≤ Σ_i |λ_i| = d

so |χ_V(g)| ≤ dim V. Equality holds in the triangle inequality if and only if all the λ_i have the same argument, i.e., λ_i = r_i e^{iθ} for a single θ ∈ [0, 2π) and r_i ≥ 0. Since the λ_i all have absolute value 1, r_i = 1 for all i, so we have equality if and only if λ_1 = λ_2 = ... = λ_d, in which case ρ_V(g) is multiplication by a scalar.
Corollary 5.8. If (V, ρV ) is a representation of G, then ρV (g) = 1 if and only if χV (g) =
dim V .
Proof. It is clear that if ρV (g) = 1, then χV (g) = dim V .
If χV (g) = dim V , then |χV (g)| = dim V , so the previous proposition implies that ρV (g) is
multiplication by λ ∈ C. Then χV (g) = λ · dim V , and if χV (g) = dim V we must have
λ = 1.
Thus, the character of a representation detects its kernel (namely {g ∈ G : χ_V(g) = dim V}), and in particular detects whether or not the representation is faithful.
Proposition 5.9. Let (V, ρ_V) and (W, ρ_W) be representations of G.

1. χ_{V⊕W}(g) = χ_V(g) + χ_W(g)
2. χ_{V⊗W}(g) = χ_V(g)χ_W(g)
3. χ_{V*}(g) = conj(χ_V(g))
4. χ_{Hom(V,W)}(g) = conj(χ_V(g))χ_W(g)
Proof. Choose bases for V and W, and let M and N be the matrices associated to ρ_V(g) and ρ_W(g), respectively.

1. The matrix for ρ_{V⊕W}(g) with respect to the chosen bases is the block-diagonal matrix [ M 0 ; 0 N ], so χ_{V⊕W}(g) = Tr[ M 0 ; 0 N ] = Tr(M) + Tr(N) = χ_V(g) + χ_W(g).

2. The matrix M ⊗ N has entries [M ⊗ N]_{(j,s),(i,t)} = M_{ji} N_{st}, so its trace is

Tr(M ⊗ N) = Σ_{i,t} [M ⊗ N]_{(i,t),(i,t)} = Σ_{i,t} M_{ii} N_{tt} = Tr(M) Tr(N)

so the result follows.

3. Recall that the matrix for ρ_{V*}(g) with respect to the dual basis is (M^{-1})^t. Taking transposes preserves traces, so

χ_{V*}(g) = Tr(ρ_{V*}(g)) = Tr((M^{-1})^t) = Tr(M^{-1}) = χ_V(g^{-1}) = conj(χ_V(g))

4. This follows from the isomorphism (of representations) Hom(V, W) ≅ V* ⊗ W (apply Proposition 4.8 with V replaced by V*, using (V*)* ≅ V) and the previous two parts.
5.1.1 The regular character
Let χreg denote the character of the regular representation.
Proposition 5.10. For any finite group G, χ_reg(e) = |G| and χ_reg(g) = 0 for g ≠ e. In addition, χ_reg(g) = Σ_i d_i χ_{V_i}(g), where the sum ranges over the irreducible representations (V_i, ρ_{V_i}) and d_i := dim V_i.
Proof. Recall that χ_reg(e) = dim V_reg, since this is the case for every representation. Since dim V_reg = |G|, χ_reg(e) = |G|.

Recall that the regular representation is given by ρ_reg(g)(b_h) = b_{gh}, so the matrix of ρ_reg(g) has its (b_{gh}, b_h) entries equal to 1 and 0s everywhere else. Thus, the trace is 0 unless there is some h such that gh = h. But this is impossible unless g = e.

For the last part, recall that V_reg ≅ ⊕_i V_i^{⊕ dim V_i}. Since χ_{V⊕W} = χ_V + χ_W, this implies that χ_reg(g) = Σ_i d_i χ_{V_i}(g).
Example 5.11. If G = C_n = ⟨g⟩, then we may write χ_reg = Σ_k χ_k, where χ_k = ρ_k is the 1-dimensional representation g ↦ e^{2πik/n}. This implies that Σ_{k=0}^{n-1} e^{2πik/n} = 0, and that for any integer s ∈ [1, n − 1], Σ_{k=0}^{n-1} e^{2πiks/n} = 0 (because both are χ_reg(g^s)).
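These vanishing root-of-unity sums can be confirmed directly for any particular n; here is a quick numerical check (illustrative Python, not part of the original notes):

```python
import cmath

n = 12
# chi_reg(g^s) = sum over k of chi_k(g^s) = sum over k of e^{2*pi*i*k*s/n}
for s in range(n):
    total = sum(cmath.exp(2j * cmath.pi * k * s / n) for k in range(n))
    expected = n if s == 0 else 0   # |G| at the identity, 0 elsewhere
    assert abs(total - expected) < 1e-9
print("ok")
```

The same loop works for any n; only floating-point round-off prevents the sums from being exactly zero.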
5.2 Inner products of characters
The set of characters of representations turns out to have a lot of structure, which is what
makes the concept useful.
Definition 5.12. Let C(G) denote the set of functions f : G → C, and let Ccl (G) denote
the set of functions f : G → C which are constant on each conjugacy class of G, i.e., such
that f (hgh−1 ) = f (g) for all g, h ∈ G.
Both C(G) and C_cl(G) are finite-dimensional complex vector spaces: given a function f : G → C, we can scale it by setting (λf)(g) := λ · f(g) for λ ∈ C, and given two functions f, f' : G → C, we can add them by setting (f + f')(g) := f(g) + f'(g).

There is a basis for C(G) given by the functions δ_g : G → C, defined by δ_g(g) = 1 and δ_g(h) = 0 for h ≠ g.

If f, f' ∈ C(G) are constant on conjugacy classes of G, then so are λf and f + f', so the same arguments imply that C_cl(G) is a vector space (and indeed, a subspace of C(G)).

Characters can be viewed as elements of C_cl(G). The dimension of C(G) is |G|, and the dimension of C_cl(G) is equal to the number of conjugacy classes of G.
Using our choice of basis for C(G), we have an inner product on C(G):
Definition 5.13. Let ξ, ψ : G → C be elements of C(G). We define

⟨ξ, ψ⟩ := (1/|G|) Σ_{g∈G} ξ(g) conj(ψ(g))

where conj(ψ(g)) denotes the complex conjugate of ψ(g).
Note that ⟨ξ, ψ⟩ ≠ ⟨ψ, ξ⟩ in general. In fact, ⟨ξ, ψ⟩ = conj(⟨ψ, ξ⟩), and ⟨ξ, ψ⟩ is linear in the first factor and conjugate-linear in the second factor, i.e., ⟨λξ, µψ⟩ = λ conj(µ) ⟨ξ, ψ⟩ for any λ, µ ∈ C. Furthermore, ⟨ξ, ξ⟩ = (1/|G|) Σ_g |ξ(g)|^2 ≥ 0, with equality if and only if ξ = 0.

This is the list of properties defining a Hermitian inner product. Notice that our basis is not quite orthonormal with respect to this inner product:

⟨δ_g, δ_h⟩ = 0 if g ≠ h, and ⟨δ_g, δ_h⟩ = 1/|G| if g = h
Our first goal is the following theorem:
Theorem 5.14. Let (V, ρ_V) and (W, ρ_W) be representations of G and let χ_V and χ_W be the associated characters. Then

⟨χ_V, χ_W⟩ = dim Hom(V, W)^G
Before we start proving it, we record some corollaries:
Corollary 5.15. If (V, ρ_V) and (W, ρ_W) are irreducible representations of G, then ⟨χ_V, χ_W⟩ = 1 if they are isomorphic, and ⟨χ_V, χ_W⟩ = 0 otherwise.
Proof. This follows from Schur’s lemma, combined with the theorem.
Corollary 5.16. Let χ_1, ..., χ_r denote the irreducible characters of G. Then

1. χ_1, ..., χ_r are orthonormal elements of C_cl(G).
2. We have an inequality: the number of conjugacy classes of G is at least r.
3. If (V, ρ_V) is any representation of G, then V ≅ ⊕_i V_i^{⟨χ_V, χ_i⟩} and χ_V = Σ_i ⟨χ_V, χ_i⟩ χ_i.
4. If (V, ρ_V) is a representation of G, it is irreducible if and only if ⟨χ_V, χ_V⟩ = 1.
Proof.

1. This follows from the previous corollary.

2. Since the χ_i are orthonormal, they are linearly independent. Therefore dim C_cl(G) ≥ r, and dim C_cl(G) is the number of conjugacy classes of G.

3. Recall that V ≅ ⊕_i V_i^{dim Hom(V,V_i)^G}. The result then follows from the theorem.

4. We may write V ≅ ⊕_i V_i^{⊕m_i} for some multiplicities m_i. Then

⟨χ_V, χ_V⟩ = Σ_{i,j} m_i m_j ⟨χ_i, χ_j⟩ = Σ_i m_i^2

Then ⟨χ_V, χ_V⟩ = 1 exactly when one of the m_i is 1 and the rest are 0.
Next we give a pair of useful lemmas.

Lemma 5.17. Let V be a vector space. Then f ↦ Tr(f) gives us a linear map Tr : Hom(V, V) → C. That is, if f_1, f_2 ∈ Hom(V, V) and λ_1, λ_2 ∈ C, then Tr(λ_1 f_1 + λ_2 f_2) = λ_1 Tr(f_1) + λ_2 Tr(f_2).

Proof. Pick a basis for V, and let M and N be the matrices representing f_1 and f_2, respectively. Then

Tr(λ_1 f_1 + λ_2 f_2) = Tr(λ_1 M + λ_2 N) = Σ_{i=1}^d (λ_1 M_{ii} + λ_2 N_{ii}) = λ_1 Σ_{i=1}^d M_{ii} + λ_2 Σ_{i=1}^d N_{ii} = λ_1 Tr(f_1) + λ_2 Tr(f_2)
Lemma 5.18. Let V be a vector space, with a subspace W ⊂ V, and let π : V → W be a projection to W (so π|_W = 1_W). Then Tr(π) = dim W.

Proof. Recall that W and ker(π) are complementary subspaces of V, so the natural map W ⊕ ker(π) → V is an isomorphism. If we choose bases B_W and B_π for W and ker(π), respectively, their union is a basis for V. If we write π as a matrix with respect to this basis, we get a matrix of the form [ 1_W 0 ; 0 0 ]. Therefore, Tr(π) = Tr(1_W) = dim W.
Proof of Theorem 5.14. Recall that we can view Hom(V, W) as a representation of G, and the space of G-linear maps is its invariant subspace: Hom(V, W)^G ⊂ Hom(V, W). Then we constructed a projection map e : Hom(V, W) → Hom(V, W)^G via f ↦ (1/|G|) Σ_{g∈G} ρ_{Hom(V,W)}(g)(f).

We are interested in computing dim Hom(V, W)^G, and by Lemma 5.18 it suffices to compute Tr(e). But since e is a linear combination of the maps ρ_{Hom(V,W)}(g), the linearity of the trace (Lemma 5.17) shows that

Tr(e) = (1/|G|) Σ_{g∈G} Tr(ρ_{Hom(V,W)}(g))

However, Tr(ρ_{Hom(V,W)}(g)) = χ_{Hom(V,W)}(g), and we proved earlier that χ_{Hom(V,W)}(g) = conj(χ_V(g)) χ_W(g). Therefore,

Tr(e) = (1/|G|) Σ_{g∈G} conj(χ_V(g)) χ_W(g) = ⟨χ_W, χ_V⟩

Since in this case ⟨χ_W, χ_V⟩ is a real number, ⟨χ_W, χ_V⟩ = ⟨χ_V, χ_W⟩, as desired.
Example 5.19. Let G = C4 = ⟨g : g^4 = e⟩. Consider the 2-dimensional matrix representation ρ : G → GL2(C) given by ρ(g) = [ 0 −i ; i 0 ]. Since ρ(g^2) = [ 0 −i ; i 0 ]^2 = [ 1 0 ; 0 1 ], we have

χ(e) = 2,  χ(g) = 0,  χ(g^2) = 2,  χ(g^3) = 0

Now the irreducible characters of G are those coming from the 1-dimensional matrix representations ρ_k : G → GL1(C), ρ_k(g) := i^k for k = 0, 1, 2, 3. So we have

χ_k(e) = 1,  χ_k(g) = i^k,  χ_k(g^2) = (−1)^k,  χ_k(g^3) = (−i)^k
Now we compute ⟨χ, χ_k⟩:

⟨χ, χ_k⟩ = (1/4)(2 · 1 + 0 · conj(i^k) + 2 · (−1)^k + 0 · conj((−i)^k))
        = (1/4)(2 + 2 · (−1)^k) = (1/2)(1 + (−1)^k)
        = 1 if k = 0, 2, and 0 if k = 1, 3

Thus, ρ is isomorphic to the direct sum of ρ_0 and ρ_2.
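The multiplicity computation in this example is small enough to script. A minimal sketch (illustrative Python, not part of the original notes), using exact powers of i:

```python
chi = [2, 0, 2, 0]   # character of the 2-dimensional representation, at e, g, g^2, g^3

def chi_k(k, j):
    # chi_k(g^j) = i^{kj}, computed exactly by reducing the exponent mod 4
    return [1, 1j, -1, -1j][(k * j) % 4]

# <chi, chi_k> = (1/4) sum_j chi(g^j) conj(chi_k(g^j)) is the multiplicity of rho_k.
mults = [sum(chi[j] * chi_k(k, j).conjugate() for j in range(4)) / 4 for k in range(4)]
assert mults == [1, 0, 1, 0]   # rho decomposes as rho_0 + rho_2
print("ok")
```

Tabulating i^{kj} mod 4 keeps the arithmetic exact, so the multiplicities come out as exact integers rather than floating-point approximations.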
Example 5.20. Let G = D8 = ⟨s, t : s^4 = t^2 = e, tst = s^{-1}⟩, and consider again the 4-dimensional representation (V, ρ_V) coming from its action on the vertices of a square. Explicitly,

ρ_V(s) =
[ 0 0 0 1 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]

ρ_V(t) =
[ 0 0 0 1 ]
[ 0 0 1 0 ]
[ 0 1 0 0 ]
[ 1 0 0 0 ]

The conjugacy classes of D8 are {e}, {s, s^-1}, {s^2}, {t, s^2 t}, {st, s^3 t}, and since characters are constant on conjugacy classes, we can write down the values of χ_V:

          {e}   {s, s^-1}   {s^2}   {t, s^2 t}   {st, s^3 t}
χ_V(g)     4        0         0         0             2
We first compute

⟨χ_V, χ_V⟩ = (1/8) Σ_{g∈D8} |χ_V(g)|^2 = (1/8)(4^2 + 2 · 0^2 + 0^2 + 2 · 0^2 + 2 · 2^2) = 3

Thus, V is reducible. There is an "easy" 1-dimensional subrepresentation, namely the one generated by (1, 1, 1, 1)^t, which is isomorphic to the trivial representation ρ_triv. Indeed,

⟨χ_V, χ_triv⟩ = (1/8)((4 · 1) + 2 · (0 · 1) + (0 · 1) + 2 · (0 · 1) + 2 · (2 · 1)) = 1

so V_triv is isomorphic to a subrepresentation of V, and in fact dim Hom(V_triv, V)^{D8} = 1, so the subspace generated by (1, 1, 1, 1)^t is the largest trivial subrepresentation of V.
There is a 3-dimensional complement W ⊂ V, so we can expand our table (using the fact that χ_triv + χ_W = χ_V):

            {e}   {s, s^-1}   {s^2}   {t, s^2 t}   {st, s^3 t}
χ_V(g)       4        0         0         0             2
χ_triv(g)    1        1         1         1             1
χ_W(g)       3       −1        −1        −1             1
Now we compute

⟨χ_W, χ_W⟩ = (1/8) Σ_{g∈D8} |χ_W(g)|^2 = (1/8)(3^2 + 2 · (−1)^2 + (−1)^2 + 2 · (−1)^2 + 2 · 1^2) = 2

so W is also reducible. It must therefore have a 1-dimensional subrepresentation, and the characters of the non-trivial 1-dimensional representations of D8 are

            {e}   {s, s^-1}   {s^2}   {t, s^2 t}   {st, s^3 t}
χ_{+−}(g)    1        1         1        −1            −1
χ_{−+}(g)    1       −1         1         1            −1
χ_{−−}(g)    1       −1         1        −1             1

Then we may compute ⟨χ_W, χ_{+−}⟩, ⟨χ_W, χ_{−+}⟩, and ⟨χ_W, χ_{−−}⟩, and we see that ⟨χ_W, χ_{−−}⟩ = 1. Thus, W is the direct sum of the 1-dimensional representation with s, t ↦ −1 and a 2-dimensional representation W'.
Finally,

⟨χ_{W'}, χ_{W'}⟩ = ⟨χ_W − χ_{−−}, χ_W − χ_{−−}⟩ = ⟨χ_W, χ_W⟩ − 2⟨χ_W, χ_{−−}⟩ + ⟨χ_{−−}, χ_{−−}⟩ = 1

so W' is irreducible.
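All of the inner products in this example are short weighted sums over conjugacy classes, so they are easy to verify mechanically. A minimal sketch (illustrative Python, not part of the original notes):

```python
# Conjugacy classes of D8 in the order {e}, {s, s^-1}, {s^2}, {t, s^2 t}, {st, s^3 t}
sizes    = [1, 2, 1, 2, 2]
chi_V    = [4, 0, 0, 0, 2]
chi_triv = [1, 1, 1, 1, 1]
chi_mm   = [1, -1, 1, -1, 1]   # chi_{--}: s -> -1, t -> -1

def ip(a, b):
    # <a, b> summed class-by-class; all the characters here are real-valued
    return sum(n * x * y for n, x, y in zip(sizes, a, b)) / 8

assert ip(chi_V, chi_V) == 3       # V has three irreducible summands (with multiplicity)
assert ip(chi_V, chi_triv) == 1    # exactly one copy of the trivial representation

chi_W = [x - y for x, y in zip(chi_V, chi_triv)]
assert ip(chi_W, chi_W) == 2       # W splits into two distinct irreducibles
assert ip(chi_W, chi_mm) == 1      # one of them is chi_{--}
print("ok")
```

Weighting each class value by the class size is exactly the sum over g ∈ D8 in the definition of the inner product.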
5.2.1 Character tables
A character table is a way of recording information about every irreducible character of a
group. Let χ1 , . . . , χr be the irreducible characters of G, and let C1 , . . . , Cs be the conjugacy
classes of G. We have seen that r ≤ s.
The character table for G is a table with columns indexed by C1 , . . . , Cs and rows indexed
by χ1 , . . . , χr . For computational convenience, we will generally also add a line recording the
size of the conjugacy classes. Last week, we gave the example of the character table of D8 :
                          {e}   {s, s^-1}   {s^2}   {st, s^-1 t}   {t, s^2 t}
size of conjugacy class    1        2         1          2             2
χ_triv(g)                  1        1         1          1             1
χ_{+−}(g)                  1        1         1         −1            −1
χ_{−+}(g)                  1       −1         1         −1             1
χ_{−−}(g)                  1       −1         1          1            −1
χ_2(g)                     2        0        −2          0             0
A very small example is the character table of C2 = ⟨g : g^2 = e⟩. There are two conjugacy classes and two characters, so we get

                          {e}   {g}
size of conjugacy class    1     1
χ_1                        1     1
χ_2                        1    −1
Let's give a bigger example, and compute the character table of S4. Recall that in a symmetric group, the conjugacy classes are indexed by the shape of the cycles. So the transpositions (1 2), (1 3), (1 4), (2 3), (2 4), (3 4) are all conjugate, the 3-cycles (1 2 3), (1 3 2), (1 2 4), (1 4 2), (1 3 4), (1 4 3), (2 3 4), (2 4 3) are all conjugate, and so on. We start off with the trivial character and the sign character:
                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
We need to find the rest of the irreducible characters of S4 . To start with, consider the
permutation representation (Vperm , ρperm ); this is a 4-dimensional representation with basis
(b1 , b2 , b3 , b4 ), and S4 acts by acting on the indices. That is, if σ ∈ S4 , then ρperm (σ)(bi ) = bσ·i .
We can compute the character χperm :
χ_perm(e) = 4,  χ_perm(1 2) = 2,  χ_perm(1 2 3) = 1,  χ_perm((1 2)(3 4)) = 0,  χ_perm(1 2 3 4) = 0
Now observe that b1 + b2 + b3 + b4 ∈ Vperm is fixed by ρperm (σ) for every σ ∈ S4 . Thus, the
span of b1 +b2 +b3 +b4 is a 1-dimensional subrepresentation of Vperm , isomorphic to the trivial
representation. Therefore, by Maschke’s theorem there is a complementary subrepresentation
W ⊂ Vperm , i.e., W ⊂ Vperm is a subrepresentation, W ∩ hb1 + b2 + b3 + b4 i = {0}, and the
map W ⊕ hb1 + b2 + b3 + b4 i → V (given by (w, v) 7→ w + v) is an isomorphism.
This implies that W is 3-dimensional. We can use character theory to prove that W is
irreducible. Since taking direct sums of representations translates to adding their characters,
we can compute χW . Namely, χW = χperm − χtriv , so
χ_W(e) = 3,  χ_W(1 2) = 1,  χ_W(1 2 3) = 0,  χ_W((1 2)(3 4)) = −1,  χ_W(1 2 3 4) = −1
Recall that a representation (W, ρ_W) is irreducible if and only if the inner product ⟨χ_W, χ_W⟩ = 1. We compute

⟨χ_W, χ_W⟩ = (1/24)(3^2 + 6 · 1^2 + 8 · 0^2 + 3 · (−1)^2 + 6 · (−1)^2) = 1

so W is indeed irreducible.
We add χ_W to our table:

                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
χ_W                        3      1        0          −1           −1
There are more irreducible representations of S4, because 1^2 + 1^2 + 3^2 = 11 < 24. In fact, the squares of the dimensions of the remaining representations must add to 13, so we are looking for a 2-dimensional representation and a 3-dimensional representation.

One way to construct another 3-dimensional representation is to take the tensor product W' := V_sign ⊗ W. This is 3-dimensional (because dim V_sign = 1 and dim W = 3), and it is irreducible by an exercise on problem sheet #3. We can compute χ_{W'}:

χ_{W'} = χ_{sign⊗W} = χ_sign · χ_W
Thus, our table is now

                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
χ_W                        3      1        0          −1           −1
χ_{W'}                     3     −1        0          −1            1

Note that since χ_{W'} ≠ χ_W, we know that W and W' are not isomorphic as representations.
Now we need to find the character of our mysterious 2-dimensional representation, which we will call (U, ρ_U). We add its row to the character table:

                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
χ_W                        3      1        0          −1           −1
χ_{W'}                     3     −1        0          −1            1
χ_U                        2      ?        ?           ?            ?
We know that U is the only irreducible 2-dimensional representation of S4 (for dimension reasons). On the other hand, V_sign ⊗ U is an irreducible 2-dimensional representation of S4; it follows that it must be isomorphic to U, and so we must have χ_sign · χ_U = χ_U. But that, in turn, implies that χ_U(1 2) = χ_U(1 2 3 4) = 0, so the table is now

                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
χ_W                        3      1        0          −1           −1
χ_{W'}                     3     −1        0          −1            1
χ_U                        2      0        ?           ?            0
We also know that ⟨χ_U, χ_V⟩ = 0 for any irreducible representation V not isomorphic to U. If we compute ⟨χ_U, χ_W⟩, we get

⟨χ_U, χ_W⟩ = (1/24)(2 · 3 + 6 · (0 · 1) + 8 · (χ_U(1 2 3) · 0) + 3 · (χ_U((1 2)(3 4)) · (−1)) + 6 · (0 · (−1)))
           = (1/24)(6 − 3χ_U((1 2)(3 4)))

Since this inner product must be 0, we see that χ_U((1 2)(3 4)) = 2, and our table is now

                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
χ_W                        3      1        0          −1           −1
χ_{W'}                     3     −1        0          −1            1
χ_U                        2      0        ?           2            0
Now we use ⟨χ_U, χ_triv⟩ = 0:

⟨χ_U, χ_triv⟩ = (1/24)(2 · 1 + 6 · (0 · 1) + 8 · (χ_U(1 2 3) · 1) + 3 · (2 · 1) + 6 · (0 · 1))
             = (1/24)(8 + 8 · χ_U(1 2 3))
This implies that χ_U(1 2 3) = −1, so we have completed our table:

                          {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
size of conjugacy class    1      6        8           3            6
χ_triv                     1      1        1           1            1
χ_sign                     1     −1        1           1           −1
χ_W                        3      1        0          −1           −1
χ_{W'}                     3     −1        0          −1            1
χ_U                        2      0       −1           2            0
So we have shown that there is some irreducible 2-dimensional representation (U, ρU ) of S4 ,
and we know the traces of ρU (σ) for σ ∈ S4 .
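With the table complete, row orthogonality can be double-checked mechanically across all pairs of rows. A quick sketch (illustrative Python, not part of the original notes):

```python
sizes = [1, 6, 8, 3, 6]   # classes e, (1 2), (1 2 3), (1 2)(3 4), (1 2 3 4)
table = {
    "triv": [1, 1, 1, 1, 1],
    "sign": [1, -1, 1, 1, -1],
    "W":    [3, 1, 0, -1, -1],
    "W'":   [3, -1, 0, -1, 1],
    "U":    [2, 0, -1, 2, 0],
}

def ip(a, b):
    # <a, b> over S4, summed class-by-class; these characters are all real-valued
    return sum(n * x * y for n, x, y in zip(sizes, a, b)) / 24

for name1, row1 in table.items():
    for name2, row2 in table.items():
        assert ip(row1, row2) == (1 if name1 == name2 else 0)
print("row orthogonality holds")
```

Every row has norm 1 and distinct rows are orthogonal, as Corollary 5.15 predicts.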
We can say a bit more about (U, ρ_U), though. Observe that χ_U((1 2)(3 4)) = 2 = χ_U(e). We showed earlier that for any character, χ(g) = χ(e) if and only if ρ(g) = ρ(e). Let N ⊂ S4 be the normal subgroup generated by products of two disjoint transpositions, i.e. N = {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}; then we have shown that ρ_U(σ) = ρ_U(e) for σ ∈ N. Thus, we can view the homomorphism ρ_U : S4 → GL(U) as the composition

ρ_U : S4 ↠ S4/N ≅ S3 → GL(U)

Indeed, we have an inclusion S3 ↪ S4 by thinking of a permutation of {1, 2, 3} as a permutation of {1, 2, 3, 4} which fixes 4. Composing with the projection S4 ↠ S4/N, we get a homomorphism S3 → S4/N between groups of the same order. But since N ∩ S3 = {e} inside S4, this homomorphism is injective, so it is an isomorphism.
Thus, (U, ρU ) can be viewed as a 2-dimensional representation of S3 , and χU agrees with
the character of the irreducible 2-dimensional representation of S3 . So actually, we already
knew about (U, ρU )!
This motivates the following definition:

Definition 5.21. Let N ◁ G be a normal subgroup of a finite group G, and let (V̄, ρ_V̄) be a representation of the quotient G/N. The inflation of (V̄, ρ_V̄) is the representation (V, ρ_V) of G, where V = V̄ as vector spaces, and ρ_V : G → GL(V) is the composition G ↠ G/N → GL(V̄), the second map being ρ_V̄. Equivalently, ρ_V(g) := ρ_V̄(gN).
Observe that the inflation V is irreducible as a representation of G if and only if V̄ is irreducible as a representation of G/N.
5.2.2 Row and column orthogonality
Fact: Character tables are square, i.e., the number of irreducible representations of G is
equal to the number of conjugacy classes of G.
We will prove this later on, but we assume it for now.
We have already proved that the rows of a character table can be related to each other:

⟨χ_i, χ_j⟩ := (1/|G|) Σ_{g∈G} χ_i(g) conj(χ_j(g)) = 1 if i = j, and 0 if i ≠ j

We refer to this as row orthogonality, because it tells us that the rows of a character table are mutually orthogonal.
We also have a result we refer to as column orthogonality:

Proposition 5.22. Let g ∈ G, and let C(g) denote the conjugacy class containing g. Then for any h ∈ G,

Σ_{i=1}^r χ_i(g) conj(χ_i(h)) = 0 if h ∉ C(g), and |G|/|C(g)| if h ∈ C(g)
Proof. Consider the class function δC(g) : G → C, defined by δC(g) (h) = 1 if h ∈ C(g) and
δC(g) (h) = 0 otherwise. The set {δC(g) }g forms a basis for the space of class functions Ccl (G),
where g ranges over representatives of each conjugacy class.
We proved earlier that {χi }ri=1 are orthonormal (with respect to the inner product on C(G)),
and therefore linearly independent. Therefore, if we assume that character tables are square,
{χi }i form another basis for Ccl (G). We may therefore write
    δC(g) = Σ_{i=1}^{r} ⟨δC(g), χi⟩ χi

We can compute directly that ⟨δC(g), χi⟩ = (|C(g)|/|G|) χ̄i(g), so

    δC(g) = Σ_{i=1}^{r} (|C(g)|/|G|) χ̄i(g) χi

Evaluating both sides on h, we see that

    Σ_{i=1}^{r} (|C(g)|/|G|) χ̄i(g) χi(h) = { 0 if h ∉ C(g);  1 if h ∈ C(g) }

Then we may multiply both sides by |G|/|C(g)| to obtain the result.
Example 5.23. Take g = e in the previous proposition. Then it implies that

    Σ_{i=1}^{r} χ̄i(e) χi(e) = Σ_{i=1}^{r} (dim Vi)² = |G|

which is a familiar formula.
Example 5.24. We could have used column orthogonality to compute the last row of the character table for S4:

                              {e}   (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
    size of conjugacy class     1     6       8          3            6
    χtriv                       1     1       1          1            1
    χsign                       1    −1       1          1           −1
    χW                          3     1       0         −1           −1
    χW′                         3    −1       0         −1            1
    χU                          2     ?       ?          ?            ?

Then column orthogonality (applied to each column with itself) implies that χU(1 2) = χU(1 2 3 4) = 0, χU(1 2 3) = ±1, and χU((1 2)(3 4)) = ±2. Comparing the (1 2 3) column with the {e} column implies that χU(1 2 3) = −1, and comparing the (1 2)(3 4) column with the (1 2 3) column implies that χU((1 2)(3 4)) = 2.
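With the χU row filled in, the full column orthogonality relations for S4 can be verified in a few lines. In this sketch (ours, not the notes'), the matrix of all pairwise column sums should be diagonal, with the (j, j) entry equal to |G|/|Cj|:

```python
import numpy as np

# Full character table of S4; columns: e, (1 2), (1 2 3), (1 2)(3 4), (1 2 3 4)
class_sizes = np.array([1, 6, 8, 3, 6])
chars = np.array([
    [1,  1,  1,  1,  1],   # trivial
    [1, -1,  1,  1, -1],   # sign
    [3,  1,  0, -1, -1],   # W
    [3, -1,  0, -1,  1],   # W tensor sign
    [2,  0, -1,  2,  0],   # U
])
G = class_sizes.sum()  # |S4| = 24

# Column orthogonality: the (j, k) entry of M is
# sum_i conj(chi_i(g_j)) * chi_i(g_k), which should be 0 for j != k
# and |G|/|C_j| on the diagonal.
M = chars.conj().T @ chars
print(M)  # diag(24, 4, 3, 8, 4)
```

The diagonal entries 24, 4, 3, 8, 4 are exactly 24/1, 24/6, 24/8, 24/3, 24/6.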
We can reformulate our row and column orthogonality relations a bit. Let the conjugacy classes of G be C1, . . . , Cr, with representatives g1, . . . , gr, respectively. Now define an r × r matrix B by setting

    Bij := √(|Cj|/|G|) χi(gj)

Proposition 5.25. The matrix B is unitary, i.e., B⁻¹ = B̄ᵀ.
Proof. It is enough to show that BB̄ᵀ = 1r. Indeed, this implies that det(B) ≠ 0, so B is invertible. Now

    (BB̄ᵀ)ik = Σ_j √(|Cj|/|G|) χi(gj) · √(|Cj|/|G|) χ̄k(gj) = (1/|G|) Σ_j |Cj| χi(gj) χ̄k(gj) = ⟨χi, χk⟩

which is 1 if i = k and 0 otherwise, by row orthogonality.
We can rephrase this as saying that {δC(g) } and {χi } both give bases for the space of class
functions Ccl (G). But {χi } is orthonormal and {δC(g) } is orthogonal but not orthonormal,
so B is the change-of-basis matrix between {χi } and a rescaled version of {δC(g) }.
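Proposition 5.25 is also easy to check numerically. A minimal sketch (ours), again using the character table of S3:

```python
import numpy as np

# B_ij = sqrt(|C_j|/|G|) * chi_i(g_j), here for G = S3
class_sizes = np.array([1, 3, 2])
chars = np.array([[1, 1, 1], [1, -1, 1], [2, 0, -1]], dtype=complex)
G = class_sizes.sum()

# Broadcasting multiplies column j of the table by sqrt(|C_j|/|G|)
B = np.sqrt(class_sizes / G) * chars
print(np.allclose(B @ B.conj().T, np.eye(3)))  # True: B is unitary
```

Since B is square, B B̄ᵀ = 1 also forces B̄ᵀ B = 1, which is the column orthogonality relation in matrix form.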
5.2.3 Summary
Characters, and character tables, are a tool for assembling information about representations
of a group, in a way that’s convenient for computation.
We cannot recover a group from its character table. For example, D8 and the quaternion
group Q8 are not isomorphic, but they have the same character table.
However, we can find the center of the group: g ∈ G is in the center Z(G) if and only if C(g) = {g}, which is the case if and only if Σ_{i=1}^{r} |χi(g)|² = |G|.
We can also find all normal subgroups of G from the character table, so in particular, we
can determine whether or not G is simple.
Definition 5.26. Let χ : G → C be a function. We define ker χ := {g ∈ G : χ(g) = χ(e)}.
We have proved that if (V, ρV ) is a representation and χV is the associated character, then
χV(g) = χV(e) if and only if ρV(g) = ρV(e) = 1V. Thus, ker χV = ker ρV. But ker ρV ⊂ G is a normal subgroup of G, so G is simple if and only if ker χ = {e} for every non-trivial irreducible character χ.
Proposition 5.27. Let G be a finite group and let H ◁ G be a normal subgroup. Let
χ1 , . . . , χr be the irreducible characters of G.
1. There is a representation (V, ρV ) of G such that ker ρV = H.
2. There is a subset I ⊂ {1, . . . , r} such that H = ∩i∈I ker χi .
Proof.

1. Consider the regular representation (Vreg, ρreg) of G/H, and let (V, ρV) be the inflation to G. The regular representation of G/H is faithful, so ker ρV = H.

2. Let (V, ρV) again be the inflation of the regular representation of G/H. Then V ≅ V1^⊕m1 ⊕ · · · ⊕ Vr^⊕mr, where the Vi are the irreducible representations of G and the mi are the multiplicities. Let I := {i : mi > 0}. Then

    H = ker ρV = ∩i∈I ker ρi = ∩i∈I ker χi
6 Algebras and modules

6.1 Algebras
Up to this point, we have viewed representations of groups as vector spaces together with
linear actions of groups. The vector space has an additive structure, and the group acts by
multiplication, but we have treated these additive and multiplicative structures separately.
An algebra is a structure that has both addition and multiplication. A basic example is the
set of d × d matrices Matd (C) — it is a vector space because we can add matrices and scale
them by complex numbers, but we can also multiply matrices.
In other words, we have a map
m : Matd (C) × Matd (C) → Matd (C)
(M, N ) 7→ M · N
This map has the following important properties:
1. m is bilinear: m(λ1 M1 + λ2 M2 , N ) = λ1 m(M1 , N ) + λ2 m(M2 , N )
2. m is associative: m(m(L, M ), N ) = m(L, m(M, N )), or equivalently, (L · M ) · N =
L · (M · N )
3. m is unital: there is an element Id ∈ Matd (C) such that m(M, Id ) = m(Id , M ) = M
for all M ∈ Matd (C)
We will define an algebra to be a complex vector space satisfying these properties:
Definition 6.1. An algebra is a vector space A equipped with a bilinear, associative, and
unital multiplication map m : A × A → A.
We will write ab or a · b for m(a, b).
We observe that the unit element of A is unique. Indeed, if 1A and 1′A are both unit elements of A, then 1′A = 1A · 1′A = 1A.
Given an algebra A, we can define a map C → A via λ 7→ λ1A . This map is compatible
with addition and multiplication (it is a ring homomorphism, if you have seen rings before).
Example 6.2.

1. Let A = C with the usual vector space structure and m given by the usual multiplication.
2. Let A = C ⊕ C, with multiplication given by
m((x1 , y1 ), (x2 , y2 )) := (x1 x2 , y1 y2 )
3. Let A be the set of polynomials C[x], which is an infinite-dimensional vector space, with multiplication given by the multiplication of polynomials.
4. Let A = C ⊕ C, with multiplication given by
m((x1 , y1 ), (x2 , y2 )) := (x1 x2 , x1 y2 + x2 y1 )
Alternatively, we can view A as a 2-dimensional complex vector space with basis {1, x},
with multiplication given by 12 = 1, 1x = x1 = x, and x2 = 0 and extended bilinearly.
The unit element is (1, 0).
5. Let V be a vector space and let A = Hom(V, V ). Multiplication is given by composition
of maps, i.e., f g := f ◦ g. If we pick a basis of V , A becomes isomorphic to Matd (C)
for some d, and composition of maps becomes multiplication of matrices.
6. Suppose A and B are algebras. We can make the direct sum A ⊕ B into an algebra by
defining multiplication by
m((a1 , b1 ), (a2 , b2 )) = (a1 a2 , b1 b2 )
The unit element is (1A , 1B ).
For us, the most important example will be the group algebra:
Let G be a finite group, and let C[G] denote the set of formal linear combinations of elements
of G:
C[G] := {λ1[g1] + . . . + λs[gs] : λi ∈ C}

We define addition via

    (λ1[g1] + . . . + λs[gs]) + (µ1[g1] + . . . + µs[gs]) := (λ1 + µ1)[g1] + . . . + (λs + µs)[gs]

and scaling via

    µ · (λ1[g1] + . . . + λs[gs]) := µλ1[g1] + . . . + µλs[gs]
so C[G] is a vector space. As a vector space, it is the same as the vector space Vreg that we
have seen while studying the regular representation.
However, C[G] also has a multiplication map, which comes from the multiplication map
on G. More specifically, we define m([g], [h]) := [gh] and extend this to a bilinear map
m : C[G] × C[G] → C[G]. Explicitly,

    (λ1[g1] + . . . + λs[gs]) · (µ1[g1] + . . . + µs[gs]) := Σ_{gk} ( Σ_{gi, gj with gi gj = gk} λi µj ) [gk]
This product is associative because multiplication in G is associative, and the unit element
is [e].
The group algebra C[G] is commutative if and only if G is an abelian group.
Example 6.3. Let G = C2 = hg : g 2 = ei. Then C[G] = {λ1 [e] + λ2 [g] : λi ∈ C} is a
2-dimensional vector space, with multiplication
(λ1 [e] + λ2 [g]) · (µ1 [e] + µ2 [g]) = (λ1 µ1 + λ2 µ2 )[e] + (λ1 µ2 + λ2 µ1 )[g]
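Concretely, multiplication in C[G] is a convolution of coefficient vectors. A minimal sketch for G = C2 (our own encoding: an element of C[C2] is a dict from group elements, written 0 = e and 1 = g, to coefficients):

```python
from itertools import product

# C2 = {e, g} encoded as 0 and 1, with group multiplication = addition mod 2.
def mult(a, b):
    # Convolution: [g][h] = [gh], extended bilinearly to C[C2]
    out = {0: 0.0, 1: 0.0}
    for (g, lam), (h, mu) in product(a.items(), b.items()):
        out[(g + h) % 2] += lam * mu
    return out

a = {0: 1.0, 1: 2.0}   # [e] + 2[g]
b = {0: 3.0, 1: 4.0}   # 3[e] + 4[g]
print(mult(a, b))      # {0: 11.0, 1: 10.0}, i.e. (1*3 + 2*4)[e] + (1*4 + 2*3)[g]
```

This matches the displayed formula for Example 6.3 with λ = (1, 2) and µ = (3, 4).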
Definition 6.4. Let A and B be algebras. A linear map f : A → B is an algebra homomorphism if f (1A ) = 1B and f (a1 a2 ) = f (a1 )f (a2 ). An algebra homomorphism f is an algebra
isomorphism if f is an isomorphism of vector spaces.
Example 6.5. Consider the map f : C[C2 ] → C ⊕ C defined by [e] 7→ (1, 1) and [g] 7→
(1, −1). This is clearly an isomorphism of vector spaces, since its image contains a basis of C⊕
C and both vector spaces are 2-dimensional. To check that f is an algebra homomorphism,
it is enough to check that if x, y ∈ C[C2 ] are basis elements, then f (xy) = f (x)f (y), since
multiplication is bilinear on both sides. But
f ([e][e]) = f ([e]) = (1, 1) = (1, 1)(1, 1) = f ([e])f ([e])
f ([e][g]) = f ([g]) = (1, −1) = (1, 1)(1, −1) = f ([e])f ([g])
f ([g][e]) = f ([g]) = (1, −1) = (1, −1)(1, 1) = f ([g])f ([e])
f ([g][g]) = f ([e]) = (1, 1) = (1, −1)(1, −1) = f ([g])f ([g])
Recall that our algebras are not necessarily commutative.
Definition 6.6. Let A be an algebra with multiplication map m. The opposite algebra Aop
is the algebra with the same underlying vector space, and with multiplication given by
mop : Aop × Aop → Aop
(a, b) 7→ m(b, a)
The algebra axioms for A imply that Aop is also an algebra, and the unit element is the
same. It is clear that mop is the same as m if and only if A is commutative.
Proposition 6.7. Let G be a finite group. Then C[G] ≅ C[G]op, with an isomorphism given by

    I : C[G] → C[G]op
    [g] ↦ [g⁻¹]
Thus, even if A is non-commutative, it is possible for A and Aop to be abstractly isomorphic.
Proof. It is clear that I is an isomorphism of vector spaces, since it simply shuffles the given
basis for C[G]. We need to check that I is an algebra homomorphism. For this, it suffices
to check that mop (I([g]), I([h])) = I(m([g], [h])) for all g, h ∈ G. But
mop (I([g]), I([h])) = mop ([g −1 ], [h−1 ]) = m([h−1 ], [g −1 ]) = [h−1 g −1 ]
and
I(m([g], [h])) = I([gh]) = [(gh)−1 ] = [h−1 g −1 ]
so both sides are equal.
6.2 Modules
Now we will generalize the definition of a representation.
Definition 6.8. Let A be an algebra. A (left) A-module is a vector space M together with
an algebra homomorphism ρ : A → Hom(M, M ).
Equivalently, an A-module is a vector space M together with an action map A × M → M
sending (a, m) 7→ a · m := ρ(a)(m) satisfying
1. a · (m + n) = a · m + a · n
2. (a + b) · m = a · m + b · m
3. (ab) · m = a · (b · m)
4. 1A · m = m
Proposition 6.9. Let G be a finite group. A C[G]-module is the same thing as a representation of G.
Proof. We explain how to go between representations of G and C[G]-modules.
Suppose that ρ : C[G] → Hom(M, M ) is the map making M into a C[G]-module. Then
we can restrict ρ to G = {[g]}g∈G ⊂ C[G] to get a function ρM : G → Hom(M, M ). By
the definition of an algebra homomorphism, ρ([g][h]) = ρ([g]) ◦ ρ([h]) for any g, h ∈ G. In
particular, if we take h = g −1 , we see that
ρ([g]) ◦ ρ([g −1 ]) = ρ([e]) = 1M
Thus, each linear map ρ([g]) : M → M has an inverse, namely ρ([g −1 ]). Thus, ρM is a
function G → GL(M ), and we have seen that it is a group homomorphism. So (M, ρM ) is a
group representation.
On the other hand, suppose that ρ : G → GL(M) is a representation of G. We may extend ρ linearly to C[G] to get a map ρ̃ : C[G] → Hom(M, M) given by

    ρ̃(Σ_g λg[g]) = Σ_g λg ρ(g)

This is a linear map, by construction, and it follows from the definition of multiplication in C[G] that ρ̃ is an algebra homomorphism. Thus, M is a C[G]-module.
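The linear extension ρ̃ is easy to illustrate concretely. The sketch below (ours; the choice of the 2-dimensional regular representation of C2 is an assumption for illustration) extends ρ to C[C2] and checks multiplicativity on a sample product:

```python
import numpy as np

# The 2x2 regular representation of C2 = {e, g}
rho = {
    'e': np.eye(2),
    'g': np.array([[0.0, 1.0], [1.0, 0.0]]),
}

def rho_tilde(coeffs):
    # Extend rho linearly: rho~(sum_g lam_g [g]) = sum_g lam_g rho(g)
    return sum(lam * rho[g] for g, lam in coeffs.items())

# In C[C2], ([e] + [g]) * ([e] - [g]) = [e] - [g^2] = 0, so the images
# under rho~ must multiply to the zero matrix.
a = {'e': 1.0, 'g': 1.0}
b = {'e': 1.0, 'g': -1.0}
print(rho_tilde(a) @ rho_tilde(b))  # the zero matrix
```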
Example 6.10. Let A be the algebra C. Then an A-module is a vector space M and a
homomorphism ρ : C → Hom(M, M ). We must have ρ(1) = 1M , and by linearity, we must
have ρ(λ) = λ1M for all λ ∈ C. Thus, there is a unique ρ making M into an A-module. So
a C-module is the same thing as a vector space.
Example 6.11. Let A = C ⊕ C. Recall that we showed that A ≅ C[C2]. Let us classify 1-dimensional A-modules. In other words, if M is a 1-dimensional vector space, we need to classify algebra homomorphisms

    ρ : C ⊕ C → Hom(M, M) = C
It suffices to specify ρ(1, 0) and ρ(0, 1), and we need these values to satisfy

    (ρ(1, 0))² = ρ((1, 0)²) = ρ(1, 0)
    (ρ(0, 1))² = ρ((0, 1)²) = ρ(0, 1)
    ρ(1, 0)ρ(0, 1) = ρ((1, 0)(0, 1)) = ρ(0, 0) = 0
    ρ(1, 0) + ρ(0, 1) = ρ((1, 1)) = ρ(1A) = 1
Since 0 and 1 are the only complex numbers satisfying x2 = x, and we need the sum and
product of ρ(1, 0) and ρ(0, 1) to be 1 and 0, respectively, we see that there are two solutions:
ρ(1, 0) = 1 and ρ(0, 1) = 0, or ρ(0, 1) = 1 and ρ(1, 0) = 0. The fact that there are two
solutions corresponds to the fact that C2 has two isomorphism classes of representations.
Example 6.12. Recall that for any finite group G, we defined the regular representation (Vreg, ρreg), and that the vector space Vreg is the same as the underlying vector space of C[G]. Viewing Vreg as a C[G]-module, the action map
C[G] × Vreg → Vreg
is the same as the multiplication map
C[G] × C[G] → C[G]
We can extend this example a bit: for any algebra A, we can construct an A-module with
underlying vector space A by defining the structure map
ρ : A → Hom(A, A)
a 7→ ma
where ma : A → A denotes the “multiplication by a” map sending b 7→ ab. So A is acting
on itself by left multiplication.
6.3 Module homomorphisms
Definition 6.13. Let A be an algebra and let M, N be A-modules. A homomorphism of
A-modules (or an A-linear map) is a linear map f : M → N such that f (am) = af (m) for
all a ∈ A, m ∈ M .
We write HomA(M, N) := {f : M → N | f is A-linear}. It is a subset of Hom(M, N), and
it is easy to check that it is a vector subspace. If M = N , it is actually a subalgebra, because
the composition of two A-linear maps is A-linear.
Now we check that if A is a group algebra, then homomorphisms of modules are the same as homomorphisms of representations.
Proposition 6.14. Let G be a finite group. Let A = C[G] and let M and N be A-modules,
corresponding to representations (M, ρM ) and (N, ρN ). Then HomA (M, N ) = Hom(M, N )G .
Proof. Suppose f : M → N is A-linear. In particular, it is a linear transformation and we
need to check that it is G-linear. But
f (ρM (g)m) = f ([g] · m) = [g] · f (m) = ρN (g)(f (m))
for all g ∈ G, m ∈ M , so this follows.
Conversely, suppose f : M → N is G-linear. Then f is linear and

    f((Σ_{g∈G} λg[g]) · m) = f(Σ_{g∈G} λg([g] · m)) = f(Σ_{g∈G} λg ρM(g)(m)) = Σ_{g∈G} λg ρN(g)(f(m)) = (Σ_{g∈G} λg[g]) · f(m)

so f is A-linear.
6.4 Direct sums, submodules, and simple modules
Let A be an algebra and let M and N be A-modules. Then M ⊕ N is also an A-module if
we define the action
A × (M ⊕ N ) → M ⊕ N
a · (m, n) = (a · m, a · n)
If A = C[G], then this corresponds to taking direct sums of representations.
There are inclusion maps iM : M → M ⊕ N , iN : N → M ⊕ N and projection maps
pM : M ⊕ N → M , pN : M ⊕ N → N , and they are module homomorphisms.
Exercise 6.15. Let A be an algebra and let L, M, N be A-modules. Prove that HomA(L ⊕ M, N) ≅ HomA(L, N) ⊕ HomA(M, N) and HomA(L, M ⊕ N) ≅ HomA(L, M) ⊕ HomA(L, N).
We can also define submodules, which correspond to subrepresentations:
Definition 6.16. Let A be an algebra and let M be an A-module, with ρ : A → Hom(M, M) the structure map. A submodule of M is a vector subspace M′ ⊂ M such that ρ(a)(m′) ∈ M′ for all a ∈ A, m′ ∈ M′.

So a submodule is a vector subspace which is also stable under the action of A. The restricted action of A defines a structure map ρ′ : A → Hom(M′, M′).
Exercise 6.17. Let A be an algebra and let f : M → N be a homomorphism of A-modules.
Then ker(f ) ⊂ M and im(f ) ⊂ N are submodules of M and N , respectively.
We claim that if A = C[G], then a submodule M 0 ⊂ M is a subrepresentation of (M, ρM ).
Indeed,
ρM (g)(m0 ) = [g] · m0 ∈ M 0
for all g ∈ G and m0 ∈ M 0 , so M 0 is preserved by ρM (g) for all g ∈ G. Conversely, suppose
M′ ⊂ M is a subrepresentation. Then

    (Σ_{g∈G} λg[g]) · m′ = Σ_{g∈G} λg ρM(g)(m′) ∈ M′
for all m0 ∈ M 0 and all λg ∈ C. Since every element a ∈ A can be written as a linear
combination of {[g]}g∈G , this shows that M 0 ⊂ M is a submodule.
Definition 6.18. Let A be an algebra. An A-module M is said to be simple if it contains
no proper non-zero submodules.
Definition 6.19. Let A be an algebra and let M be an A-module. If m ∈ M , then the
submodule generated by m is the set A · m := {a · m|a ∈ A}. It is a submodule because
b · (a · m) = (ba) · m.
Lemma 6.20. Suppose M is a simple A-module and suppose m ∈ M is non-zero. Then
A · m = M . Conversely, suppose that for every non-zero element m ∈ M , A · m = M . Then
M is a simple A-module.
Proof. Since A · m ⊂ M is a submodule, it is either {0} or M. But 1A · m = m ≠ 0, so A · m ≠ {0}, so A · m = M.
Conversely, suppose we have a non-zero submodule M′ ⊂ M. Choose a non-zero m′ ∈ M′; then A · m′ is a submodule of M′, not just of M. By hypothesis, A · m′ = M, so M ⊂ M′ and hence M′ = M.
Example 6.21. Suppose M is 1-dimensional. Then M is simple because there are no proper
non-zero vector subspaces.
Example 6.22. Suppose A = C[G] for some finite group G. Then an A-module M is simple
if and only if the representation (M, ρM ) is irreducible.
Example 6.23. Let A = Matd (C) and let M = C⊕d . We make M into an A-module via the
usual action of matrices on column vectors. Thus, we get a structure map ρ : Matd (C) →
Hom(C⊕d , C⊕d ) by setting ρ(P )(v) := P · v (this is actually an isomorphism!).
Then M is a simple A-module. Indeed, suppose M′ ⊂ M is a non-zero A-submodule. After changing basis, we may assume that M′ contains the first standard basis vector v1. For each v ∈ C⊕d, there is a matrix Pv such that Pv · v1 = v. Thus, M = A · v1 ⊂ M′, so the only non-zero submodule of C⊕d is C⊕d itself.
Remark 6.24. Suppose A is an algebra which is finite-dimensional as a complex vector space, and suppose M is a simple A-module. Then for any non-zero m ∈ M, A · m = M, so the map

    A → M
    a ↦ a · m

is a surjective linear map of vector spaces. So M is also finite-dimensional over C.
Example 6.25. Here is an example of a submodule that does not come from a direct sum. Let A be the 2-dimensional algebra C[x]/x² with basis {1, x}, unit 1, and multiplication defined by x² = 0.

Let M be A, considered as a module over itself. Then A · x ⊂ M is a non-zero submodule. But

    (λ + µx) · x = λx + µx² = λx

so as a vector subspace of M, A · x = ⟨x⟩.
If M = A·x ⊕ M′, then M′ ⊂ M must be a 1-dimensional subspace such that M′ ∩ A·x = {0}. In other words, as a vector space M′ = ⟨λ + µx⟩ with λ ≠ 0. But (λ′ + µ′x) · (λ + µx) = λ′λ + (λ′µ + λµ′)x, which is not an element of ⟨λ + µx⟩ unless µ′ = 0. So no such M′ is a submodule, and there is no equivalent of Maschke's theorem for modules over a general algebra.
We can classify all simple A-modules, though. Suppose M is a simple A-module. Since A is commutative, x · M := {x · m | m ∈ M} is a submodule of M (since a · (x · m) = x · (a · m)). Since M is simple, either x · M = {0} or x · M = M. If the latter holds, then

    x² · M = x · (x · M) = x · M = M

But x² = 0, so x² · M = {0}, forcing M = {0}, a contradiction.

Thus, if M ≠ {0}, we must have x · m = 0 for all m ∈ M, and the action of A is given by (λ + µx) · m = λm. If dim M > 1, then M clearly has proper non-zero submodules. Thus, the simple A-modules are 1-dimensional vector spaces with x ∈ A acting as 0.
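A small numerical illustration of Example 6.25 (our own encoding, not part of the notes): in the basis {1, x} of A, multiplication by x is a nilpotent matrix, so its only eigenvalue is 0 and its only 1-dimensional invariant subspace is ker X = ⟨x⟩ = A · x, which therefore has no invariant complement.

```python
import numpy as np

# A = C[x]/x^2 acting on itself; in the basis {1, x}, multiplication by x
# sends 1 -> x and x -> 0:
X = np.array([[0.0, 0.0], [1.0, 0.0]])

# A 1-dimensional submodule is a line spanned by an eigenvector of X.
# X is nilpotent, so its only eigenvalue is 0, and its eigenvectors
# span ker X = <x>: the unique 1-dimensional submodule is A.x.
vals, vecs = np.linalg.eig(X)
print(vals)              # both eigenvalues are 0
print(X @ [0.0, 1.0])    # x itself is killed by X, i.e. x^2 = 0
```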
Now that we have defined submodules and simple modules, we can state a generalization of Schur's lemma:

Theorem 6.26.

1. Let A be an algebra and let M, N be simple A-modules. If f : M → N is a homomorphism of A-modules, then either f = 0 or f is an isomorphism.

2. Suppose A is an algebra which is finite-dimensional as a complex vector space. Let M be a simple A-module. Then any A-linear map f : M → M is multiplication by a scalar λ ∈ C.
Proof. The proofs are virtually the same as for G-linear maps of irreducible representations:
1. ker(f ) ⊂ M and im(f ) ⊂ N are submodules of M and N respectively, and since M
and N are assumed simple, they are both either 0 or all of M or N .
2. Since A is finite-dimensional, so is M (by Remark 6.24). Then f has an eigenvalue λ ∈ C, so λ1M − f : M → M is an A-linear map which is not an isomorphism. By part 1, λ1M − f = 0, which implies f = λ1M.
Remark 6.27. There was an early exercise to describe Hom(V⊕r, V⊕r)G for an irreducible representation (V, ρV) of G. We can now generalize this to state that for a finite-dimensional algebra A and a simple A-module M,

    HomA(M⊕r, M⊕r) ≅ Matr(C)

as algebras.
6.5 Semisimple modules and algebras
Definition 6.28. An A-module M is semisimple if it is isomorphic to the direct sum of
simple A-modules.
Maschke’s theorem states that if A = C[G], then every A-module is semisimple. But if
A = C[x]/x2 , then A is not semisimple as a module over itself.
Lemma 6.29. If M is a semisimple A-module, then its decomposition into simple modules
is unique, up to reordering.
Proof. The proof is identical to the proof of Theorem 3.1 — if M 0 is a simple A-module,
then dim HomA (M 0 , M ) is the multiplicity of M 0 in any decomposition of M .
Lemma 6.30. Let A be an algebra and let M be an A-module. Suppose that there is an isomorphism α : M ≅ N1 ⊕ · · · ⊕ Nr, where the Ni are simple A-modules. If L ⊂ M is a submodule, then there is a subset I ⊂ {1, . . . , r} such that α⁻¹(⊕i∈I Ni) is a complementary submodule to L.
Proof. Let I ⊂ {1, . . . , r} be a maximal subset such that α⁻¹(⊕i∈I Ni) ∩ L = {0}. To prove the lemma, it suffices to show that α⁻¹(⊕i∈I Ni) and L together span M. Let X denote their span.

If j ∈ I, then α⁻¹(Nj) ⊂ X, by construction. If j ∉ I, consider α⁻¹(Nj) ∩ X. Since Nj is simple, either α⁻¹(Nj) ∩ X = {0} or α⁻¹(Nj) ∩ X = α⁻¹(Nj). In the first case, α⁻¹(⊕i∈I∪{j} Ni) ∩ L = {0}, contradicting the maximality of I. Therefore α⁻¹(Nj) ∩ X = α⁻¹(Nj), i.e., α⁻¹(Nj) ⊂ X. Thus X contains every α⁻¹(Nj) as well as L, so X = M and we are done.
Proposition 6.31. Let A be an algebra and let M be a finite-dimensional semisimple A-module. Then any submodule M1 ⊂ M is also semisimple.

Proof. We may write α : M ≅ N1 ⊕ · · · ⊕ Nr, where the Ni are simple A-modules. By Lemma 6.30, M1 has a complementary submodule M2 ⊂ M with α(M2) = ⊕i∈I Ni for some I ⊂ {1, . . . , r}. Composing α with the projection onto ⊕i∉I Ni gives a map β : M → ⊕i∉I Ni with kernel M2. Since M1 ∩ M2 = {0} and dim M1 = dim M − dim M2, the restriction β|M1 : M1 → ⊕i∉I Ni is an isomorphism, so M1 is semisimple.
Corollary 6.32. A finite-dimensional A-module M is semisimple if and only if every submodule L ⊂ M has a complementary submodule L′ ⊂ M.

Proof. We have seen that if M is semisimple, then every submodule L ⊂ M has a complementary submodule L′ ⊂ M.
To show the converse, we proceed by induction on dim M. If dim M = 1, then M is simple and therefore semisimple. Suppose that dim M = d and we know the corollary when dim M ≤ d − 1. If M is simple, we are done. If not, M has a proper non-zero submodule M1 ⊂ M; by passing to a smaller submodule of M1 if necessary, we may assume that M1 is simple. There is a complementary submodule M′ ⊂ M such that M1 ⊕ M′ ≅ M, and 0 < dim M′ < dim M.

We claim that every submodule L ⊂ M′ admits a complementary submodule L′ ⊂ M′. Granting this, the inductive hypothesis implies that M′ is the direct sum of simple A-modules. Since M ≅ M′ ⊕ M1, we are done.
Now we prove the claim. Recall that there are A-linear projection and inclusion maps p1 : M → M1 and i1 : M1 → M. Now we observe that L ⊕ M1 ⊂ M, so there is a complementary submodule N ⊂ M. We define L′ := {n − i1(p1(n)) : n ∈ N}, and we claim that L′ is a complementary submodule of L ⊂ M′.

We check first that L′ ⊂ M′ and L′ is a submodule. But p1(n − i1(p1(n))) = 0, so L′ ⊂ M′. Furthermore, L′ is by definition the image of the map 1M − i1 ◦ p1 : N → M′, and since 1M, i1, and p1 are A-linear, so is (1M − i1 ◦ p1)|N, and its image is an A-module.

We next check that L′ ∩ L = {0}. But if m ∈ L′ ∩ L, then m = n − i1(p1(n)) for some n ∈ N; then i1(p1(n)) ∈ M1, so n = m + i1(p1(n)) ∈ N is the sum of an element of L and an element of M1. Since N is a complement of L ⊕ M1, this implies that n = 0, hence m = 0.

Finally, we check that the natural map L′ ⊕ L → M′ is an isomorphism. This map is injective, so it suffices to check that dim L′ + dim L = dim M′. Since dim N + dim L = dim M − dim M1 = dim M′, it further suffices to show that 1M − i1 ◦ p1 : N → M′ is injective, so that dim L′ = dim N. But if n = i1(p1(n)), then n ∈ M1. Since M1 ∩ N = {0}, the restriction of 1M − i1 ◦ p1 to N is injective and we are done.
Definition 6.33. Let A be an algebra. We say that A is semisimple if A is semisimple as a
module over itself.
Example 6.34.

1. Let A = C[G]. Then as a module over itself, A ≅ Vreg, which decomposes as the direct sum of simple A-modules (corresponding to the irreducible representations of G).

2. Let A = Matd(C). If we consider matrix multiplication, left multiplication by P ∈ A preserves columns of matrices. Thus, if we let Vi denote the subspace of matrices supported in the ith column, then A ≅ V1 ⊕ · · · ⊕ Vd as an A-module, and each Vi is isomorphic to the simple module C⊕d.

3. On the other hand, A = C[x]/x² is not semisimple, because A · x ⊂ A has no complementary submodule.

4. If A and B are semisimple algebras, then so is A ⊕ B.
Proposition 6.35. Let A be a finite-dimensional semisimple algebra. Then every finite-dimensional A-module M is semisimple.
Proof. Let m1, . . . , mr be a basis for M. Then m1, . . . , mr generate M as an A-module, in the sense that the map

    p : A⊕r → M
    (a1, . . . , ar) ↦ Σi ai · mi

is surjective. But A⊕r is isomorphic to the direct sum of simple A-modules, so it is semisimple. Therefore, ker(p) ⊂ A⊕r has a complementary submodule N ⊂ A⊕r, which is also semisimple. But then the restriction p|N : N → M is an isomorphism, and thus M is semisimple.
Theorem 6.36. Let A be a finite-dimensional semisimple algebra. Then there are finitely many isomorphism classes of simple A-modules. If M1, . . . , Mr is a complete list of non-isomorphic simple A-modules, then there is an isomorphism of A-modules A ≅ ⊕i Mi^⊕dim Mi.

Proof. The proof is virtually identical to the decomposition of the regular representation (which is the case A = C[G] for some finite group G). One checks that if A ≅ ⊕i Mi^⊕di, then di = dim HomA(A, Mi) = dim Mi.
The semisimple algebra Matd(C) is particularly simple. As a module over itself, it is the direct sum of d copies of C⊕d (viewed as the space of column vectors). Since every simple Matd(C)-module appears as a summand of Matd(C), V := C⊕d is the only simple Matd(C)-module (up to isomorphism). In addition, we can define an equivalence between finite-dimensional complex vector spaces and finite-dimensional Matd(C)-modules.

If M is a finite-dimensional Matd(C)-module, it is semisimple and therefore isomorphic (as a module) to V⊕s for some s. Then HomA(V, M) ≅ HomA(V, V)⊕s ≅ C⊕s.

On the other hand, if W is a finite-dimensional complex vector space, we can make the tensor product V ⊗ W into a Matd(C)-module by setting a · (v ⊗ w) = (a · v) ⊗ w.
This is an example of a concept called Morita equivalence.
Our next goal is to prove that every finite-dimensional semisimple algebra A is isomorphic
to a direct sum of matrix algebras.
Lemma 6.37. For any algebra A, Aop ≅ HomA(A, A).

Proof. If we view A as a left A-module over itself, right multiplication by a ∈ A is an A-linear map

    ma : A → A
    b ↦ ba

Moreover, every A-linear map A → A arises in this way: given f : A → A which is A-linear,

    f(b) = f(b · 1A) = b · f(1A)

so f is uniquely determined by its value at 1A.

Thus, we have constructed an isomorphism of vector spaces Aop → HomA(A, A), a ↦ ma. We need to check that it is an algebra homomorphism. But mop(a, a′) = a′a is sent to

    ma′a : b ↦ ba′a

which equals ma ◦ ma′, as desired.
Lemma 6.38. Let V be a finite-dimensional complex vector space. Then Hom(V, V)op is naturally isomorphic to Hom(V∗, V∗).

Proof. Given f ∈ Hom(V, V), recall that we have defined f∗ ∈ Hom(V∗, V∗) via f∗(g) := g ◦ f, where g ∈ V∗. Then f ↦ f∗ is an isomorphism of vector spaces, and since (f1 ◦ f2)∗(g) = g ◦ (f1 ◦ f2) = f2∗(f1∗(g)), we have (f1 ◦ f2)∗ = f2∗ ◦ f1∗, so f ↦ f∗ is an isomorphism of algebras Hom(V, V)op → Hom(V∗, V∗).
In particular, this lemma implies that the opposite of a matrix algebra is a matrix algebra,
though an isomorphism between the two depends on a choice of basis.
Theorem 6.39 (Artin–Wedderburn). Let A be a finite-dimensional semisimple algebra. Then A is isomorphic to a direct sum of matrix algebras.

Proof. Let M1, . . . , Mr be the simple A-modules. Then we have seen that A ≅ ⊕i Mi^⊕dim Mi. But then

    Aop ≅ HomA(A, A) ≅ ⊕i,j HomA(Mi^⊕dim Mi, Mj^⊕dim Mj) ≅ ⊕i HomA(Mi^⊕dim Mi, Mi^⊕dim Mi) ≅ ⊕i Matdim Mi(C)

using that HomA(Mi, Mj) = 0 for i ≠ j, by Schur's lemma. Since the opposite of a matrix algebra is a matrix algebra, this shows that A is isomorphic to the direct sum of matrix algebras.
In particular, group algebras are direct sums of matrix algebras. But our construction
depended on a choice of decomposition of A into simple modules.
We can actually be a bit more precise. Let M1 , . . . , Mr be a complete list of non-isomorphic
simple A-modules, and let ρi : A → Hom(Mi , Mi ) be the structure map.
Corollary 6.40. Let ρ := ⊕i ρi : A → ⊕i Hom(Mi, Mi) be the direct sum of the ρi. This is an isomorphism of algebras.

Proof. Both sides have the same dimension, namely Σi (dim Mi)². Thus, to show it is an algebra isomorphism, it suffices to show it is injective. If ρ(a) = 0 for a ∈ A, then ρi(a) = 0 for all i. But since A is isomorphic (as a module) to a direct sum of copies of the Mi, this implies that multiplication by a is the zero map on A. In particular, a = a · 1A = 0.
Example 6.41. Consider A = C[C2]. We know there are two simple A-modules up to isomorphism, both 1-dimensional, corresponding to the two irreducible representations of C2. If M1 corresponds to the trivial representation and M2 corresponds to the non-trivial representation, then ρ1 : C[C2] → Hom(M1, M1) ≅ C is given by sending [g] ↦ 1 and ρ2 : C[C2] → Hom(M2, M2) ≅ C is given by sending [g] ↦ −1. The direct sum of these two maps is

    ρ : C[C2] → C ⊕ C
    [e] ↦ (1, 1)
    [g] ↦ (1, −1)

which is exactly the isomorphism we wrote down at the beginning.
Example 6.42. Let A = C[S3]. There are three irreducible representations of S3: the trivial representation Vtriv, the sign representation Vsign, and a 2-dimensional representation W. If we choose an appropriate basis for W, the representation is given by

    (1 2 3) ↦ ( ω 0 ; 0 ω⁻¹ )      (2 3) ↦ ( 0 1 ; 1 0 )

(matrices written row by row), where ω = e^{2πi/3}. So the map ρ : C[S3] → C ⊕ C ⊕ Mat2(C) is given by

    (1 2 3) ↦ (1, 1, ( ω 0 ; 0 ω⁻¹ ))
    (2 3) ↦ (1, −1, ( 0 1 ; 1 0 ))
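As a sanity check on this example, one can verify numerically that the images of the two generators satisfy the defining relations r³ = s² = e and srs = r⁻¹ of S3, so ρ respects the group multiplication. A sketch (ours; packing the three blocks into one 4 × 4 block-diagonal matrix is our packaging choice, not the notes'):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

def as_block(t, s, m):
    # pack an element of C ⊕ C ⊕ Mat2(C) as a 4x4 block-diagonal matrix
    out = np.zeros((4, 4), dtype=complex)
    out[0, 0], out[1, 1], out[2:, 2:] = t, s, m
    return out

r = as_block(1, 1, np.array([[w, 0], [0, w**-1]]))   # rho((1 2 3))
s = as_block(1, -1, np.array([[0, 1], [1, 0]]))      # rho((2 3))

# Defining relations of S3: r^3 = s^2 = e and s r s = r^{-1}
I = np.eye(4)
print(np.allclose(np.linalg.matrix_power(r, 3), I),
      np.allclose(s @ s, I),
      np.allclose(s @ r @ s, np.linalg.inv(r)))
```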
6.6 The center of an algebra
Definition 6.43. Let A be an algebra. The center of A is defined to be
Z(A) := {z ∈ A : az = za for all a ∈ A}
The center is a commutative subalgebra of A.
Proposition 6.44. Let V be a finite-dimensional vector space and let A := Hom(V, V ).
Then Z(A) = {λ1V : λ ∈ C}.
Proof. If we choose a basis for V , A becomes a matrix algebra. Let eij be the matrix with
a 1 in the (i, j) spot and 0s elsewhere. By considering the matrices which commute with all
of the eij , the result follows.
Corollary 6.45. Let A be a finite-dimensional semisimple algebra. Then Z(A) is isomorphic
as an algebra to C⊕r , where r is the number of isomorphism classes of simple A-modules.
Proof. We have seen that A is isomorphic to ⊕i Hom(Mi, Mi), where M1, . . . , Mr is a complete list of non-isomorphic simple A-modules. It is clear that for any algebras B, B′, Z(B ⊕ B′) ≅ Z(B) ⊕ Z(B′), so Z(A) ≅ C⊕r.
Now we return to the setting of group algebras. If A = C[G] for a finite group G, this corollary shows that Z(A) ≅ C⊕r, where r is the number of isomorphism classes of irreducible representations of G.

Proposition 6.46. Let A = C[G]. Then Z(A) is spanned by the elements zC(g) := Σ_{h∈C(g)} [h], where C(g) denotes the conjugacy class of g.
Proof. We may write an element a ∈ C[G] as a = Σ_{g∈G} λg[g]. Then a ∈ Z(A) if and only if [g′]a = a[g′] for all g′ ∈ G. But this holds if and only if

    Σ_{g∈G} λg[g] = Σ_{g∈G} λg[g′g g′⁻¹]  ⟺  λg = λ_{g′⁻¹gg′} for all g, g′ ∈ G

But this is equivalent to

    a = λg1 zC(g1) + . . . + λgr zC(gr)

where g1, . . . , gr are representatives of the conjugacy classes of G.
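Proposition 6.46 can also be checked by direct computation. The sketch below (our own encoding of S3 elements as tuples) builds the class sum of the transpositions in C[S3] and verifies that it commutes with every basis element [g]:

```python
from itertools import permutations

# Elements of S3 as tuples p with p[i] = image of i; composition = p after q
def comp(p, q):
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))

def class_sum(g):
    # z_C(g) = sum of the conjugates h g h^{-1}, as a dict {element: coeff}
    inv = {p: tuple(sorted(range(3), key=lambda i: p[i])) for p in G}
    cls = {comp(comp(h, g), inv[h]) for h in G}
    return {p: 1 for p in cls}

def mult(a, b):
    # convolution product in C[S3]
    out = {}
    for g, lam in a.items():
        for h, mu in b.items():
            k = comp(g, h)
            out[k] = out.get(k, 0) + lam * mu
    return out

# The class sum of the transpositions commutes with every [g],
# so it lies in Z(C[S3]).
z = class_sum((1, 0, 2))
print(all(mult({g: 1}, z) == mult(z, {g: 1}) for g in G))  # True
```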
But now we have shown that dim Z(C[G]) is equal to the number of conjugacy classes of G.
We can finally prove that character tables are square:
Corollary 6.47. Let G be a finite group. Then the number of isomorphism classes of
irreducible representations of G is equal to the number of conjugacy classes of G.
Proof. Both of these numbers are equal to the dimension of Z(C[G]):
We have seen that $A \cong \oplus_{i=1}^r \operatorname{Hom}(M_i, M_i)$, where the $M_i$ run over distinct irreducible representations, so dim Z(C[G]) = r.
On the other hand, we have proved that the $z_{C(g)}$ span Z(C[G]), so dim Z(C[G]) is equal to the number of conjugacy classes of G.
7 Burnside's Theorem
This material is not examinable. We would like to finally apply representation theory to
prove something about finite groups.
Theorem 7.1 (Burnside). Let G be a finite group of size $p^a q^b$, where p, q are distinct primes and a, b ∈ N with a + b ≥ 2. Then G is not simple.
Before we start proving this, we make some definitions about algebraic numbers.
Definition 7.2. Let α ∈ C. We say that α is an algebraic number if there is a polynomial p(x) ∈ Q[x] with rational coefficients such that p(α) = 0. A polynomial p(x) is said to be monic if $p(x) = x^n + a_{n-1}x^{n-1} + \ldots + a_0$. We say that α ∈ C is an algebraic integer if there is a monic polynomial p(x) ∈ Z[x] with integer coefficients such that p(α) = 0.
If α ∈ C is an algebraic number, there is a monic polynomial p(x) ∈ Q[x] of minimal degree
having α as a root; p(x) is unique and irreducible, and we call it the minimal polynomial of
α. If α ∈ C is an algebraic integer, its minimal polynomial has coefficients in Z. The roots
of p(x) are called the conjugates of α.
Example 7.3.
1. The minimal polynomial of i is $x^2 + 1$, so i is actually an algebraic integer. The only other conjugate of i is −i.
2. More generally, if ζ is an nth root of 1, then ζ is a root of $x^n - 1 = 0$, so ζ is an algebraic integer. The minimal polynomial of ζ divides $x^n - 1$, so all of the conjugates of ζ are also nth roots of 1 (but not all nth roots of 1 are conjugates of ζ: the conjugates are the other roots of the minimal polynomial).
3. If n ∈ Z, n 6= 0, then 1/n is an algebraic number (because it is a root of nx − 1), but if |n| > 1 it is not an algebraic integer. In fact, if α = m/n ∈ Q with m and n coprime and n > 1, then α is a root of the polynomial nx − m, so x − m/n is its minimal polynomial. Since this doesn't have integer coefficients, α is not an algebraic integer.
4. If $\alpha = \frac{1 + \sqrt{5}}{2}$, then α is a root of $x^2 - x - 1$, so it is an algebraic integer.
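The examples above are easy to check numerically. Here is a quick floating-point sanity check (mine, not part of the notes) for an nth root of unity and the golden ratio:

```python
import cmath

# A 5th root of unity is a root of x^5 - 1, hence an algebraic integer.
zeta = cmath.exp(2j * cmath.pi / 5)
assert abs(zeta**5 - 1) < 1e-12

# The golden ratio (1 + sqrt(5))/2 is a root of x^2 - x - 1.
alpha = (1 + 5**0.5) / 2
assert abs(alpha**2 - alpha - 1) < 1e-12
```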
We will need a couple of facts about algebraic numbers:
• If α, β ∈ C are both algebraic numbers, then αβ and α + β are also algebraic numbers.
Similarly, if α, β ∈ C are both algebraic integers, then so are αβ and α + β.
• If α and β are algebraic numbers, then the conjugates of α + β are of the form α0 + β 0 ,
where α0 and β 0 are conjugates of α and β, respectively. If r ∈ Q, then the conjugates
of rα are of the form rα0 , where α0 is a conjugate of α.
Lemma 7.4. Let G be a finite group and let χ : G → C be a character of a representation of G. Then
1. For every g ∈ G, χ(g) is an algebraic integer.
2. If 0 < |χ(g)/χ(e)| < 1, then χ(g)/χ(e) is not an algebraic integer.
3. If χ is an irreducible character and g ∈ G, then $|C(g)| \frac{\chi(g)}{\chi(e)}$ is an algebraic integer.
Proof.
1. Recall that χ(g) is a sum of roots of unity. Indeed, if (V, ρV ) is a representation such that χ = χV , then for any g ∈ G we can choose a basis of V so that ρV (g) is diagonal, with diagonal entries $\lambda_{11}, \ldots, \lambda_{dd}$ (this follows by restricting ρV to the abelian group hgi ⊂ G, and using the corollary to Schur's lemma that representations of abelian groups are direct sums of 1-dimensional representations). Since g has finite order, so does ρV (g), and so there is some n ≥ 1 such that $\lambda_{ii}^n = 1$ for all i. Therefore, $\chi(g) = \lambda_{11} + \ldots + \lambda_{dd}$ is a sum of algebraic integers, so is an algebraic integer itself.
2. Suppose for contradiction that χ(g)/χ(e) is an algebraic integer. Writing $\chi(g) = \zeta_1 + \ldots + \zeta_d$ as a sum of d = χ(e) roots of unity, the conjugates of χ(g)/χ(e) all have the form $(\zeta_1' + \ldots + \zeta_d')/d$, which satisfies $|(\zeta_1' + \ldots + \zeta_d')/d| \le 1$. If $p(x) = x^n + \ldots + a_0 \in \mathbf{Z}[x]$ is the minimal polynomial of χ(g)/χ(e), then $\pm a_0$ is the product of χ(g)/χ(e) and all of its other conjugates, and so $|a_0| < 1$. But p(x) has integer coefficients, so $a_0 = 0$. Moreover, since p(x) is a minimal polynomial, it is irreducible, so p(x) = x, and therefore χ(g)/χ(e) = 0, contradicting our assumption that 0 < |χ(g)/χ(e)|.
3. Let (V, ρV ) be the irreducible representation such that χ = χV . Let $z = \sum_{h \in C(g)} [h] \in \mathbf{C}[G]$, so that z ∈ Z(C[G]) by Proposition 6.46. It follows that multiplication by z defines a module homomorphism mz : V → V ; by Schur's lemma this map is multiplication by a scalar λz ∈ C.
Since mz : V → V is a linear transformation, it has a trace. On the one hand, the trace is λz dim V = λz χ(e). On the other hand, $m_z = \sum_{h \in C(g)} \rho_V(h)$, so $\operatorname{Tr}(m_z) = \sum_{h \in C(g)} \chi_V(g) = |C(g)|\chi_V(g)$, and $\lambda_z = |C(g)| \frac{\chi(g)}{\chi(e)}$.
We claim that λz is an algebraic integer. Indeed, we can view mz : C[G] → C[G] as a module homomorphism from C[G] to itself, and λz is an eigenvalue. But with respect to the standard basis of C[G] (namely {[g]}g∈G ), mz has integer entries. Therefore, its characteristic polynomial is monic with integer coefficients, and since the eigenvalues of a matrix are roots of its characteristic polynomial, λz is the root of a monic polynomial with integer coefficients, and so is an algebraic integer.
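A concrete instance of part 3 (my own illustration, not from the notes): for the 2-dimensional irreducible character χ of S3, with χ(e) = 2, χ = −1 on the 3-cycles, and χ = 0 on the transpositions, the quantities |C(g)| χ(g)/χ(e) turn out to be honest integers (in general the lemma only guarantees algebraic integers).

```python
# Character values and conjugacy class sizes for the 2-dim irrep of S3.
chi = {"e": 2, "3-cycles": -1, "transpositions": 0}
class_size = {"e": 1, "3-cycles": 2, "transpositions": 3}

for c in chi:
    val = class_size[c] * chi[c] / chi["e"]
    assert val == int(val)  # |C(g)| chi(g)/chi(e) is integral in this example
```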
Corollary 7.5. Let (V, ρV ) be an irreducible representation of G. Then dim V divides |G|.
Proof. We want to show that |G|/χV (e) is an integer; it is a rational number, so it suffices to prove it is an algebraic integer. By row orthogonality,
$$\sum_{g \in G} \chi_V(g) \overline{\chi_V(g)} = |G|$$
so
$$\sum_{g \in G} \frac{\chi_V(g)}{\chi_V(e)} \overline{\chi_V(g)} = \frac{|G|}{\chi_V(e)}$$
But grouping the terms by conjugacy class, the left side is a sum of terms of the form $|C(g)| \frac{\chi_V(g)}{\chi_V(e)} \cdot \overline{\chi_V(g)}$; since these are algebraic integers, so is $\frac{|G|}{\chi_V(e)}$.
Theorem 7.6 (Burnside). Let G be a finite group which has a conjugacy class of size $p^s$, where p is a prime and s ≥ 1. Then G is not simple.
Proof. Choose some g ∈ G with $|C(g)| = p^s$. Since $p^s > 1$, G is not abelian and g 6= e. Let χ1 , . . . , χr be the irreducible characters of G (with χ1 the trivial character).
The column orthogonality relations (applied to e and g) imply that
$$1 + \sum_{i=2}^{r} \chi_i(g)\chi_i(e) = 0$$
so
$$\sum_{i=2}^{r} \frac{\chi_i(g)\chi_i(e)}{p} = -\frac{1}{p}$$
Since −1/p is not an algebraic integer and a sum of algebraic integers is an algebraic integer, there is some i such that χi (g)χi (e)/p is not an algebraic integer. In particular χi (g) 6= 0, and p ∤ χi (e) (otherwise χi (e)/p would be an integer and χi (g) · (χi (e)/p) would be an algebraic integer).
Since χi (e) and $|C(g)| = p^s$ are relatively prime, Bézout's identity implies that there are integers a, b such that aχi (e) + b|C(g)| = 1. Therefore,
$$a\chi_i(g) + b|C(g)|\frac{\chi_i(g)}{\chi_i(e)} = \frac{\chi_i(g)}{\chi_i(e)}$$
The left side is an algebraic integer (the second term by part 3 of Lemma 7.4), so the right side must be as well. By part 2 of the lemma, either χi (g) = 0 or |χi (g)| = χi (e); since χi (g) 6= 0, it follows that |χi (g)| = χi (e). But χi (g) is a sum of χi (e) roots of unity, and such a sum can have absolute value χi (e) only if all of the roots of unity coincide; so if (Vi , ρi ) is the irreducible representation corresponding to χi , then ρi (g) = λ · 1Vi for some λ ∈ C.
Now let H = {h ∈ G : ρi (h) = λ · 1Vi for some λ ∈ C, λ 6= 0}. Then H ⊂ G is a normal subgroup, and since g ∈ H and g 6= e, H 6= {e}.
Assume G is simple. Then H = G, and for every h ∈ G, ρi (h) is multiplication by a scalar. Since Vi is irreducible, this implies that dim Vi = 1. Now let H 0 ⊂ H = G be the kernel of ρi . Since χi 6= χ1 , Vi is not the trivial representation and so H 0 6= G. Since G is simple, H 0 = {e} and ρi is a faithful representation. But an injective homomorphism ρi : G → GL1 (C) to an abelian group implies that G is abelian, contradicting the fact that it has a conjugacy class of size bigger than 1. We conclude that G is not simple.
Now we can prove Burnside's $p^a q^b$ theorem.
Proof. Recall that if $|G| = p^a$ with a ≥ 2, then G is not simple. Indeed, if G is abelian, the result is clear. If not, the sizes of the conjugacy classes of G add up to |G| and are powers of p (since for any g ∈ G, the centralizer ZG (g) := {h ∈ G : hg = gh} ⊂ G is a subgroup and |C(g)| · |ZG (g)| = |G|); since {e} is a conjugacy class of size 1, and the sum of the sizes of the conjugacy classes is divisible by p, there must be at least p conjugacy classes of size 1. Thus, |Z(G)| ≥ p > 1. If we let H ⊂ Z(G) be a cyclic subgroup of order p, then {e} ( H ( G is a normal subgroup (H is central, and H 6= G because $|G| = p^a > p$), and so G is not simple.
Now we consider the general case. One of Sylow's theorems implies that there is a subgroup P ⊂ G with $|P| = p^a$. Choose g ∈ Z(P ); since Z(P ) 6= {e} (by the argument in the previous paragraph, or trivially if P is abelian), we may choose g 6= e. Then |C(g)| · |ZG (g)| = |G|. Since P ⊂ ZG (g), we have $p^a \mid |Z_G(g)|$, and so $|C(g)| = q^r$ for some r ≥ 0.
If r = 0, then g ∈ Z(G) so Z(G) 6= {e} and we can find a normal subgroup {e} ( H ( G
with H ⊂ Z(G), as above.
If r ≥ 1, Theorem 7.6 implies that G is not simple.
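To see the hypotheses in the smallest interesting case (my own illustration, not from the notes): |S3| = 2 · 3, and a brute-force computation of the conjugacy class sizes of S3 finds classes of prime-power size bigger than 1, so Theorem 7.6 already shows S3 is not simple, consistent with Burnside's theorem.

```python
from itertools import permutations

# Permutations of {0, 1, 2} as tuples; (p*q)(i) = p(q(i)).
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    q = [0] * 3
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

G = list(permutations(range(3)))

# Conjugacy class sizes: {e}, the 3-cycles, and the transpositions.
sizes = {len({compose(compose(h, g), inverse(h)) for h in G}) for g in G}
assert sizes == {1, 2, 3}  # 2 = 2^1 and 3 = 3^1 are prime-power class sizes > 1
```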