MATH 55A NOTES
JONATHAN WANG
Contents
1. Maps
2. Rings
3. Modules
4. Fields
5. Linear algebra
6. Dual vector spaces
7. Tensor products
8. Groups
8.1. Group actions
9. Tensor products and powers
9.1. Exterior product
9.2. k-Algebras
10. Field extensions
10.1. Fundamental theorem of algebra
11. Linear algebra revisited
11.1. Eigenvalues and characteristic polynomial
11.2. Generalized eigenvectors
11.3. Inner products
12. Group representations
1. Maps
A map f (morphism, function) from X to Y is an assignment of an element
f(x) ∈ Y to every x ∈ X.
Definition. f is called injective (monomorphism, 1-1) if x1 ≠ x2 ⇒ f(x1) ≠ f(x2).
f is called surjective (epimorphism, onto) if ∀y ∈ Y, ∃x ∈ X st y = f(x). f is
called bijective (isomorphism) if it is both injective and surjective.
Let Maps(X, Y ) denote the set of all maps f : X → Y .
Definition. X1 × X2 is the set whose elements are pairs (x1, x2).
Lemma. For a set Y, a function f : Y → X1 × X2 is the same as a pair of functions
f1 : Y → X1 and f2 : Y → X2.
Date: Fall 2007.
This shows that Maps(Y, X1 × X2) ≅ Maps(Y, X1) × Maps(Y, X2).
Given a set X we can construct a new set Subsets(X) whose elements are all
subsets of X (including ∅).
Lemma. There exists an isomorphism between Subsets(X) and Maps(X, {0, 1}).
Theorem. There is no isomorphism between X and Subsets(X) (Cantor diagonalization).
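Cantor's argument can be checked exhaustively on a small finite set. The following Python sketch (ours, not from the notes) verifies that for every map f : X → Subsets(X), the diagonal set D = {x : x ∉ f(x)} misses the image of f:

```python
from itertools import combinations, product

def diagonal(f, X):
    """Cantor's diagonal set D = {x in X : x not in f(x)}."""
    return frozenset(x for x in X if x not in f[x])

X = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# Exhaust all 8^3 maps f : X -> Subsets(X); the diagonal set is never
# in the image, so no f is surjective.
for assignment in product(subsets, repeat=len(X)):
    f = dict(zip(X, assignment))
    assert diagonal(f, X) not in set(assignment)
```

The assertion never fails: if f(x) = D for some x, then x ∈ D ⇔ x ∉ f(x) = D, a contradiction.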
A relation on X is a subset S ⊂ X × X: we specify which ordered pairs (x1 , x2 )
relate by x1 ∼ x2 .
Definition. An equivalence relation is a relation that satisfies reflexivity, symmetry, transitivity.
X/∼ will be defined as a particular subset of Subsets(X).
Definition. An element U ∈ Subsets(X) is called a cluster wrt ∼ if (1) U ≠ ∅, (2)
y, z ∈ U ⇒ y ∼ z, (3) y ∈ U, z ∼ y ⇒ z ∈ U.
Definition. X/∼ consists of the clusters. π : X → X/∼ is defined by π(x) = {x′ ∈ X |
x ∼ x′}.
2. Rings
A ring is a set R with two operations such that:
(1) + : R × R → R, with +(a, b) =: a + b
(2) a + b = b + a
(3) a + (b + c) = (a + b) + c
(4) ∃0 ∈ R st a + 0 = a
(5) ∀a ∃ −a st a + (−a) = 0; write a − b := a + (−b)
Items 1–5 are the definition of an abelian group.
(6) · : R × R → R, with ·(a, b) =: a · b
(7) a · (b · c) = (a · b) · c
(8) ∃1 ∈ R st 1 · a = a · 1 = a
(9) (a + b) · c = a · c + b · c and c · (a + b) = c · a + c · b
Example.
(1) R = Z
(2) R = Q, C, R
(3) Z/nZ where for a ∈ Z, ā = π(a). Check that ā+b̄ = π(a+b) and ā·b̄ = π(ab)
works.
(4) R[t] (polynomials). So far, all of these rings are commutative.
Definition. A ring R is commutative if a · b = b · a for all a, b.
(5) Matn×n(R) is non-commutative (for n ≥ 2).
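As a quick illustration of non-commutativity in Mat2×2(Z) (a sketch we add; `matmul` is our helper, not anything from the notes):

```python
# Two 2x2 integer matrices that do not commute.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

AB = matmul(A, B)  # [[1, 0], [0, 0]]
BA = matmul(B, A)  # [[0, 0], [0, 1]]
assert AB != BA
```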
Definition. A ring homomorphism ϕ : R1 → R2 is a map of sets st
• ϕ(a1 + a2) = ϕ(a1) + ϕ(a2)
• ϕ(a1 a2) = ϕ(a1)ϕ(a2)
• ϕ(1R1 ) = 1R2
The projection π : Z → Z/nZ is a ring homomorphism.
3. Modules
Fix a ring R. An R-module is a set M together with the following data:
(1) + : M × M → M, written +(m1, m2) =: m1 + m2, st
m1 + m2 = m2 + m1
m1 + (m2 + m3) = (m1 + m2) + m3
∃0 ∈ M st 0 + m = m
∀m ∈ M, ∃ −m ∈ M st m + (−m) = 0
(2) · : R × M → M, written a · m, st
1R · m = m
a1 · (a2 · m) = (a1 · a2) · m
(a1 + a2)m = a1 m + a2 m
a(m1 + m2) = am1 + am2
Example.
M = {0}
M = R using old addition and multiplication.
Given two modules M1 and M2, an R-module homomorphism f : M1 → M2 is a
map of sets st
(1) f(m′ + m″) = f(m′) + f(m″)
(2) f(am) = a f(m) ∀a ∈ R
If f : M1 → M2 and g : M2 → M3 are homomorphisms, then g ◦ f : M1 → M3 is a homomorphism:
(1) g(f (m1 + m2 )) = g(f (m1 ) + f (m2 )) = g(f (m1 )) + g(f (m2 ))
(2) g(f (am)) = g(af (m)) = ag(f (m))
Proposition. For any R-module M, ∃ a bijection between HomR(R, M) = {R-module homomorphisms R → M} and M: HomR(R, M) ≅ M.
Proof. Define Φ : M → HomR(R, M) by Φ(m)(a) = am; it is a map of R-modules
by the axioms. Define Ψ : HomR(R, M) → M by Ψ(f) = f(1R). Then
Ψ(Φ(m)) = Φ(m)(1R) = 1R m = m
Φ(Ψ(f))(a) = Φ(f(1R))(a) = a · f(1R) = f(a)
so Φ ◦ Ψ = id on HomR(R, M) and Ψ ◦ Φ = id on M.
Given R-modules M1 and M2 , we introduce a new R-module M1 ⊕ M2 as a set
M1 ⊕ M2 := M1 × M2 with
• (m′1, m′2) + (m″1, m″2) = (m′1 + m″1, m′2 + m″2)
• 0M1 ⊕M2 = (0M1 , 0M2 )
• a(m1 , m2 ) = (am1 , am2 )
Proposition. For any R-module N , ∃ the following two bijections
I. HomR (M1 ⊕ M2 , N ) ' HomR (M1 , N ) × Hom(M2 , N )
II. HomR (N, M1 ⊕ M2 ) ' HomR (N, M1 ) × HomR (N, M2 )
Proof. II. Define
φ : HomR(N, M1 ⊕ M2) → HomR(N, M1) × HomR(N, M2)
by φ(f) = (f1, f2) where f(n) = (f1(n), f2(n)); in the other direction, ψ(f1, f2)(n) =
(f1(n), f2(n)).
I. Define
φ : HomR(M1 ⊕ M2, N) → HomR(M1, N) × HomR(M2, N)
with φ(f) = (f1, f2), where f1(m1) = f(m1, 0M2) and f2(m2) = f(0M1, m2). In the other
direction, ψ(f1, f2)(m1, m2) = f1(m1) + f2(m2).
Z/nZ is a Z-module and C is an R-module.
If M1 = R⊕n and M2 = R⊕m, then HomR(M1, M2) ≅ Matm×n(R).
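That identification can be made concrete. In this sketch (our names, illustrating the case R = Z), a homomorphism and its matrix determine each other: the columns of the matrix are the images of the standard basis vectors.

```python
# Sketch of Hom_R(R^n, R^m) ≅ Mat_{m×n}(R) for R = Z.
def hom_from_matrix(M):
    # M is m x n; returns the homomorphism Z^n -> Z^m, v |-> M v
    return lambda v: [sum(M[i][j] * v[j] for j in range(len(v)))
                      for i in range(len(M))]

def matrix_from_hom(f, n):
    # column j is f(e_j)
    cols = [f([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

M = [[1, 2, 0],
     [0, 1, 3]]              # a map Z^3 -> Z^2
f = hom_from_matrix(M)
assert f([1, 1, 1]) == [3, 4]
assert matrix_from_hom(f, 3) == M   # the round trip recovers the matrix
```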
Lemma. ϕ is injective ⇔ ϕ−1 (0M2 ) = {0M1 }.
Proof. (⇐)
ϕ(m1) = ϕ(m′1) ⇒ ϕ(m1 − m′1) = 0M2 ⇒ m1 − m′1 = 0M1 ⇒ m1 = m′1
(⇒) is obvious.
Given m1, m2, . . . , mn ∈ M, we have ϕ : R⊕n → M given by ϕ(a1, . . . , an) = a1 m1 + a2 m2 + · · · +
an mn.
Definition. m1 , . . . , mn are linearly independent if this map is injective.
Lemma. m1, . . . , mn are linearly dependent if and only if ∃a1, . . . , an ∈ R, not all
0, st Σi ai mi = 0.
Proof. Suppose ∃a1, . . . , an not all 0 with Σi ai mi = 0. Then ϕ(a1, . . . , an) = ϕ(0, . . . , 0) = 0 ⇒
ϕ is non-injective. Conversely, if ϕ is non-injective, then ϕ(a) = ϕ(a′) for some a ≠ a′, and the
coordinates of a − a′ give such a1, . . . , an.
Definition. A homomorphism ϕ : M1 → M2 is surjective if it is surjective as a
map of sets.
HomZ(Z/nZ, Z) = {0}: if ϕ(1̄) = a, then na = nϕ(1̄) = ϕ(n · 1̄) = ϕ(0̄) = 0, so
a = 0.
Definition. m1, . . . , mn span M if the corresponding map ϕ : R⊕n → M is surjective.
Lemma. m1, . . . , mn span M iff ∀m ∈ M, ∃a1, . . . , an ∈ R st Σi ai mi = m.
Bijective homomorphisms are isomorphisms:
Lemma. If ϕ is a bijective homomorphism, ∃!ψ st ϕ ◦ ψ = id and ψ ◦ ϕ = id, obtained
by letting ψ be the inverse of ϕ as a map of sets (one checks ψ is a homomorphism).
Definition. m1 , . . . , mn are a basis for M if the corresponding map R⊕n → M is
an isomorphism.
Lemma. m1, . . . , mn is a basis iff ∀m ∈ M, ∃! a1, . . . , an ∈ R such that Σi ai mi = m.
M′ ⊂ M is a submodule if m1, m2 ∈ M′ ⇒ m1 + m2 ∈ M′ and a ∈ R, m ∈ M′ ⇒ a · m ∈ M′.
For ϕ : M1 → M2,
ker(ϕ) = {m ∈ M1 | ϕ(m) = 0}
Im(ϕ) = {m ∈ M2 | ∃m1 ∈ M1, ϕ(m1) = m}
Let M′ ⊂ M be a submodule. Introduce M/M′: define ∼ on M by m1 ∼ m2 if m1 − m2 ∈ M′.
Lemma. For π : M → M/∼, ∃! R-module structure on M/∼ for which π is a homomorphism.
We write M/M′ := M/∼.
Proposition. If f : M1 ↠ M2 is surjective, there exists an isomorphism M1/ker(f) ≅ M2.
Proposition. Every R-module M is isomorphic to coker(f) for some f : R⊕I → R⊕J.
Proof. There is a surjection g : R⊕M ↠ M. Let K = ker g ⊂ R⊕M and define f as the
composition R⊕K ↠ K ↪ R⊕M. Then coker(f) = R⊕M / Im f = R⊕M / ker g ≅ M.
4. Fields
Let R be a commutative ring.
Definition. R is called a field if ∀a ≠ 0, ∃a⁻¹ st a⁻¹ · a = 1R.
Lemma. If k is a field and a, b ≠ 0, then a · b ≠ 0.
Proof. If ab = 0, then 1 = (ab)(a⁻¹b⁻¹) = 0, a contradiction.
Lemma. If p is a prime then Z/pZ = Fp is a field.
Proof 1. ∀x ∈ Fp − {0}, the map Fp − {0} → Fp − {0}, y ↦ xy is injective:
y1 x = y2 x ⇒ (y1 − y2)x = 0, and since p is prime and p ∤ x, this forces y1 = y2.
Since Fp − {0} is finite, the map is therefore surjective, so some y satisfies xy = 1.
Proof 2. Write x = n̄, n ∈ Z, x ≠ 0. Since gcd(n, p) = 1, ∃y, m with yn + mp = 1 ⇒ ȳn̄ = 1̄.
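Proof 2 is effective: the extended Euclidean algorithm produces the Bezout coefficients y, m with yn + mp = 1, hence inverses in Fp. A small sketch (function names are ours, not from the notes):

```python
def ext_gcd(a, b):
    # returns (g, y, m) with y*a + m*b = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, y, m = ext_gcd(b, a % b)
    return g, m, y - (a // b) * m

def inverse_mod(n, p):
    g, y, _ = ext_gcd(n % p, p)
    assert g == 1          # holds whenever p is prime and p does not divide n
    return y % p

p = 7
for x in range(1, p):
    assert (x * inverse_mod(x, p)) % p == 1   # every nonzero class is invertible
```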
R[t]/(t² + 1) ≅ C.
Definition. A polynomial p(t) is irreducible if there do not exist p1, p2 with
deg p1, deg p2 < deg p st p(t) = p1(t)p2(t).
R[t]/(p(t)) is a field iff p(t) is irreducible.
Definition. I ⊂ R is a left ideal if I is additive subgroup of R and rx ∈ I for all
x ∈ I, r ∈ R.
A field k has characteristic 0 if the homomorphism φ : Z → k is injective. Otherwise
k has positive characteristic.
Consider ker φ. It is an ideal in Z. Any ideal I ⊂ Z has the form nZ for some
n ∈ Z:
Proof. Let n be the smallest positive element of I (if I ≠ 0). For m ∈ I, write m =
nm′ + m″ with 0 ≤ m″ < n; then m″ = m − nm′ ∈ I, and m″ < n forces m″ = 0, else it
contradicts minimality of n.
So ker φ = nZ, and the induced map Z/ker φ = Z/nZ → k is injective.
Assume that k does not have characteristic 0. Claim: n is prime.
Proof. Suppose n = ab with 1 < a, b < n. Then φ(a)φ(b) = φ(n) = 0 in the field k,
so φ(a) = 0 or φ(b) = 0, contradicting injectivity of Z/nZ → k.
So if φ : Z → k is non-injective, then Fp → k. p is called the characteristic.
Example.
(1) k a field and t an indeterminate. k(t) := {p(t)/q(t)}, where p1(t)/q1(t) = p2(t)/q2(t) if
p1(t)q2(t) = q1(t)p2(t). Fp(t) is another field of characteristic p.
(2) Q[√2] := ∩_{k⊂C, √2∈k} k. Claim: Q ⊂ Q[√2]. Indeed, for any such k: 1 ∈ k, so
1 + · · · + 1 = n ∈ k, hence 1/n ∈ k, so Q ⊂ k.
5. Linear algebra
For R = k (k is a field), we call R-modules k-vector spaces.
Definition. A vector space V is finite dimensional if ∃ a surjection k^n ↠ V for some
n ∈ N. We write k⊕n := k^n.
Equivalently,
Lemma. V is finite dimensional if ∃ finitely many vectors v1 , . . . , vn ∈ V that span
it.
Proposition. Let V be fin-dim. Then ∃ an isomorphism k^n ≅ V.
Proof. Take the minimal n ∈ N st a surjection ϕ : k^n ↠ V exists.
Claim: ϕ is an isomorphism. Need to show that it is injective.
Suppose ϕ(a1, . . . , an) = Σ ai vi = 0, where vi = ϕ(0, . . . , 1, . . . , 0). Assume a1 ≠ 0.
Then a1 v1 = −Σ_{i=2}^n ai vi ⇒ v1 = −Σ_{i=2}^n bi vi where bi = ai/a1. Then ϕ factors
through a surjection ψ : k^{n−1} ↠ V, contradicting minimality of n.
Corollary. Let V be a fin-dim vector space, and let V′ ⊂ V be a subspace. Then
∃W ⊂ V such that V′ ⊕ W ≅ V.
Proof. Consider the short exact sequence 0 → V′ → V → V/V′ → 0, where π : V → V/V′;
we show it can be split. Reformulation: π admits a right inverse j (proof from homework
problem 3, applied to surjections k^m ↠ V′ and k^n ↠ V). Since V is fin-dim, V/V′ is
finite dimensional, so by the proposition V/V′ ≅ k^n. Set W = Im(j).
Since π ◦ j = id, j is injective, so j : V/V′ ≅ W ⊂ V.
Claim: the map j ⊕ i : V/V′ ⊕ V′ → V, with i : V′ ⊂ V the inclusion, is an isomorphism.
Theorem. Let f : k^m → k^n be an injective map. Then m ≤ n.
Proof. By induction: suppose the statement is true for injective maps k^{m′} → k^{n′}
with n′ < n, i.e. m′ ≤ n′. We start with f and produce f′ : k^{m−1} → k^{n−1} st if f
is injective then f′ is also injective.
Let f(1, 0, . . . , 0) = v ∈ k^n and span(v) = {av, a ∈ k} ⊂ k^n, with v = (a1, . . . , an).
Assume a1 ≠ 0, and regard k^{n−1} ⊂ k^n as the vectors with first coordinate 0.
Claim: the map k^{n−1} ⊕ (span(v) ≅ k) → k^n is an isomorphism.
Injectivity: (0, b2, . . . , bn) + b(a1, . . . , an) = 0 ⇒ ba1 = 0 ⇒ b = 0 ⇒ each bi = 0.
Surjectivity: to solve (0, b2, . . . , bn) + b(a1, . . . , an) = (c1, . . . , cn), take b = c1/a1 and
choose b2, . . . , bn so the other coordinates match.
So f becomes a map k ⊕ k^{m−1} → span(v) ⊕ k^{n−1}. The restriction-then-projection
f′ : k^{m−1} → k^{n−1} is injective: suppose ∃w ∈ k^{m−1} st f′(w) = 0. Write
f(0, w) = (g(w), f′(w)), where g(w) = bv = b f(1, 0, . . . , 0) for some b ∈ k. Then
f(−b, w) = −b f(1, 0, . . . , 0) + f(0, w) = −bv + g(w) + f′(w) = 0, so (−b, w) = 0 and w = 0.
Theorem. If f : k^n ↠ k^m is surjective, then m ≤ n.
Proof. f admits a right inverse g (choose a preimage of each basis vector). f ◦ g = id on
k^m ⇒ g is injective, so m ≤ n by the previous theorem.
Corollary. If k^m ≅ k^n then m = n.
Definition. Let V be a fin. dim. vector space (it is isomorphic to k n ). Then
dim V := n is a well-defined dimension.
Corollary. If dim V = n then any collection of more than n vectors is linearly
dependent.
Proof. Given m linearly independent vectors, k^m ↪ V ≅ k^n ⇒ m ≤ n.
Proposition. If V is fin. dim., it admits a basis.
Lemma. V fin. dim., V 0 ⊂ V ⇒ V 0 fin. dim.
Corollary. Every lin. independent collection of vectors can be completed to a basis.
Proof. Let V′ be the span of the collection. Since V ≅ V′ ⊕ V/V′, take a basis for V/V′ and add it to the collection.
Theorem. k^m ↪ k^n ⇒ m ≤ n.
Proof. Suppose the statement is true for maps k^{m′} → k^{n′}, n′ < n. Let vi =
(ai1, . . . , ain) for i = 1, . . . , m. Doing Gaussian elimination on the matrix with rows vi
and inducting proves the theorem.
Lemma. Let dim V = d and v1, . . . , vm ∈ V. The following are equivalent:
(1) v1, . . . , vm are lin. ind. and m ≥ d
(2) v1, . . . , vm span and m ≤ d
(3) v1, . . . , vm form a basis (so m = d)
Let R be commutative.
Lemma. If M1 , M2 are R-modules, HomR (M1 , M2 ) has the structure of R-module.
Proof.
(ϕ + ψ)(m1) = ϕ(m1) + ψ(m1)
(a · ϕ)(m1) = a · ϕ(m1) = ϕ(a · m1)
Must check that a · ϕ is still R-linear:
(a · ϕ)(a′ · m) = ϕ(a · a′ · m) = (a · a′)ϕ(m) = (a′ · a)ϕ(m) = a′ · (a · ϕ)(m)
As before, we have
(1) HomR(N, M1 ⊕ M2) ≅ HomR(N, M1) ⊕ HomR(N, M2)
(2) HomR(M1 ⊕ M2, N) ≅ HomR(M1, N) ⊕ HomR(M2, N)
(3) HomR(R, N) ≅ N
Proof. Define Φ : N → HomR(R, N) by Φ(n)(a) = a · n. This is R-linear:
Φ(b · n)(a) = (ab)n = (ba)n = b · Φ(n)(a) ⇒ Φ(bn) = b · Φ(n)
6. Dual vector spaces
For a vector space V, V* := Homk(V, k) is the dual.
Lemma. Let V be fin. dim., then V ∗ is of the same dimension.
Proof. Case 1: V = k. Then Hom(k, k) ≅ k by (3) above.
Case 2: V ≅ k^n. Then (k^n)* ≅ (k*)⊕n ≅ k^n.
Definition. If v1, . . . , vd is a basis of V, then the vi* ∈ V* defined by
vi*(vj) = 0 if j ≠ i, and vi*(vi) = 1,
form a basis of V*. Every ϕ ∈ V* has ϕ = Σ ai vi*, where ai = ϕ(vi).
A map T : V1 → V2 induces T* : V2* → V1*. Given ϕ ∈ V2*, define [T*(ϕ)](v1) = ϕ(T v1) ∈ k
for v1 ∈ V1. Also check that
T*(ϕ)(a · v1) = ϕ(T(a v1)) = aϕ(T v1) = a · T*(ϕ)(v1)
The map T ↦ T* from Homk(V1, V2) → Homk(V2*, V1*) is actually a homomorphism of k-modules (homework).
Φ : V → (V ∗ )∗ defined by Φ(v)(ϕ) = ϕ(v) is k-linear (check).
Lemma. If V, W fin. dim., then Homk(V, W) is fin. dim.
Proof 1. Hom(k^n, k^m) splits: by (1), (2), (3) above, Hom(k^n, k^m) ≅ ⊕_{i,j} Hom(k, k) ≅ k^{nm}.
Proof 2. Let v1, . . . , vn be a basis of V and w1, . . . , wm a basis of W. Then the maps
Tij with Tij(vi) = wj and Tij(vi′) = 0 for i′ ≠ i span Homk(V, W).
7. Tensor products
A bilinear pairing U, V → W is a map B : U × V → W with
B(u + u′, v) = B(u, v) + B(u′, v)
B(u, v + v′) = B(u, v) + B(u, v′)
B(a · u, v) = a · B(u, v) = B(u, a · v)
Example.
(1) V, V ∗ → k by v, ϕ 7→ ϕ(v)
(2) U, Hom(U, V ) → V by u, T 7→ T (u)
MATH 55A NOTES
9
(3) Hom(U, V), Hom(V, W) → Hom(U, W) by T, S ↦ S ◦ T
(4) V, V → k: scalar products (·, ·)
U ⊗ V is the tensor product of U, V if we are given Buniv : U, V → U ⊗ V with
the universal property: ∀W, the assignment f ∈ Hom(U ⊗ V, W) ↦ f ◦ Buniv is a
bijection between Hom(U ⊗ V, W) and Bil(U, V → W). That is, every bilinear
B : U, V → W factors uniquely as B = f ◦ Buniv.
The tensor product is unique: suppose (U ⊗ V, Buniv) and ((U ⊗ V)~, B~univ) both have
the universal property.
Claim: ∃! maps f : (U ⊗ V)~ → U ⊗ V and g : U ⊗ V → (U ⊗ V)~ st f ◦ g = id, g ◦ f = id,
and f, g commute with the universal bilinear maps.
Proof. By the universal property of U ⊗ V, ∃!g st B~univ = g ◦ Buniv. Similarly ∃!f
st Buniv = f ◦ B~univ. Since Buniv = (f ◦ g) ◦ Buniv and Buniv = id ◦ Buniv, uniqueness
gives f ◦ g = id; likewise g ◦ f = id.
This reasoning is called the Yoneda Lemma.
Define Buniv (u, v) =: u ⊗ v.
Lemma. (u′ + u″) ⊗ v = u′ ⊗ v + u″ ⊗ v
Proof. Follows from bilinearity of Buniv .
If we take V = k and Buniv : U, k → U defined by Buniv(u, a) = a · u, then
Lemma. U ⊗ k = U with the Buniv above satisfies the universal property.
Proof. Need to show that the assignment f ∈ Hom(U, W) ↦ f ◦ Buniv is a bijection
Hom(U, W) ↔ Bil(U, k → W).
Given B : U, k → W, define f(u) = B(u, 1). We first show f ↦ B ↦ f′ returns f:
f′(u) = B(u, 1) = f ◦ Buniv(u, 1) = f(u).
Now for B ↦ f ↦ B′: a · B′(u, 1) = B′(u, a) = f ◦ Buniv(u, a) = f(a · u) =
B(a · u, 1) = a · B(u, 1). So we have a bijection, and U ⊗ k = U.
Lemma. If (U ⊗ V1, B¹univ) and (U ⊗ V2, B²univ) exist, then U ⊗ (V1 ⊕ V2) exists
and is isomorphic to (U ⊗ V1) ⊕ (U ⊗ V2), with
Buniv : U × (V1 ⊕ V2) → U ⊗ (V1 ⊕ V2) = (U ⊗ V1) ⊕ (U ⊗ V2)
defined by Buniv(u, (v1, v2)) = (B¹univ(u, v1), B²univ(u, v2)).
Proof. For all W, we must match bilinear maps B : U, V1 ⊕ V2 → W with maps
f : (U ⊗ V1) ⊕ (U ⊗ V2) → W.
Given B, we have B1 (u, v1 ) = B(u, (v1 , 0)) and B2 (u, v2 ) = B(u, (0, v2 )). Then
f1 ↔ B1 and f2 ↔ B2 , f1 : U ⊗ V1 → W, f2 : U ⊗ V2 → W .
Corollary. U ⊗ V exists for any two fin. dim. vector spaces U, V.
Proof. V = k ⊕ · · · ⊕ k and U ⊗ k exists. So if U ≅ k^n, V ≅ k^m, then
U ⊗ V ≅ ⊕_{1≤i≤n, 1≤j≤m} k
If u1, . . . , un ∈ U and v1, . . . , vm ∈ V are bases, then the corresponding basis of U ⊗ V is
{ui ⊗ vj}.
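In these coordinates a pure tensor u ⊗ v is the outer product of the coordinate vectors, an n × m array. A sketch (our helper names, not from the notes):

```python
# Coordinates of the pure tensor u ⊗ v in the basis {u_i ⊗ v_j}: the outer product.
def pure_tensor(u, v):
    return [[ui * vj for vj in v] for ui in u]

u = [1, 2]        # coordinates in U ≅ k^2
v = [3, 0, 5]     # coordinates in V ≅ k^3
t = pure_tensor(u, v)
assert t == [[3, 0, 5], [6, 0, 10]]

# Bilinearity of B_univ: (u + u') ⊗ v = u ⊗ v + u' ⊗ v, entrywise.
u2 = [4, -1]
lhs = pure_tensor([a + b for a, b in zip(u, u2)], v)
rhs = [[x + y for x, y in zip(r1, r2)]
       for r1, r2 in zip(pure_tensor(u, v), pure_tensor(u2, v))]
assert lhs == rhs
```

Note that a general element of U ⊗ V is a sum of such arrays, not necessarily a single outer product.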
If we are given U1 ⊗ V1, U2 ⊗ V2 and maps f : U1 → U2, g : V1 → V2, can we
construct a map f ⊗ g between U1 ⊗ V1 and U2 ⊗ V2? Apply the universal property
of U1 ⊗ V1 with W = U2 ⊗ V2 and the bilinear map
B(u1, v1) := B²univ(f(u1), g(v1))
where B²univ : U2, V2 → U2 ⊗ V2.
Lemma. (f ⊗ g)(u1 ⊗ v1) = f(u1) ⊗ g(v1).
Proof. B²univ(f(u1), g(v1)) = f(u1) ⊗ g(v1).
Given that U ⊗ V exists, we define a map Hom(U ⊗ V, W) → Hom(U, Hom(V, W)).
Given f : U ⊗ V → W with corresponding bilinear B = f ◦ Buniv, define ϕ : U → Hom(V, W)
by ϕ(u) = B(u, ·). Linearity in the 2nd argument shows that ϕ(u) is k-linear; linearity in
the 1st argument shows that ϕ is k-linear. The map f ↦ ϕ so defined is also k-linear.
Define T : U* ⊗ V → Homk(U, V). Need a bilinear B : U*, V → Hom(U, V). Define
B(ϕ, v)(u) = ϕ(u) · v.
Define U* ⊗ V* → (U ⊗ V)*. Need B : U*, V* → (U ⊗ V)*: let B(ϕ, ψ) ∈ (U ⊗ V)* be
the functional induced by the bilinear map (u, v) ↦ ϕ(u) · ψ(v).
Lemma. If U ⊗ V exists, then V ⊗ U exists and ∃! isomorphism S with S(u ⊗ v) =
v ⊗ u.
Proof. Existence: let V ⊗ U := U ⊗ V as a space and define B^{V⊗U}univ : V, U → U ⊗ V by
B^{V⊗U}univ(v, u) = B^{U⊗V}univ(u, v)
Uniqueness of the isomorphism follows from the previous lemma.
Proposition. U ⊗ V is equal to the span of the pure tensors (finite sums of elements u ⊗ v).
Proof. Let W = (U ⊗ V)/(span of pure tensors), with projection π : U ⊗ V → W. Then
both π and the zero map 0 : U ⊗ V → W correspond to the zero bilinear map U, V → W,
so π = 0 by the universal property, which implies U ⊗ V equals the span of the pure
tensors.
8. Groups
A set G is a group if
(1) there is an operation · : G × G → G
(2) 1 ∈ G
(3) (g1 g2)g3 = g1(g2 g3)
(4) 1 · g = g · 1 = g
(5) ∀g ∃g⁻¹ st g g⁻¹ = g⁻¹ g = 1
If g1 g2 = g2 g1 ∀g1 , g2 ∈ G we say G is abelian.
Example.
(1) X is a set. Aut(X) = {ϕ : X → X | ϕ is isomorphism}
(2) X = {1, . . . , n}. Aut(X) = Sn
(3) Z, Z/nZ under +
(4) k ∗ := k − {0} under ×
(5) V vector space. GL(V ) = {T : V → V | T isomorphism}
(6) SL(V ) = {g ∈ Hom(V, V ) | det g = 1}
(7) O(n) = {T ∈ Matn×n : T Tᵗ = Tᵗ T = Id}
(8) AutZ (Z⊕n ) = GL(n, Z)
A group homomorphism ϕ : G1 → G2 is a map of sets with ϕ(g1 g2) = ϕ(g1)ϕ(g2).
Example.
(1) Lemma. A homomorphism ϕ : Z/nZ → G is the same as an element g ∈ G with g^n = 1.
Proof. (⇒) g = ϕ(1̄).
(⇐) Define ϕ̃ : Z → G by ϕ̃(i) = g^i, which induces a map ϕ : Z/nZ → G.
(2) (C, +) → (C − 0, ·) by z ↦ exp(2πiz)
(3) ϕ : Sn → GL(n), where ϕ(σ) ∈ Matn×n for σ ∈ Sn is defined by (ϕ(σ))(ei) = e_{σ(i)}.
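Example (3) can be checked directly: (ϕ(σ))(ei) = e_{σ(i)} means column i of ϕ(σ) is e_{σ(i)}, and then ϕ(στ) = ϕ(σ)ϕ(τ). A sketch with 0-indexed permutations (our conventions, not the notes'):

```python
# phi(sigma)[r][c] = 1 iff r = sigma(c), so column c of phi(sigma) is e_{sigma(c)}.
def perm_matrix(sigma):
    n = len(sigma)
    return [[1 if r == sigma[c] else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(sigma, tau):       # (sigma ∘ tau)(i) = sigma(tau(i))
    return [sigma[t] for t in tau]

sigma, tau = [1, 2, 0], [0, 2, 1]   # two elements of S_3, 0-indexed
assert matmul(perm_matrix(sigma), perm_matrix(tau)) == perm_matrix(compose(sigma, tau))
```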
Definition. A subgroup is a subset with
• h1 , h 2 ∈ H ⇒ h1 h2 ∈ H
• 1∈H
• ∀h ∈ H, h−1 ∈ H
For ϕ : G1 → G2, ker ϕ = {g ∈ G1 | ϕ(g) = 1G2}.
Lemma. ker ϕ is subgroup of G1 and Im ϕ is subgroup of G2 .
For H ⊂ G, define G/H := G/∼, where g1 ∼ g2 if ∃h ∈ H st g1 = g2h, i.e.
g2⁻¹g1 ∈ H. So we have π : G → G/H with π(g) = ḡ. Can we define a group structure
on G/H so that π is a homomorphism? We would need
ḡ1 ḡ2 = π(g1 g2)
to be well-defined. Suppose g2′, g2″ with π(g2′) = π(g2″). Then g2″ = g2′h ⇒ g1 g2″ =
g1 g2′h ⇒ π(g1 g2′) = π(g1 g2″), so the right factor causes no trouble. But in order for
π(g1′ g2) = π(g1″ g2) when g1″ = g1′h, we need g1″ g2 = g1′ g2 h′ for some h′ ∈ H. We have
g1″ g2 = g1′ h g2, so for g1′ h g2 = g1′ g2 h′ we need
h g2 = g2 h′ ⇒ h′ = g2⁻¹ h g2 ∈ H
Definition. H ⊂ G is normal if ∀g ∈ G, h ∈ H, g −1 hg ∈ H.
Proposition. ∃ a group structure on G/H with π being a homomorphism if and
only if H is normal.
Proof. Suppose H is normal. Then by the discussion above we are good. If H were
not normal, we could find g2 ∈ G and h ∈ H st g2⁻¹ h g2 ∉ H, and the multiplication
above would not be well-defined.
Lemma. If G is finite, then |G/H| is finite and |G| = |H| · |G/H|.
Proof. For any partition π : X → X/∼,
|X| = Σ_{x̄∈X/∼} |π⁻¹(x̄)|
Observe the sublemma:
Lemma. ∀ḡ ∈ G/H, |π⁻¹(ḡ)| = |H|.
Proof. First, π⁻¹(π(1)) = H. For any g, we construct a bijection H ≅ π⁻¹(π(g))
by h ↦ gh.
Corollary. If G is finite then |G| is divisible by |H|.
Definition. The order of g ∈ G is infinite if there does not exist n ≥ 1 st g^n = 1.
Otherwise it is the minimal integer n ≥ 1 st g^n = 1.
Lemma. In a finite group, the order of each element divides the order of the group.
Proof. Since G is finite, ∃n > m st g^n = g^m ⇒ g^{n−m} = 1, so the order is finite.
Consider the set {1, g, . . . , g^{n−1}}, where n is the order of g; it is a subgroup of size n,
so n divides |G| by the corollary above.
Corollary (Fermat). For a prime p and any integer n not divisible by p, n^{p−1} ≡ 1 in F*p.
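The corollary is easy to spot-check with modular exponentiation (a check we add; `pow(n, e, p)` is Python's built-in modular power):

```python
# Fermat: n^(p-1) ≡ 1 (mod p) for every nonzero class n of F_p.
for p in [2, 3, 5, 7, 11, 13]:
    for n in range(1, p):
        assert pow(n, p - 1, p) == 1
```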
8.1. Group actions.
Given a group G and a set X, an action G ↷ X is a map G × X → X st
• g1(g2 x) = (g1 g2)x
• 1G x = x
Lemma. We have an action of G on X if and only if G → Aut(X) (as groups).
Proof. Given action, we have ϕ(g) = g(·). Given homomorphism ϕ, action gx =
(ϕ(g))x.
(g1 g2 )x = (ϕ(g1 g2 ))x = (ϕ(g1 )ϕ(g2 ))x = g1 (g2 x)
G ↷ G by (left) multiplication: g · g1 = gg1.
Let X1, X2 be two sets acted on by G. A G-map between them is a map of sets
f : X1 → X2 with f(g x1) = g f(x1). The set of all G-maps is HomG(X1, X2).
Proposition. There ∃! action of G on G/H st π : G → G/H is a G-map.
Proof. We must set g1 · π(g) = π(g1 g). Suppose g′ and g″ are such that π(g′) = π(g″).
Need to show that π(g1 g′) = π(g1 g″):
g″ = g′h ⇒ g1 g″ = g1 g′h ⇒ π(g1 g″) = π(g1 g′)
Universal property: Let G be a group, H subgroup, and X a set acted on by G.
Proposition. ∃ bijection between the sets
HomG (G/H, X) ' {x ∈ X | hx = x ∀h ∈ H}
Proof. (⇒) Given f : G/H → X, let x := f(π(1)). Then for h ∈ H,
hx = f(hπ(1)) = f(π(h)) = f(π(1)) = x
(⇐) Given such an x ∈ X, set f(ḡ) = gx. This is well-defined:
π(g′) = π(g″) ⇒ g″ = g′h ⇒ g″x = g′hx = g′x
Proposition (Cayley). Every finite group is isomorphic to a subgroup of Sn for some n.
Proof. Set n = |G|. We want ϕ : G ↪ Sn = Aut(X), where X = G. Take the ϕ
associated with the left-multiplication action G ↷ X. Then ϕ(g) ≠ IdX if g ≠ 1, since
(ϕ(g))(1) = g · 1 = g.
Define an equiv ∼ on X to say that x1 ∼ x2 if ∃g ∈ G st x2 = gx1 . An orbit
is an equiv class wrt ∼. The orbit O ⊂ X is a subset where ∀x1 , x2 ∈ O, ∃g st
gx1 = x2 . We say the action is transitive if X is just one orbit.
Example. For H ⊂ G we can look at G/H where ḡ = g · 1̄, which shows the action
is transitive.
Lemma. Every set with a transitive action of G is isomorphic to G/H for some
subgroup H.
Proof. Pick an element x ∈ X. Consider StabG(x) ⊂ G. Define f : G/StabG(x) → X
by f(ḡ) = g · x (well-defined by definition of the stabilizer). Transitivity of the action
implies f is surjective.
f(ḡ1) = f(ḡ2) ⇒ g1 x = g2 x ⇒ g2⁻¹g1 x = x ⇒ g2⁻¹g1 ∈ StabG(x)
so f is injective, hence an isomorphism.
This lemma implies that every orbit is isomorphic to G/ StabG (x) for x ∈ O.
The map (g1, g) ↦ g1 g is called the left multiplication action G ↷ G.
The adjoint/conjugation action is given by (g1, g) ↦ Adg1(g) = g1 · g · g1⁻¹. We check
this is a valid action: Ad1(g) = 1 · g · 1⁻¹ = g, and
Adg1g2(g) = g1 g2 g g2⁻¹ g1⁻¹ = Adg1(g2 g g2⁻¹) = Adg1(Adg2(g))
Definition. g 0 is conjugate to g 00 if they belong to the same orbit under the action
of conjugation. Orbits are called conjugacy classes.
Definition. For g ∈ G, define ZG (g) := StabAd(G) (g) = {g1 ∈ G | g1 gg1−1 = g} =
{g1 ∈ G | g1 g = gg1 }. Define ZG = {g ∈ G | ZG (g) = G} = {g ∈ G | gg1 =
g1 g ∀g1 ∈ G}.
Example. Z(GLn(R)) = {λ · Id | λ ≠ 0}.
Proposition. Let G be finite and |G| = p^n for a prime p, n ≥ 1. Then ZG ≠ {1}.
Proof. Under conjugation, G = ⊔O ≅ ⊔ G/StabG(x), where the O are the conjugacy
classes; O1 = {1}. Suppose for contradiction that ZG = {1}. Then ∀x ∈ G, x ≠ 1,
ZG(x) ≠ G ⇒ |G/StabG(x)| = |G/ZG(x)| = p^{n′} for some n′ > 0. This is a contradiction,
since it implies p^n = 1 + Σ p^{n′}, and the right side is not divisible by p.
Generalization of the above:
Proposition. Suppose G ↷ X, G is finite with |G| = p^n, and the prime p satisfies
gcd(|X|, p) = 1. Then ∃x ∈ X such that StabG(x) = G, i.e. x is fixed by G.
Proof. Suppose there is no fixed point. Then X = ⊔O = ⊔ G/StabG(x), so each
|G/StabG(x)| = p^{n′} with n′ > 0. Then |X| = Σ p^{n′} is divisible by p, but |X| is not
divisible by p, a contradiction.
Proposition. Let G be a finite group of order p2 . Then G is abelian.
Proof. We know from the previous proposition that ZG ≠ {1}, so |ZG| = p
or p² since it is a subgroup. If |ZG| = p², then ZG = G, so G is abelian. Suppose
|ZG| = p. Then ∃x ∈ G with x ∉ ZG. Now ZG(x) is a subgroup containing both ZG and x,
so |ZG(x)| = p². But then ZG(x) = G, i.e. x ∈ ZG, a contradiction.
Let H, H′ ⊂ G be two subgroups. We say they are conjugate if ∃g ∈ G st
gHg⁻¹ = H′, or equivalently Adg : H ≅ H′.
Lemma. H is normal if and only if it is conjugate only to itself.
Theorem (Sylow). Let G be a finite group and p a prime st |G| = p^n · r where
gcd(r, p) = 1.
(1) There exists a subgroup H ⊂ G of order |H| = p^n.
(2) Every subgroup H′ with |H′| = p^{n′} is conjugate to a subgroup of H.
We call a subgroup of order p^n a p-Sylow subgroup.
Corollary. If H1 and H2 are two p-Sylow subgroups, they are conjugate.
Proof of 1. Consider the action of left multiplication G ↷ Subsets(G). Take
S = {s ∈ Subsets(G) | |Us| = p^n}
where Us denotes the underlying subset. From combinatorics,
|S| = C(p^n r, p^n) = [p^n r · (p^n r − 1) · · · (p^n r − p^n + 1)] / [p^n · (p^n − 1) · · · 1]
= r · [(p^n r − 1) · · · (p^n r − p^n + 1)] / [(p^n − 1) · · · 1]
We see that if p^i | (p^n r − m) for 0 < m < p^n, then p^i | m, so the powers of p in numerator
and denominator cancel and gcd(|S|, p) = 1. Looking at the G-orbits on S (the action
preserves the size of subsets), we have S = ⊔O. Therefore there exists an orbit O with
gcd(|O|, p) = 1. For s ∈ O, set H = StabG(s), so O ≅ G/H. Since |O| = |G|/|H| is coprime
to p and |G| = p^n · r, p^n divides |H|.
On the other hand, consider H ↷ Us (where Us is now the actual subset). This
action is well-defined because H is the stabilizer of s. Therefore Us = ⊔O′, and
each orbit O′ ≅ H/StabH(g) for g ∈ O′ ⊂ Us ⊂ G. Now hg = g ⇒ h = 1, so all
stabilizers are trivial and |Us| = |H| · (number of orbits); hence |H| divides |Us| = p^n.
Therefore the subgroup H has order p^n.
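The counting step — that gcd(|S|, p) = 1 for |S| = C(p^n r, p^n) — can be spot-checked for small parameters (a check we add, not part of the notes):

```python
from math import comb, gcd

# |S| = C(p^n * r, p^n) is coprime to p whenever gcd(r, p) = 1.
for p, n, r in [(2, 2, 3), (3, 1, 5), (5, 1, 2), (2, 3, 7)]:
    assert gcd(r, p) == 1
    S = comb(p**n * r, p**n)
    assert gcd(S, p) == 1
```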
Proof of 2. Assuming (1), let H be the subgroup of order p^n. Taking the usual
action G ↷ G/H, restrict to H′ ↷ G/H. Since |G/H| = r, |H′| = p^{n′},
and gcd(r, p) = 1, there exists ḡ ∈ G/H fixed by H′, by the previous proposition.
For any h′ ∈ H′, we have
h′ḡ = ḡ ⇒ ∃h ∈ H st h′g = gh ⇒ g⁻¹h′g = h ⇒ h′ ∈ gHg⁻¹
so H′ ⊂ gHg⁻¹, i.e. g⁻¹H′g ⊂ H.
9. Tensor products and powers
Recall that a map f : U ⊗ V → W assigns values f(u ⊗ v) st f((u1 + u2) ⊗ v) =
f(u1 ⊗ v) + f(u2 ⊗ v). There is an isomorphism U ⊗ V ≅ V ⊗ U induced by
u ⊗ v ↦ v ⊗ u. If U = V then we get the non-trivial map V ⊗ V → V ⊗ V given
by v1 ⊗ v2 ↦ v2 ⊗ v1. This map is the identity if and only if dim V ≤ 1.
We now introduce the space V1 ⊗ V2 ⊗ V3. A map Mult : V1 × V2 × V3 → W is
multilinear if it is linear in each variable. Then we define V1 ⊗ V2 ⊗ V3 as we defined
V1 ⊗ V2: there is a universal multilinear map Multuniv : V1 × V2 × V3 → V1 ⊗ V2 ⊗ V3,
and every multilinear Mult : V1 × V2 × V3 → W factors uniquely as Mult = f ◦ Multuniv.
Theorem. U1 ⊗ U2 ⊗ U3 exists and is isomorphic to (U1 ⊗ U2) ⊗ U3.
Proof. Define Multuniv : U1 × U2 × U3 → (U1 ⊗ U2) ⊗ U3 by
Multuniv(u1, u2, u3) = (u1 ⊗ u2) ⊗ u3
Given Mult : U1 × U2 × U3 → W, fix u3 and consider B(u1, u2) := Mult(u1, u2, u3),
which is in bijection with a map fu3 : U1 ⊗ U2 → W. Define f : (U1 ⊗ U2) ⊗ U3 → W
by f(w ⊗ u3) = fu3(w). To check f is well-defined (linear in u3), we need
f(w ⊗ u3′) + f(w ⊗ u3″) = f(w ⊗ (u3′ + u3″))
Checking for w = u1 ⊗ u2, we have
Mult(u1, u2, u3′) + Mult(u1, u2, u3″) = Mult(u1, u2, u3′ + u3″)
so fu3′ + fu3″ = f with subscript u3′ + u3″. This gives a bijection between the Mult's and the f's.
Now we can denote Multuniv(u1, u2, u3) = u1 ⊗ u2 ⊗ u3, and we have an isomorphism
(u1 ⊗ u2) ⊗ u3 ↦ u1 ⊗ u2 ⊗ u3. By the same argument,
(U1 ⊗ U2) ⊗ U3 ≅ U1 ⊗ U2 ⊗ U3 ≅ U1 ⊗ (U2 ⊗ U3)
We now similarly define T^n V := V⊗n. If V has a basis e1, . . . , ek, then the ei1 ⊗ ei2 ⊗
· · · ⊗ ein with ij ∈ {1, . . . , k} form a basis of T^n V, so dim(T^n V) = k^n.
To define a map f : U1 ⊗ · · · ⊗ Un → W, it is necessary and sufficient to define
f(u1 ⊗ · · · ⊗ un) multilinearly. We can therefore define an action Sn ↷ T^n(V)
by
σ · (v1 ⊗ · · · ⊗ vn) = v_{σ(1)} ⊗ · · · ⊗ v_{σ(n)}, σ ∈ Sn
9.1. Exterior product.
A multilinear map Alt : V × · · · × V → W is said to be alternating if
Alt(v1, . . . , vn) = 0 whenever vi = vj for some 1 ≤ i < j ≤ n. Expanding
Alt(v1 + v2, v1 + v2, . . .) = 0 shows this implies antisymmetry: Alt(v1, v2, . . .) = −Alt(v2, v1, . . .).
Proposition. T^n V admits a unique quotient vector space Λ^n V, with projection
π : T^n V ↠ Λ^n V, where a map g : T^n V → W factors through Λ^n V if and only if the
corresponding multilinear map V^n → W is alternating.
Consider the following subspace of T^n V:
span{v1 ⊗ · · · ⊗ vn | ∃i ≠ j st vi = vj}
and set Λ^n V := T^n V / span{. . .}.
Proposition. A map g : T^n V → W factors through Λ^n V if and only if the corresponding Mult : V^n → W is alternating.
Let Altuniv be the composition
V^n → T^n V → Λ^n V (Multuniv followed by π)
which then has the universal property that the assignment f ↦ f ◦ Altuniv is a
bijection Homk(Λ^n V, W) ↔ Alt(V^n, W).
Denote π(v1 ⊗ · · · ⊗ vn ) =: v1 ∧ · · · ∧ vn . It follows that
v1 ∧ · · · ∧ vi ∧ · · · ∧ vj ∧ · · · ∧ vn = −v1 ∧ · · · ∧ vj ∧ · · · ∧ vi ∧ · · · ∧ vn
and v1 ∧ · · · ∧ vn = 0 if there is repetition. For σ ∈ Sn ,
vσ(1) ∧ · · · ∧ vσ(n) = sign(σ)(v1 ∧ · · · ∧ vn )
Theorem. Let V be finite dimensional with dimension k and basis e1 , . . . , ek .
(1) If n > k, then Λn V = {0}.
(2) If n ≤ k, then the set of ei1 ∧ · · · ∧ ein st 1 ≤ i1 < i2 < · · · < in ≤ k form a
basis of Λn V .
Proof. 1) ei1 ⊗ · · · ⊗ ein form a basis for T n V , and by pigeonhole, there will be a
repetition, so π(ei1 ⊗ · · · ⊗ ein ) = 0.
2) ei1 ⊗ · · · ⊗ ein span T n V . We eliminate any basis elements with repetitions.
If the ij ’s are not in order, applying σ rearranges the order while only changing the
sign. Therefore the ei1 ∧ · · · ∧ ein (1 ≤ i1 < · · · < in ≤ k) span Λ^n V. For linear
independence, observe the following fact:
Lemma. Let U be a vector space with u1, . . . , um spanning it. If for every W and
w1, . . . , wm ∈ W there exists a unique map T : U → W st T(uj) = wj, then u1, . . . , um
is a basis.
We verify the conditions of the previous lemma. Given W and vectors w_{i1,...,in} ∈ W
(one for each 1 ≤ i1 < · · · < in ≤ k), define g : T^n V → W by
g(ei1 ⊗ · · · ⊗ ein) = 0 if there is a repetition among the indices, and
g(ei1 ⊗ · · · ⊗ ein) = sign(σ) w_{σ(i1),...,σ(in)} otherwise,
where σ puts the indices in increasing order. This corresponds to an alternating map
V^n → W, so g factors through Λ^n V.
We have from before that f : U1 → U2 and g : V1 → V2 induce a map f ⊗ g : U1 ⊗ V1 → U2 ⊗ V2.
Extending, f : V1 → V2 induces T^n f : T^n V1 → T^n V2, where (T^n f)(v1 ⊗ · · · ⊗ vn) =
f(v1) ⊗ · · · ⊗ f(vn).
Proposition. There exists a unique map Λ^n f : Λ^n V1 → Λ^n V2 making the square
commute: (Λ^n f) ◦ π1 = π2 ◦ (T^n f), where πi : T^n Vi → Λ^n Vi are the projections.
Proof. The diagram forces uniqueness, and ker π1 maps into ker π2, so the induced
map is well-defined.
If dim V = k, then dim(Λ^n V) = C(k, n), the binomial coefficient. Therefore if dim V = n,
dim(Λ^n V) = 1, and we set det(V) := Λ^n V. If dim V1 = dim V2 = n and T : V1 → V2,
then det T := Λ^n T, a map det(V1) → det(V2).
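The dimension count dim Λ^n V = C(k, n) matches the number of increasing index sequences 1 ≤ i1 < · · · < in ≤ k, including Λ^n V = {0} for n > k. A quick check (ours, not from the notes):

```python
from math import comb
from itertools import combinations

k = 4
for n in range(k + 2):
    basis = list(combinations(range(1, k + 1), n))  # increasing index sequences
    assert len(basis) == comb(k, n)                 # = 0 once n > k
```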
Theorem. Let V be any vector space with v1 , . . . , vk lin. ind. vectors. Then
vi1 ∧ · · · ∧ vin for 1 ≤ i1 < · · · < in ≤ k are linearly independent in Λn V .
Proof. Extend v1 , . . . , vk to a basis of V . Then vi1 ∧· · ·∧vin are linearly independent
because they are part of the basis of Λn V .
9.2. k-Algebras. Let k be a field. A k-algebra is a ring A together with a (ring)
homomorphism ϕ : k → A st
ϕ(x) · a = a · ϕ(x)  ∀x ∈ k, a ∈ A
Example. A = Matn×n (k) and x 7→ x Id maps k → A.
A map between k-algebras A1 → A2 is a ring homomorphism commuting with the
structure maps from k (the triangle over k commutes).
Observe that for a ∈ A, x ∈ k, defining x · a = ϕ(x) · a makes A a k-vector space.
Lemma. A k-algebra is equivalent to a ring A with the structure of a k-vector
space together with a k-linear map m : A ⊗k A → A st the associativity square commutes,
m ◦ (id ⊗ m) = m ◦ (m ⊗ id) : A ⊗k A ⊗k A → A
where m(a1 ⊗ a2) = a1 a2.
Proof. (⇒) Suppose there is an algebra structure. Then define m by a1 ⊗ a2 ↦ a1 a2.
The identity (ϕ(x)a1)a2 = a1(ϕ(x)a2) gives the required k-bilinearity, and associativity
of A gives the commuting square: a1 ⊗ a2 ⊗ a3 maps to a1 ⊗ (a2 a3) and to (a1 a2) ⊗ a3,
and both map on to a1 a2 a3.
(⇐) Suppose we have a multiplication a1 a2 = m(a1 ⊗ a2). Then defining ϕ : k → A by
ϕ(x) = x · 1A gives a ring homomorphism:
ϕ(x · x′) = (x · x′)1A = m((x · x′)(1A ⊗ 1A)) = m(ϕ(x) ⊗ ϕ(x′))
and
ϕ(x) · a = m((x · 1A) ⊗ a) = x · m(1A ⊗ a) = x · a = x · m(a ⊗ 1A) = a · ϕ(x)
The previous lemma essentially says that multiplication in a k-algebra A is k-linear.
Let V be a vector space. Then
T(V) := k ⊕ V ⊕ T²V ⊕ · · · ⊕ T^n V ⊕ · · ·
We define multiplication T^m V ⊗ T^n V → T^{m+n} V by
(v1 ⊗ · · · ⊗ vm) ⊗ (v1′ ⊗ · · · ⊗ vn′) ↦ v1 ⊗ · · · ⊗ vm ⊗ v1′ ⊗ · · · ⊗ vn′
This induces multiplication on
TV ⊗k TV ≅ ⊕_{m,n≥0} T^m V ⊗ T^n V → ⊕_ℓ T^ℓ V ≅ TV
The multiplication is associative: the two composites
T^m V ⊗ T^n V ⊗ T^ℓ V → T^{m+n} V ⊗ T^ℓ V → T^{m+n+ℓ} V
T^m V ⊗ T^n V ⊗ T^ℓ V → T^m V ⊗ T^{n+ℓ} V → T^{m+n+ℓ} V
agree.
We can define ϕ to be the inclusion k ,→ T V , making T V a k-algebra (with identity
1 ∈ k). T V is called the tensor algebra.
T (V ) has the universal property that for any k-algebra A, f 7→ f ◦ i forms a
bijection between the algebra homomorphisms T (V ) → A and Hom(V, A), where i
is inclusion.
V --i--> T(V)
  \       |
   f      ∃!
    \     v
     '--> A
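The graded multiplication can be modeled concretely by taking pure tensors of basis vectors as words, with concatenation as the product. A toy sketch (the representation by tuples of names is an illustrative choice, not from the notes):

```python
# Toy model of the tensor algebra T(V): an element is a dict mapping
# "words" (tuples of basis-vector names) to coefficients; the empty
# word () spans the degree-0 piece k.
def tensor_mul(a, b):
    """Multiply two elements of T(V); pure tensors concatenate."""
    out = {}
    for w1, c1 in a.items():
        for w2, c2 in b.items():
            w = w1 + w2  # concatenation: T^m x T^n -> T^{m+n}
            out[w] = out.get(w, 0) + c1 * c2
    return out

v = {("v",): 1}
w = {("w",): 1}
vw = tensor_mul(v, w)                 # v ⊗ w, a degree-2 element
assert vw == {("v", "w"): 1}
# associativity: (v ⊗ w) ⊗ v = v ⊗ (w ⊗ v)
assert tensor_mul(vw, v) == tensor_mul(v, tensor_mul(w, v))
# degrees add: a degree-1 times a degree-2 word has length 3
assert all(len(word) == 3 for word in tensor_mul(v, vw))
```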
Next define
Λ(V ) := k ⊕ V ⊕ · · · ⊕ Λn V ⊕ . . .
Proposition. There exists a unique k-algebra structure st the map T (V ) → Λ(V )
is a k-algebra homomorphism.
Proof. In order for the map to be a k-algebra homomorphism, we must have a commuting square

T^n V ⊗ T^m V -----> T^{n+m} V
   | πn ⊗ πm             |
   v                     v
Λ^n V ⊗ Λ^m V -----> Λ^{n+m} V

(Side note: surjections f : U1 ↠ U2 and g : V1 ↠ V2 induce a surjection f ⊗ g : U1 ⊗ V1 ↠ U2 ⊗ V2.)

Lemma. If f : U1 → U2 and g : V1 → V2 are surjective maps, then (ker f ⊗ V1) ⊕ (U1 ⊗ ker g) → U1 ⊗ V1 is a surjection onto ker(f ⊗ g).
Using the lemma and setting f = πn and g = πm , it is clear that the map
T n V ⊗ T m V → T m+n V → Λm+n V vanishes on ker(πn ⊗ πm ). So we have defined
Λn V ⊗ Λm V → Λn+m V by
(v1 ∧ · · · ∧ vn) ⊗ (v1′ ∧ · · · ∧ vm′) ↦ v1 ∧ · · · ∧ vn ∧ v1′ ∧ · · · ∧ vm′
which induces the desired k-algebra structure on Λ(V ).
Λ(V ) is called the exterior or wedge algebra.
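The sign bookkeeping in Λ(V) can be sketched concretely: a basis element of Λ^n V is an increasing tuple of indices, and the product of two such elements sorts the concatenated tuple while tracking the sign of the permutation, returning 0 when an index repeats. A minimal sketch:

```python
# Wedge product on basis elements of the exterior algebra: an
# increasing index tuple stands for e_{i1} ∧ ... ∧ e_{in}.
def wedge(idx1, idx2):
    word = list(idx1 + idx2)
    if len(set(word)) < len(word):
        return 0, ()                       # repeated index: v ∧ v = 0
    sign = 1
    # bubble sort; each adjacent swap is a transposition, flipping the sign
    for i in range(len(word)):
        for j in range(len(word) - 1 - i):
            if word[j] > word[j + 1]:
                word[j], word[j + 1] = word[j + 1], word[j]
                sign = -sign
    return sign, tuple(word)

assert wedge((1,), (2,)) == (1, (1, 2))    # e1 ∧ e2
assert wedge((2,), (1,)) == (-1, (1, 2))   # e2 ∧ e1 = -(e1 ∧ e2)
assert wedge((1,), (1,)) == (0, ())        # e1 ∧ e1 = 0
```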
Proposition. Let V′ ⊂ V be fin. dim. vector spaces. Then
(1) Λ^n V′ → Λ^n V is injective.
(2) Suppose n = dim V′. Then the composition
Λ^n V′ ⊗ Λ^m V → Λ^n V ⊗ Λ^m V → Λ^{n+m} V
factors through id ⊗ Λ^m π : Λ^n V′ ⊗ Λ^m V ↠ Λ^n V′ ⊗ Λ^m (V/V′), inducing a map Λ^n V′ ⊗ Λ^m (V/V′) → Λ^{n+m} V.
(3) The latter map Λ^n V′ ⊗ Λ^m (V/V′) → Λ^{n+m} V is injective.
In the particular case m = dim V/V′, we have dim V = m + n and the map becomes an isomorphism det(V′) ⊗ det(V/V′) ≅ det(V).
Proof. 1) Take a basis for V 0 and extend to V . The corresponding basis of Λn V 0 is
lin. ind. in Λn V , so map is injective.
2) We need to show that Λ^n V′ ⊗ ker(Λ^m V → Λ^m (V/V′)) goes to 0 under the composition. Let e1, . . . , en form a basis for V′ and let f1, . . . , fk be complementary basis vectors that together form a basis of V. Then the elements
e_{i1} ∧ · · · ∧ e_{iℓ} ∧ f_{j1} ∧ · · · ∧ f_{j_{m−ℓ}}
form a basis of Λ^m V, and ker(Λ^m V → Λ^m (V/V′)) is spanned by those with ℓ ≠ 0. Λ^n V′ is one dimensional with basis vector e1 ∧ · · · ∧ en, so wedging any ℓ ≠ 0 basis element with it repeats some e_i and gives 0; the result follows.
3) By removing the kernel, we have a basis for Λn V 0 ⊗ Λm (V /V 0 ) given by
(e1 ∧ · · · ∧ en ) ⊗ (fj1 ∧ · · · ∧ fjm ) 7→ e1 ∧ · · · ∧ en ∧ fj1 ∧ · · · ∧ fjm
where 1 ≤ j1 < · · · < jm ≤ k. Each basis element goes to a distinct basis element
in Λn+m V so the map is injective.
10. Field extensions
Let k be a field. We have the associated k-algebra k[t] of single-variable polynomials over k. For fixed x ∈ k, evaluating polynomials at x gives a map ev_x : k[t] → k.
Theorem (Bezout). p ∈ k[t] is divisible by t − x if and only if p(x) = 0.
Corollary. If p(xi ) vanishes for x1 , . . . , xn where xi 6= xj and n > deg p, then
p = 0.
For finite fields, we can have p(x) = q(x) ∀x ∈ k yet p 6= q.
Example. t^p − t ∈ F_p[t] vanishes at every x ∈ F_p (since x^{p−1} = 1 for x ∈ F_p^*), yet it is not the zero polynomial.
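This example is easy to check numerically, say for p = 7:

```python
# t^p - t vanishes at every point of F_p (Fermat's little theorem),
# even though its coefficient vector is nonzero.
p = 7
assert all(pow(x, p, p) == x % p for x in range(p))
```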
Lemma. If A is a k-algebra, then to give a homomorphism ϕ : k[t] → A is equivalent to specifying the element a = ϕ(t) ∈ A.
Using PS 5, Problem 5d,e, this is the bijection coming from k[t] ≅ Sym(V) for V = k: a linear map V → A extends uniquely to an algebra map Sym(V) → A.
Lemma. Any ideal in k[t] is of the form Ip for some p ∈ k[t].
Proof. Ip = {q · p | q ∈ k[t]}. Given an ideal I, take p ∈ I to be an element of
lowest degree. Given any other p̃ ∈ I, we can divide with remainder
p̃ = p · q1 + q2
deg q2 < deg p
q2 ∈ I but p has lowest degree, so q2 = 0.
Lemma. Giving a hom. ϕ : k[t]/Ip → A is equivalent to specifying a ∈ A st p(a) = 0.
Proof. Given ϕ, set a = ϕ(t). Then
0 = ϕ(p(t)) = p(ϕ(t)) = p(a)
Conversely, given a ∈ A, by the previous lemma we have a hom. k[t] → A which
factors through to k[t]/Ip .
Let R be a commutative ring. An ideal m ⊊ R is called maximal if there is no ideal m′ ⊊ R with m′ ⊋ m.
Lemma. Ip is maximal if and only if p is irreducible.
Proof. (⇐) Suppose p is irreducible and m′ ⊋ Ip is an ideal. Then m′ = Ip′ for some p′, and Ip ⊊ Ip′ ⇒ p ∈ Ip′ ⇒ p′ divides p. Since p is irreducible, p′ is a unit or an associate of p, giving m′ = R or m′ = Ip, a contradiction.
(⇒) Suppose Ip is maximal. If p were reducible, p = p′q with both factors non-units, so Ip′ ⊋ Ip, a contradiction.
For x ∈ k, the evaluation map ev_x : k[t] → k has ker(ev_x) = I_{t−x}, a maximal ideal.
Proposition. There is an order-preserving (order by inclusion) bijection between
ideals in R/I and ideals in R containing I.
Proof. Let π : R → R/I be the projection. Send an ideal I1 ⊂ R containing I to π(I1) ⊂ R/I; the inverse sends an ideal J ⊂ R/I to π^{−1}(J).
Proposition. A proper ideal m ( R is maximal if and only if R/m is a field.
Proof. (⇐) Suppose R/m is a field. The only ideals in a field are 0 and the entire
field: if a ∈ I then aa−1 = 1 ∈ I. So by the previous proposition the only ideal
containing m in R is R.
(⇒) If m is maximal, then again the only ideals in R/m are 0 and R/m. So the
ideal generated by any ā ∈ R/m equals R/m and hence contains 1̄. Therefore ā
has an inverse, so R/m is a field.
Corollary. Every maximal ideal in k[t] is the kernel of some surjective hom. k[t] ↠ k′, where k′ is a field containing k (and a k-algebra).
Proof. Given a maximal ideal M ⊊ k[t], consider k[t] ↠ k[t]/M = k′, a field by the previous proposition. Conversely, the kernel of any surjective hom. ϕ : k[t] ↠ k′ onto a field is a maximal ideal because k′ ≅ k[t]/ ker ϕ.
We therefore have that k′ = k[t]/Ip, for p irreducible, is a field.
Example. For k = R, p = t2 + 1, k 0 = k[t]/(t2 + 1) = C. To see this, define
ϕ : R[t]/(t2 + 1) → C by ϕ(t) := i.
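This isomorphism can be checked computationally by reducing t^2 to −1 when multiplying classes, and comparing with multiplication in C (a minimal sketch):

```python
# Model R[t]/(t^2 + 1): a class is a pair (a, b) standing for a + b t.
# Multiplication reduces t^2 to -1, matching multiplication in C
# under the map a + b t -> a + b i.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)  # uses t^2 = -1

x, y = (1.0, 2.0), (3.0, -1.0)
z = mul(x, y)                              # (1 + 2t)(3 - t) mod t^2 + 1
zc = complex(*x) * complex(*y)             # (1 + 2i)(3 - i)
assert z == (zc.real, zc.imag)
```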
Proposition. If a ring homomorphism f : k → R from a field is nonzero, then f is injective.
Proof. If f(a) = 0 for some a ≠ 0, then f(1) = f(a)f(a^{−1}) = 0, hence f = 0.
Definition. k is algebraically closed if every non-constant polynomial in k[t] has a root.
Proposition. The following are equivalent:
(1) k is algebraically closed.
(2) Every irreducible polynomial is of degree 1.
(3) Every polynomial factors as c ∏(t − xi).
Proof. (1) ⇒ (3): Given p, it has a root x; divide by t − x and continue. (3) ⇒ (2) is obvious.
(2) ⇒ (1): Suppose not, and take p of lowest degree among non-constant polynomials with no root; p must be irreducible (otherwise a proper factor would have lower degree, hence a root, which would also be a root of p). Then deg p = 1 by (2), but every degree 1 polynomial has a root, a contradiction.
Example. No finite field F is algebraically closed: if x1, . . . , xn are all the elements of F, then ∏(t − xi) + 1 has no root.
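Making the example concrete for F = F_5, where ∏(t − x) = t^5 − t vanishes identically on F_5, so the polynomial is constantly 1 there:

```python
# q(t) = prod over x in F_5 of (t - x), plus 1: it takes the value 1
# at every point of F_5, hence has no root in F_5.
p = 5
def q(t):
    val = 1
    for x in range(p):
        val *= (t - x)          # one factor is (t - t) = 0 when t in F_5
    return (val + 1) % p

assert all(q(t) == 1 for t in range(p))
```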
The following will be proved later:
Theorem (Fundamental theorem of algebra). C is algebraically closed.
Definition. Let k be a field. A field extension is k ,→ k 0 . A field extension is finite
if k 0 is finite dimensional as a k-vector space.
Example. R ,→ C is a field extension.
Example. Take k and an irreducible polynomial p ∈ k[t]. k ,→ k 0 = k[t]/Ip is a
field extension, and dim k 0 = deg p as 1, t, . . . , tdeg p−1 form a basis.
Theorem. k is algebraically closed if and only if any finite field extension is trivial.
Proof. (⇐) If k is not algebraically closed, ∃ an irreducible p of deg p > 1, so k[t]/Ip
is a finite extension.
(⇒) Suppose k is algebraically closed and there exists a finite extension k′ ⊋ k. Let y ∈ k′ − k and n = dim k′. Then 1, y, y^2, . . . , y^n are linearly dependent, so there exist ai ∈ k st ∑_{i=0}^n ai y^i = 0. Define
p(t) = ∑_{i=0}^n ai t^i
By algebraic closure, p(t) = c ∏_i (t − bi) for bi ∈ k. We now have p(y) = 0 but ∏(y − bi) ≠ 0 since y ∉ k, a contradiction.
Lemma. Let k be a field and p a polynomial.
(1) ∃k 0 ⊃ k a finite field extension st p has a root in k 0 .
(2) ∃k 00 ⊃ k a finite field extension st p factors completely in k 00 .
Proof. 1) Take an irreducible factor q of p and set k 0 = k[t]/Iq . Let t̄ ∈ k 0 be the
image of t under the projection k[t] −→ k[t]/Iq . Then q(t̄) = q(t) = 0 ∈ k 0 so t̄ is a
root of q and thus p in k 0 .
2) If we have finite field extensions k″ ⊃ k′ ⊃ k, then dim_{k′}(k″) · dim_k(k′) = dim_k(k″), so k″ ⊃ k is finite. Therefore we can repeat (1) until p factors completely, which takes at most deg p steps by Bezout's theorem.
10.1. Fundamental theorem of algebra.
Proof 1. Suppose p(z) = z n + · · · + a1 z + a0 is a polynomial in C with no root.
Define f1 : S 1 → C − 0 by z 7→ z n and f0 : S 1 → C − 0 by z 7→ a0 . We use the
following result from algebraic topology:
Lemma. f1 and f0 are not homotopic.
Assuming p has no root, we construct a homotopy F between f0 and f1 using
three segments:
f0 ∼ f_{1/3} ∼ f_{2/3} ∼ f1
via three homotopies F_{0,1/3}, F_{1/3,2/3}, F_{2/3,1}.
Let f_{1/3}(z) = p(z) and F_{0,1/3}(t, z) = p(tz) for 0 ≤ t ≤ 1. Now for some real R ≫ 0, set f_{2/3}(z) = R^{−n} p(Rz) and F_{1/3,2/3}(t, z) = t^{−n} p(tz) for 1 ≤ t ≤ R. Explicitly,
f_{2/3}(z) = z^n + (a_{n−1}/R) z^{n−1} + · · · + (a_1/R^{n−1}) z + a_0/R^n
Lastly define
F_{2/3,1}(t, z) = ∑_{i=0}^{n−1} (a_i/R^{n−i}) t z^i + z^n
for 0 ≤ t ≤ 1 (t = 1 gives f_{2/3}, t = 0 gives f1). This last segment is well-defined (i.e., F_{2/3,1}(t, z) ≠ 0 for |z| = 1) provided
|a_0| R^{−n} + · · · + |a_{n−1}| R^{−1} < 1
Proof 2. Assume p has real coefficients (if not, consider p p̄, where p̄ is the polynomial obtained by conjugating coefficients; the coefficients of p p̄ all equal their own conjugates by symmetry), so p ∈ R[t]. We can write deg p = 2^n · d where d is odd, and induct on n. Extend C to C′ so that p factors completely into ∏(t − ai) for ai ∈ C′. Define
q_r(t) = ∏_{i<j} (t − (r ai aj + ai + aj)) ∈ C′[t],    r ∈ R
Proposition. qr (t) has coefficients in R.
Proof. Consider q_r(t) as an element of R′[t], where R′ = R[s1, . . . , sn] and s1, . . . , sn are variables standing for the roots of p. Then the coefficients of q_r(t) in R′ are symmetric functions in s1, . . . , sn.
Definition. If we expand the polynomial ∏_{i=1}^n (t − si), we get ∑_{i=0}^n gi t^i with gi ∈ R[s1, . . . , sn]. The functions gi are called (up to sign) the elementary symmetric polynomials.
Theorem. Any symmetric function in R is a scalar multiple of a sum of products
of elementary symmetric polynomials.
Since the gi ’s evaluated at a1 , . . . , an are the coefficients of p, they are real, so
by the previous theorem, the coefficients of qr (t) are actually real.
Observe that the degree of q_r(t) is (deg p)(deg p − 1)/2 = 2^{n−1} d (2^n d − 1), which is 2^{n−1} times an odd number, so the inductive hypothesis applies to it. Therefore q_r(t) has a root z_r ∈ C. From our expansion of q_r we see that any root must equal
z^r_{ij} = r ai aj + ai + aj
for some pair i, j. Since r ranges over infinitely many real values while there are finitely many pairs (i, j), by pigeonhole there exist r1 ≠ r2 and a single pair i, j such that
z^{r1}_{ij} = r1 ai aj + ai + aj ∈ C
z^{r2}_{ij} = r2 ai aj + ai + aj ∈ C
Solving this pair of equations, we have ai aj, ai + aj ∈ C. Then ai, aj are the solutions to the polynomial (z − ai)(z − aj) = z^2 − (ai + aj) z + ai aj, so ai, aj ∈ C.
For the base cases, we must check that all second degree polynomials over C have complex roots (quadratic formula) and that odd degree real polynomials have a real root (intermediate value theorem).
11. Linear algebra revisited
11.1. Eigenvalues and characteristic polynomial.
Definition. For a map T : V → V, a nonzero v ∈ V is an eigenvector for T if ∃λ ∈ k st T v = λv. λ is called the eigenvalue.
Theorem. If k is algebraically closed and V is fin. dim., there always exists an
eigenvector.
Counterexample. V = k[t] and T defined by multiplying by t has no eigenvalue.
Counterexample. Suppose there exists k 0 ) k and let V = k 0 be fin. dim. Then
for y ∈ k 0 − k, T (z) = y · z has no eigenvalue.
Proof. For T ∈ Hom_k(V, V), the characteristic polynomial ch_T(t) is a polynomial in k[t] of degree dim V with ch_T(x) = det(T − x Id) for every x ∈ k. Choose a basis for V, write T as the matrix A = (a_{ij}), and set
A_t = A − t Id_{n×n}
a matrix with entries in k[t]. Take ch_T(t) = det(A_t) ∈ k[t]. By construction the value of ch_T(t) at t = x equals det(A_x) = det(T − x Id).
Since k is algebraically closed, ch_T has a root λ ∈ k. Then ch_T(λ) = det(T − λ Id) = 0, so T − λ Id has a nonzero kernel: ∃ v ≠ 0 with T v − λv = 0.
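Numerically, the same recipe (take a root of ch_T, then a kernel vector of T − λ Id) produces an eigenvector over C even when none exists over R. A sketch using numpy, with the kernel vector found via SVD:

```python
import numpy as np

# Rotation by 90 degrees: no real eigenvalue, but over C it has +-i.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
lam = np.roots(np.poly(A))[0]          # a root of the char. polynomial
M = A - lam * np.eye(2)                # singular matrix T - lambda*Id
# the right singular vector for the smallest singular value spans ker M
v = np.linalg.svd(M)[2][-1].conj()
assert np.allclose(A @ v, lam * v)     # T v = lambda v
```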
If V is a k-vector space with T : V → V and k ↪ k′ is a field extension, then V′ = k′ ⊗_k V is a k′-vector space and T induces T′ : V′ → V′. Then ch_{T′} ∈ k′[t] equals the image of ch_T under k[t] → k′[t].
Let T : V → V with ch_T ∈ k[t]. If we write
ch_T(t) = a_n t^n + · · · + a_1 t + a_0
then a_0 = det T, . . . , a_{n−1} = (−1)^{n−1} Tr(T), a_n = (−1)^n. We define the operator
ch_T(T) := a_n T^n + · · · + a_1 T + a_0 Id
Theorem (Cayley-Hamilton Theorem). chT (T ) = 0.
Proof. Given a matrix A, define the adjoint (adjugate) matrix A^adj by setting a^adj_{ij} equal to (−1)^{i+j} times the (j, i)-minor of A. Then A · A^adj = A^adj · A = det(A) Id, so
(A − t Id)^adj (A − t Id) = det(A − t Id) Id = ch_T(t) Id
We can write (A − t Id)^adj = B_n t^n + B_{n−1} t^{n−1} + · · · + B_1 t + B_0 for matrices B_i. So
(∑_i B_i t^i)(A − t Id) = −B_n t^{n+1} + ∑_i (B_i A − B_{i−1}) t^i + B_0 A = ch_T(t) Id
Comparing coefficients: B_n = 0 (there is no t^{n+1} term on the right), B_i A − B_{i−1} = a_i Id for i ≥ 1, and B_0 A = a_0 Id. Therefore the sum telescopes:
ch_T(A) = ∑_i a_i A^i = ∑_{i≥1} (B_i A^{i+1} − B_{i−1} A^i) + B_0 A = B_n A^{n+1} = 0
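A quick numerical check of Cayley-Hamilton on a random matrix. Note np.poly returns the coefficients of det(t Id − A), which differs from ch_T by the sign (−1)^n, irrelevant for checking that the result vanishes:

```python
import numpy as np

np.random.seed(0)
A = np.random.randn(4, 4)
coeffs = np.poly(A)                 # c0 t^n + c1 t^{n-1} + ... + cn
n = A.shape[0]
# evaluate the characteristic polynomial on the matrix itself
chA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
assert np.allclose(chA, np.zeros((n, n)), atol=1e-8)
```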
For the rest of this section consider V to be fin. dim. with T : V → V.
Lemma. λ is an eigenvalue if and only if λ is a root of chT .
Proof. chT (λ) = 0 ⇔ det(T − λ Id) = 0 ⇔ T − λ Id is not invertible ⇔ T − λ Id has
a nonzero kernel ⇔ ∃v ∈ V st T v = λv.
11.2. Generalized eigenvectors.
Definition. T is nilpotent if ∃m st T m = 0 (m can be taken to be dim V ).
Proposition. T is nilpotent if and only if chT = (−1)n tn where n = dim V .
Proof. Stage 1: Assume chT factors over k.
(⇒) Need to show there is no nonzero eigenvalue: if T v = λv with v ≠ 0, then T^n v = λ^n v = 0 implies λ = 0. Since ch_T factors over k, every root is an eigenvalue, so ch_T = (−1)^n t^n.
(⇐) Using Cayley-Hamilton, we have T n = 0.
Stage 2: For arbitrary k, extend to k ↪ k′ algebraically closed. Then apply Stage 1 to V′ = k′ ⊗_k V with the induced T′ : V′ → V′.
Definition. T is diagonalizable (semi-simple) if there exists a basis v1 , . . . , vn st
T vi = λi vi .
Example. If T is nilpotent and diagonalizable, then T = 0.
Definition. v ∈ V is a generalized eigenvector wrt λ ∈ k if for some m (T −
λ Id)m v = 0. λ is then a generalized eigenvalue.
Example. Let T be nilpotent; then every v ∈ V is a generalized eigenvector with generalized eigenvalue 0.
Lemma. λ is an eigenvalue if and only if λ is a generalized eigenvalue.
Proof. Forward direction is obvious. For the other direction, take v to be a generalized eigenvector. Let m be the minimal integer such that (T − λ Id)m v = 0. Then
v 0 = (T − λ Id)m−1 v is an eigenvector.
Let Vλ denote the set of all generalized eigenvectors with gen. eigenvalue λ.
Check that Vλ is a vector subspace.
Lemma. T maps Vλ to itself.
Proof. In general, S maps Vλ to itself if S commutes with T .
(T − λ Id)m Sv = S(T − λ Id)m v = 0
Lemma. For µ ≠ λ, T − µ Id : Vλ → Vλ is invertible.
Proof. Since Vλ is finite dimensional, it suffices to show T − µ Id is injective on Vλ. If v ∈ Vλ is nonzero with T v = µv, then (T − λ Id)^m v = (µ − λ)^m v ≠ 0 for every m, contradicting v ∈ Vλ.
Theorem. Assume ch_T factors in k. Then ⊕_λ Vλ → V is an isomorphism.
Proof. Injectivity: suppose ∑_λ vλ = 0 for vλ ∈ Vλ, and suppose this expression involves a minimal number of distinct λ's. Applying (T − λ1 Id)^{m1}, with m1 large enough to kill vλ1,
0 = (T − λ1 Id)^{m1} ∑_λ vλ = ∑_{λ≠λ1} (T − λ1 Id)^{m1} vλ
which is a relation with fewer terms (each summand still lies in Vλ and is nonzero, since T − λ1 Id is invertible on Vλ for λ ≠ λ1), so zero is the only possibility.
Surjectivity: Define V′ := Im(⊕ Vλ → V) and V″ := V/V′. Then T restricts to T′ on V′ and induces T″ on V″, with the projection π : V ↠ V″ intertwining T and T″.
Assume V″ ≠ 0. Since ch_T = ch_{T′} · ch_{T″} and ch_T factors into linear terms over k, so does ch_{T″}; hence ch_{T″} has a root λ, which is also a root of ch_T. So there exists v″ ∈ V″ st T″ v″ = λ v″. We show that there exists v ∈ V which is a gen. eigenvector for λ with π(v) = v″. This is a contradiction since then v ∈ Vλ ⊂ V′, so π(v) = v″ = 0.
Take some representative ṽ st π(ṽ) = v″. Let m_{λ′} := dim V_{λ′}. Then set
v = (1 / ∏_{λ′≠λ} (λ − λ′)^{m_{λ′}}) · ∏_{λ′≠λ} (T − λ′ Id)^{m_{λ′}} ṽ
Since π ∘ T = T″ ∘ π and (T″ − λ′ Id) v″ = (λ − λ′) v″, we get π(v) = v″. Next, (T − λ Id)^{m_λ+1} v is a constant multiple of
(T − λ Id)^{m_λ+1} ∏_{λ′≠λ} (T − λ′ Id)^{m_{λ′}} ṽ = ∏_{λ′} (T − λ′ Id)^{m_{λ′}} (T − λ Id) ṽ = 0
since π((T − λ Id)ṽ) = (T″ − λ Id)v″ = 0_{V″}, i.e., (T − λ Id)ṽ ∈ V′, and ∏_{λ′} (T − λ′ Id)^{m_{λ′}} vanishes on V′. So v ∈ Vλ.
Proposition (Cayley-Hamilton). Assume ch_T factors. Then
ch_T(t) = ∏_λ (λ − t)^{m_λ}
Proof is in the homework.
Corollary. ∏_λ (λ Id − T)^{m_λ} = 0, since (T − λ Id)^{m_λ}|_{Vλ} = 0.
For the rest of the section, assume ch_T factors completely in k.
Since we showed V = ⊕ Vλ, we want projections πλ : V → Vλ st πλ|_{Vλ} = Id and πλ|_{Vλ′} = 0 for λ′ ≠ λ.
Proposition. ∃pλ (t) st πλ = pλ (T ).
Proof. Define
qλ(t) := ∏_{λ′≠λ} (t − λ′)^{m_{λ′}}
Then qλ(T)|_{Vλ′} = 0 for λ′ ≠ λ and qλ(T)|_{Vλ} is invertible. Since gcd(qλ(t), (t − λ)^{m_λ}) = 1, there exist rλ, sλ st
rλ qλ + sλ (t − λ)^{m_λ} = 1
Take pλ := rλ qλ. Then pλ(T)|_{Vλ′} = 0 and pλ(T)|_{Vλ} = Id.
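The Bezout coefficients rλ, sλ come from the extended Euclidean algorithm for polynomials; a sketch in sympy for the case ch_T(t) = t^2 (t − 1) and λ = 0 (the names q0, p0 are illustrative):

```python
from sympy import symbols, gcdex, expand

t = symbols('t')
# ch_T(t) = t^2 (t - 1): q_0(t) = t - 1, and gcd(q_0, t^2) = 1
q0 = t - 1
r, s, g = gcdex(q0, t**2)           # r*q0 + s*t**2 = g = gcd = 1
assert g == 1
assert expand(r * q0 + s * t**2) == 1
p0 = expand(r * q0)                  # p_0(T) is the projection onto V_0
```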
Let V^0_λ denote the eigenspace, while Vλ denotes the gen. eigenspace.
Proposition. The following are equivalent:
(1) T is semi-simple.
(2) Every generalized eigenvector is an eigenvector.
(3) The inclusion V^0_λ ⊆ Vλ is an equality for every λ.
(4) ⊕ V^0_λ → V is an isomorphism.
Proof. All are obvious except (1) ⇒ (2). Suppose we have a basis with T(vi) = λi vi. If (T − µ Id)^m v = 0 and v = ∑ ai vi, then
∑ (λi − µ)^m ai vi = 0
so the only possibility is that ai = 0 for all i st λi ≠ µ.
Theorem (Jordan Decomposition).
(1) ∃! decomposition T = T^nilp + T^ss st [T^nilp, T^ss] = T^nilp T^ss − T^ss T^nilp = 0, with T^nilp nilpotent and T^ss semi-simple.
(2) ∃ p^nilp(t), p^ss(t) st T^nilp = p^nilp(T), T^ss = p^ss(T).
Proof. 1) Existence: set T^ss|_{Vλ} = λ Id and T^nilp|_{Vλ} = T − λ Id (which is nilpotent).
Uniqueness: Since T^ss commutes with T^nilp, it commutes with T, so T^ss maps Vλ → Vλ. Furthermore T^ss|_{Vλ} is semi-simple since every gen. eigenvector of a semi-simple operator is an eigenvector. Suppose T^ss(v) = µv for some nonzero v ∈ Vλ with µ ≠ λ. Then by binomial expansion we see that for sufficiently large m,
(T − µ Id)^m v = (T^nilp + (T^ss − µ Id))^m v = 0
so (T − µ Id)|_{Vλ} is not injective, a contradiction. Therefore T^ss|_{Vλ} = λ Id.
2) T^ss = ∑_λ λ πλ, so let p^ss(t) = ∑_λ λ pλ(t) and p^nilp(t) = t − p^ss(t).
Definition. A nilpotent T : V → V is regular nilpotent if T^{dim V − 1} ≠ 0.
Proposition. T is regular nilpotent if and only if V admits a basis e1, . . . , en st T(e_{i+1}) = e_i and T(e1) = 0, i.e., the matrix of T is the single n × n Jordan block with 0's on the diagonal and 1's on the superdiagonal.
Proof. (⇐) T^{n−1}(e_n) = e_1 ≠ 0.
(⇒) Suppose T^{n−1} ≠ 0, where n = dim V. Take e_n to be any element st T^{n−1} e_n ≠ 0, and define e_{n−i} := T^i e_n. These are linearly independent: if ∑_{i=0}^{n−1} a_i T^i e_n = 0, let k be the smallest index such that a_k ≠ 0. Then
T^{n−1−k} ∑ a_i T^i e_n = a_k T^{n−1} e_n = 0
which is a contradiction.
Note that if T is regular nilpotent, then dim(ker T^i) = i.
Lemma. Let T be nilpotent. TFAE:
(1) T is regular nilpotent.
(2) dim(ker T ) = 1.
(3) dim(ker T^i) = i for all i.
Theorem (Jordan Canonical Form). Let T : V → V be nilpotent.
(1) ∃V = ⊕Vi st T : Vi → Vi and T |Vi is regular.
(2) ∀m, the number of i st dim Vi = m is independent of decomposition ( i.e.,
the decomposition is unique up to the order of the Jordan blocks).
Proof. 2) Observe that dim(ker T m ) − dim(ker T m−1 ) equals exactly the number of
i with dim Vi ≥ m. Therefore
dim(ker T m ) − dim(ker T m−1 ) − (dim(ker T m+1 ) − dim(ker T m ))
determines the number of i st dim Vi = m, which is independent of decomposition.
1) Represent V as a direct sum V = ⊕Vi with T : Vi → Vi such that no further
decomposition is possible. From the homework, if T nilpotent and indecomposable,
then T is regular.
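The counting in part (2) can be checked directly: for a nilpotent matrix, dim ker T^m − dim ker T^{m−1} is the number of Jordan blocks of size ≥ m. A sketch:

```python
import numpy as np

# Nilpotent T with Jordan blocks of sizes 2 and 1 (so n = 3).
T = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 0, 0]], dtype=float)

def dim_ker(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

d = [dim_ker(np.linalg.matrix_power(T, m)) for m in range(4)]  # m = 0..3
assert d == [0, 2, 3, 3]
# number of blocks of size >= m is d[m] - d[m-1]: two blocks of size
# >= 1, one of size >= 2, none of size >= 3
assert [d[m] - d[m - 1] for m in (1, 2, 3)] == [2, 1, 0]
```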
11.3. Inner products.
We now consider V as a fin. dim. k-vector space where k = R or C. An inner
product h·, ·i : V × V → k satisfies
• hv10 + v100 , v2 i = hv10 , v2 i + hv100 , v2 i
• hav1 , v2 i = ahv1 , v2 i
• hv2 , v1 i = hv1 , v2 i
• hv, vi ∈ R≥0 and hv, vi = 0 ⇔ v = 0
Example. For V = k^n, define ⟨(a1, . . . , an), (b1, . . . , bn)⟩ = ∑ ai b̄i.
Denote the norm ‖v‖ := √⟨v, v⟩ and say v ⊥ u if ⟨v, u⟩ = 0.
Theorem (Pythagorean). ku + vk2 = kuk2 + kvk2 if u ⊥ v.
Proof. hu + v, u + vi = hu, ui + hu, vi + hv, ui + hv, vi.
Theorem (Cauchy Schwarz). |hu, vi| ≤ kukkvk with equality if and only if v = au
for a ∈ k.
Proof. Write v = au + w where w ⊥ u; checking, a = ⟨v, u⟩/⟨u, u⟩ and w = v − (⟨v, u⟩/⟨u, u⟩) u work, since then ⟨w, u⟩ = ⟨v, u⟩ − a⟨u, u⟩ = 0. Set v1 = au, so v = v1 + w.
|hu, vi| = |hu, v1 i| = kukkv1 k ≤ kukkvk
where the inequality follows from Pythagorean theorem: kvk2 = kv1 k2 + kwk2 , so
we have equality if and only if kwk = 0 ⇔ w = 0.
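A quick numerical check of Cauchy-Schwarz with the standard hermitian inner product on C^3 (np.vdot conjugates its first argument):

```python
import numpy as np

np.random.seed(1)
u = np.random.randn(3) + 1j * np.random.randn(3)
v = np.random.randn(3) + 1j * np.random.randn(3)
norm = lambda x: np.sqrt(np.vdot(x, x).real)
assert abs(np.vdot(v, u)) <= norm(u) * norm(v) + 1e-12
# equality holds when v is a scalar multiple of u
w = (2 - 3j) * u
assert abs(abs(np.vdot(w, u)) - norm(u) * norm(w)) < 1e-9
```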
Theorem (Triangle inequality). ku + vk ≤ kuk + kvk with equality if and only if
v = au, a ∈ R≥0 .
Proof in homework.
If V is a complex vector space, define the conjugate space V̄ (the same set, with scaling a · v := āv). Then we have V̄ → V* by v ↦ ξv, where ξv(w) := ⟨w, v⟩ (the conjugated scaling makes this map linear).
Proposition. V̄ → V* is an isomorphism.
Proof. It is injective since ξv(v) = ⟨v, v⟩ ≠ 0 if v ≠ 0. The two spaces have the same dimension, so it is an isomorphism.
Theorem. hu + v, u + vi ≤ hu, ui + hv, vi + 2kukkvk
Proof. hu + v, u + vi = hu, ui + hv, vi + hu, vi + hv, ui and use Cauchy Schwarz.
For U ,→ V , let U ⊥ = {v ∈ V | hv, ui = 0 ∀u ∈ U }.
Proposition. U ⊕ U⊥ → V is an isomorphism.
Proof. Injectivity: u = −v for v ∈ U ⊥ implies u ⊥ v ⇒ hu, ui = 0 ⇒ u = 0.
Surjectivity: Enough to show dim(U ⊥ ) ≥ dim V − dim U . Let e1 , . . . , en be
basis for U . Then U ⊥ = ker(V → k ⊕n ) defined by v 7→ (hv, e1 i, . . . , hv, en i). By
rank-nullity, dim U ⊥ ≥ dim V − n.
Definition. A collection of nonzero vectors e1 , . . . , ek ∈ V is called orthogonal if
hei , ej i = 0 ∀i 6= j. Any orthogonal collection is linearly independent, since
⟨∑ ai ei, ej⟩ = aj ⟨ej, ej⟩, so ∑ ai ei = 0 ⇒ aj = 0 for every j.
If v = ∑ ai ei, then ai = ⟨v, ei⟩/⟨ei, ei⟩. An orthogonal collection is orthonormal if ‖ei‖ = 1.
Proposition (Gram-Schmidt). If v1 , . . . , vk is a lin. ind. collection, ∃!e1 , . . . , ek
orthogonal collection such that ei ∈ span(v1 , . . . , vi ) and ei −vi ∈ span(v1 , . . . , vi−1 ).
Proof. Let e1 = v1 . Suppose e1 , . . . , ei−1 have been found, and span(v1 , . . . , vi−1 ) =
span(e1 , . . . , ei−1 ). Then using P
induction it is enough to show that ∃!ei st ei − vi ∈
span(e1 , . . . , ei−1 ). If ei = vi + j aj ej , then
hei , ej<i i = hvi , ej i + aj hej , ej i = 0 ⇒ ai = −
hvi , ej i
hei , ej i
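The recursion in the proof is exactly the usual Gram-Schmidt computation. A sketch using numpy (np.vdot conjugates its first argument, so np.vdot(e_j, v) computes ⟨v, e_j⟩ in the convention above):

```python
import numpy as np

# Gram-Schmidt as in the proof: e_i = v_i - sum_j <v_i, e_j>/<e_j, e_j> e_j
def gram_schmidt(vs):
    es = []
    for v in vs:
        e = v.astype(complex)
        for u in es:
            e = e - (np.vdot(u, v) / np.vdot(u, u)) * u
        es.append(e)
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
for i in range(3):
    for j in range(i):
        assert abs(np.vdot(es[i], es[j])) < 1e-12   # pairwise orthogonal
assert np.allclose(es[0], vs[0])                    # e1 = v1
```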
Corollary. V admits an orthogonal basis and consequently an orthonormal basis.
T : V1 → V2 is an isometry if it is an isomorphism and hT v10 , T v100 i = hv10 , v100 i.
Lemma. A basis is orthonormal if and only if k n → V is an isometry with dot
product on k n .
Definition. For T : V → V, there ∃! T^adj : V → V st ⟨v1, T^adj(v2)⟩ = ⟨T(v1), v2⟩ for all v1, v2.
Proof. Fix v and consider ξ ∈ V* where ξ(w) := ⟨T(w), v⟩. Since V̄ ≅ V*, ξ = ξu for some u ∈ V, so ξ(w) = ⟨w, u⟩; set T^adj(v) = u. Checking conjugate-linearity is consistent:
⟨w, T^adj(cv)⟩ = ⟨T(w), cv⟩ = c̄⟨T(w), v⟩ = c̄⟨w, T^adj(v)⟩ = ⟨w, c T^adj(v)⟩
T is self-adjoint if T = T adj .
Lemma. If we choose an orthonormal basis of V and construct the corresponding matrix (aij) for T, then T is self-adjoint if and only if aij = āji (so aij = aji when k = R).
Proof. With respect to an orthonormal basis, aij = ⟨T ej, ei⟩. If T is self-adjoint, ⟨T ej, ei⟩ = ⟨ej, T ei⟩, which is the conjugate of ⟨T ei, ej⟩ = aji; hence aij = āji. The converse is similar.
Lemma. (T1 T2 )adj = T2adj T1adj and (T adj )adj = T .
An isometry T : V → V is called orthogonal if k = R and unitary if k = C.
Proposition. T is an isometry if and only if T −1 = T adj .
Proof. T is an isometry iff
hu, vi = hT u, T vi = hT adj T (u), vi
for arbitrary v. Letting v range over an orthonormal basis implies T adj T = Id.
Proposition. ker T = (Im T adj )⊥ .
Proof. u ∈ ker T ⇔ T u = 0 ⇔ hT u, vi = 0 ∀v ⇔ hu, T adj (v)i = 0 ⇔ u ∈
(Im T adj )⊥ .
Corollary. Im(T ) = ker(T adj )⊥ .
The proof follows from the following lemma and (T adj )adj = T .
Lemma. (U ⊥ )⊥ = U .
Proof. Since U ⊕ U ⊥ ' V ' U ⊥ ⊕ (U ⊥ )⊥ , the two spaces have equal dimension
and U ⊂ (U ⊥ )⊥ so we have equality.
Corollary. T is injective if and only if T adj is surjective, and T is surjective if
and only if T adj is injective.
Definition. For T : V → V , T is normal if T T adj = T adj T .
Theorem (Complex Spectral). Over C, T is normal if and only if T admits an
orthonormal basis of eigenvectors.
Proof. (⇐) Writing T in this basis, T = diag(λ1 , . . . , λn ) and T adj = diag(λ̄1 , . . . , λ̄n )
commute.
(⇒) Let Vλ0 be a nonzero λ eigenspace. Then T : Vλ0 → Vλ0 . Let U = (Vλ0 )⊥ .
Claim: T sends U to itself. We have
U = (ker(T − λ Id))⊥ = Im(T adj − λ̄ Id)
In general, if S and T commute, then S preserves Im T . T adj commutes with T , so
T preserves Im(T adj − λ̄ Id).
Therefore T decomposes along V = V^0_λ ⊕ U. By induction, V = ⊕ V^0_λ, where the V^0_λ are all mutually orthogonal. Pick an orthonormal basis for each space.
Recall that T : V1 → V2 induces the right composition map T* : V2* → V1*. For T : V → V, the isomorphism ξ : V̄ → V* intertwines T^adj with T*, i.e., the following diagram commutes:

V̄ --T^adj--> V̄
 | ξ          | ξ
 v            v
 V* ---T*---> V*
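For the special case of a self-adjoint (hence normal) matrix, numpy's eigh exhibits the orthonormal eigenbasis directly. A sanity check of the theorem, assuming the standard inner product on C^2:

```python
import numpy as np

A = np.array([[2.0, 1.0 + 1.0j], [1.0 - 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)               # self-adjoint
w, U = np.linalg.eigh(A)                        # columns of U: eigenvectors
assert np.allclose(U.conj().T @ U, np.eye(2))   # orthonormal basis
assert np.allclose(A @ U, U @ np.diag(w))       # A U = U diag(w)
```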
12. Group representations
For a finite group G, a representation is a vector space V with ρ : G → GL(V ) =
Aut(V ) such that ∀g ∈ G, Tg = ρ(g) : V → V and
T g1 g2 = T g1 · T g2
T1 = IdV
Given (V1, ρ1), (V2, ρ2), a map S : V1 → V2 lies in Hom_G(V1, V2) if for all g (writing T^1_g = ρ1(g), T^2_g = ρ2(g)) the square

V1 --S--> V2
 | T^1_g    | T^2_g
 v          v
V1 --S--> V2

commutes.
Unless stated otherwise, assume all representations are finite dimensional.
Example.
(1) (V1 ⊕ V2 , ρ1 ⊕ ρ2 )
(2) V = k with ρ trivial: G → {Id} ⊂ Aut(k).
(3) Consider S ∈ Hom_G(k, V), with k the trivial representation. The intertwining condition says S(a) = g S(a) for all g. Since Hom_G(k, V) ↪ Hom_k(k, V) ≅ V,
Hom_G(k, V) ≅ {v ∈ V | Tg(v) = v ∀g ∈ G} =: V^G
(4) Define a representation on Hom_k(V1, V2) by g · S := ρ2(g) ∘ S ∘ ρ1(g^{−1}):
V1 --ρ1(g^{−1})--> V1 --S--> V2 --ρ2(g)--> V2
Check that (g′g″)S = g′(g″S).
(5) HomG (V1 , V2 ) = (Homk (V1 , V2 ))G .
ρ2 (g) ◦ S = S ◦ ρ1 (g) ∀g ⇔ ρ2 (g) ◦ S ◦ ρ1 (g −1 ) = S ∀g
(6) Define a representation on Homk (V, k) = V ∗ 3 ϕ by (g · ϕ)(v) = ϕ(g −1 v).
(7) Define a representation on V1 ⊗ V2 by g(v1 ⊗ v2 ) = gv1 ⊗ gv2 . The canonical
map V1∗ ⊗ V2 → Homk (V1 , V2 ) is actually a homomorphism of representations.
Given G, we construct the k-algebra k[G] = span{δg : g ∈ G} (formal sum) so
{δg } is a basis. Then define multiplication by
(∑_i ai δ_{gi}) (∑_j bj δ_{gj}) = ∑_{i,j} ai bj δ_{gi gj}
In particular, δg1 δg2 = δg1 g2 . We include k ,→ k[G] by a 7→ a · δ1 . This makes k[G]
into a k-algebra.
Lemma. On a vector space V, TFAE:
(1) an action of G
(2) an action of k[G]
Proof. (1) ⇒ (2): (∑ ai δ_{gi}) v := ∑ ai (gi v).
(2) ⇒ (1): g · v := δg v.
Let Fun(G) = set of all k-valued functions on G.
Lemma. (k[G])* ≅ Fun(G).
Proof. A linear map k[G] → k is determined by its values on the basis {δg}, i.e., by a function G → k.
We can define a representation ℓ on Fun(G) by (ℓ(g) · f)(g1) = f(g^{−1} g1). This is an action:
(ℓ(g′g″)f)(g1) = f((g″)^{−1}(g′)^{−1} g1) = (ℓ(g″)f)((g′)^{−1} g1) = (ℓ(g′)(ℓ(g″)f))(g1)
Proposition. Given a representation V , HomG (V, Fun(G)) ' V ∗ .
Proof. For S : V → Fun(G), define ϕ(v) := (S(v))(1). For ϕ : V → k, define
(S(v))(g) = ϕ(g −1 v). To check S respects G-actions,
S(g1 v)(g) = ϕ(g −1 g1 v) = S(v)(g1−1 g) = (`(g1 )S(v))(g)
To check we have isomorphism, ϕ → S → ϕ0 where ϕ0 (v) = (S(v))(1) = ϕ(v), and
S → ϕ → S 0 where (S 0 (v))(g) = ϕ(g −1 v) = S(g −1 v)(1). Using the intertwining
property of S, we have S(g −1 v)(1) = (`(g −1 )S(v))(1) = S(v)(g).
Suppose G is finite and we have a short exact sequence of representations
0 → V1 → V --ρ--> V2 → 0
with V fin. dim. There ∃ a splitting as vector spaces, but does there exist a splitting as representations? We must find i : V2 → V st ρ ∘ i = Id_{V2}, which gives V = V1 ⊕ V2.
Theorem. A splitting always* exists.
Proof. Step 1: Let v2 ∈ V2^G be invariant. We will show that ∃ v ∈ V^G with ρ(v) = v2. Let v′ ∈ V be any vector st ρ(v′) = v2. Take
ṽ := ∑_{g∈G} g · v′,  so that  ρ(ṽ) = ∑_g g v2 = |G| v2
Define Av_G : V → V by Av_G(v′) := (1/|G|) ∑_g g v′; then v = Av_G(v′) is invariant and ρ(v) = v2.
*(The theorem is true if the characteristic of the field does not divide |G|; in particular if the field has characteristic 0.
Example. k = F2, V = span{e1, e2}, with V ↠ k by (a, b) ↦ a + b and σ(a, b) = (b, a). Claim: there is no invariant (a, b) ∈ V projecting to 1. Invariant vectors are of the form (a, a), but then a + a = 2a = 0 in F2.)
Step 2: We know ∃ i : V2 → V in Hom_k(V2, V) with ρ ∘ i = Id. Now take S = Av_G(i) ∈ (Hom_k(V2, V))^G = Hom_G(V2, V). Then
ρ ∘ S = (1/|G|) ∑_g ρ ∘ g ∘ i ∘ g^{−1} = (1/|G|) ∑_g g ∘ Id_{V2} ∘ g^{−1} = Id_{V2}
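The averaging trick can be watched in action for G = Z/2 swapping the coordinates of V = C^2, with ρ the sum map (a minimal sketch; the section i is an arbitrary choice):

```python
import numpy as np

# G = Z/2 acts on V = C^2 by swapping coordinates; rho : V -> V2 = C
# is the sum map, a map of representations (G acts trivially on V2).
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
G = [np.eye(2), swap]
rho = np.array([[1.0, 1.0]])             # (a, b) -> a + b
i = np.array([[1.0], [0.0]])             # arbitrary linear section: rho @ i = Id
S = sum(g @ i for g in G) / len(G)       # Av_G(i); g acts trivially on V2
assert np.allclose(rho @ S, np.eye(1))   # still a section: rho o S = Id
assert all(np.allclose(g @ S, S) for g in G)  # and now G-equivariant
```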
Given a representation V and a vector space U (equivalently, treating U as a trivial representation), we can define a representation on V ⊗_k U by g(v ⊗ u) = gv ⊗ u.
Lemma. (V ⊗ U)^G ≅ V^G ⊗ U.
Proof. The map V G ⊗ U → (V ⊗ U )G is clear. This map is an isomorphism because
U ≅ k^n, so
V^G ⊗ U ≅ V^G ⊕ · · · ⊕ V^G ≅ (V ⊕ · · · ⊕ V)^G ≅ (V ⊗ U)^G
Lemma. For a vector space V , TFAE:
(1) An action of G1 × G2 .
(2) Action of G1 and action of G2 that commute g2 (g1 v) = g1 (g2 v).
Proof. (⇐) Define (g1 × g2 )v := g1 (g2 v).
(⇒) Define g1 v := (g1 × 1)v and g2 v := (1 × g2 )v. G1 , G2 clearly commute.
Given G1 -representation V1 and G2 -representation V2 , we can define G1 × G2 on
V1 ⊗ V2 by (g1 × g2 )(v1 ⊗ v2 ) = g1 v1 ⊗ g2 v2 .
Lemma. (V1 ⊗ V2 )G1 ×G2 ' V1G1 ⊗ V2G2 .
Proof. Method 1. Use the natural map V1G1 ⊗ V2G2 → (V1 ⊗ V2 )G1 ×G2 . We have
from previous lemma that W G1 ×G2 = (W G1 )G2 . Here W = V1 ⊗ V2 , so from the
other lemma, W G1 = V1G1 ⊗ V2 and (V1G1 ⊗ V2 )G2 = V1G1 ⊗ V2G2 .
Method 2. Observe that for U1 ⊂ V1 , U2 ⊂ V2 ,
(V1 ⊗ U2 ) ∩ (U1 ⊗ V2 ) = U1 ⊗ U2 ⊂ V1 ⊗ V2
Since W^{G1×G2} = W^{G1} ∩ W^{G2}, we get
(V1 ⊗ V2)^{G1×G2} = (V1^{G1} ⊗ V2) ∩ (V1 ⊗ V2^{G2}) = V1^{G1} ⊗ V2^{G2}
Lemma. HomG1 ×G2 (V1 ⊗ V2 , V10 ⊗ V20 ) ' HomG1 (V1 , V10 ) ⊗ HomG2 (V2 , V20 ).
Proof. Use the natural map from right to left. The lemma follows from
(Homk (V1 ⊗ V2 , V10 ⊗ V20 ))G1 ×G2 ' (Homk (V1 , V10 ) ⊗ Homk (V2 , V20 ))G1 ×G2
Definition. A representation is called irreducible if it does not have a non-trivial
sub-representation.
Lemma. V is irreducible if and only if for every nonzero v ∈ V, span{gv | g ∈ G} = V.
Corollary. Assume G is finite and char(k) ∤ |G|. Every fin. dim. representation V of G is isomorphic to a direct sum ⊕ Vi, where the Vi are irreducible.
Proof. If V is reducible, ∃ V1 ↪ V proper nonzero, and we get the short exact sequence
0 → V1 → V → V/V1 → 0
A splitting exists, which implies V = V1 ⊕ V/V1; continue by induction.
Lemma (Schur’s Lemma). If V is irreducible and k is alg. closed, then k '
EndG (V ) (i.e., only scalar multiplication).
Counterexample. k = R is not alg. closed. Let V = R^2 = C and G = Z/nZ (n ≥ 3) act by m · c = exp(2πim/n) c for m ∈ G, c ∈ C. If V were reducible, there would be c ≠ 0 with dim span{m · c} = 1, i.e., m · c = rc for some r ∈ R, which cannot happen when exp(2πim/n) ∉ R. So V is irreducible, yet End_G(V) ≅ C is strictly larger than the scalars R.
Proof. Let T ∈ End_G(V). Then ∃ λ ∈ k st the eigenspace V^0_λ ≠ {0}. For nonzero v ∈ V^0_λ, span{gv} = V and
T(gv) = g(Tv) = g(λv) = λ(gv)
so T = λ Id on all of V.
Assume G is finite, char(k) - |G|, k alg. closed.
Corollary. For fin. dim. V,
(1) Every representation V of G can be written as ⊕_π π ⊗ Uπ, where the π are distinct irreducible representations.
(2) If V1 ≅ ⊕_π π ⊗ U^1_π and V2 ≅ ⊕_π π ⊗ U^2_π, then Hom_G(V1, V2) = ⊕_π Hom_k(U^1_π, U^2_π).
Proof. 1) Every representation can be written as ⊕ π, where the π may not be distinct. Grouping distinct irreducible representations together, we have
V = ⊕_π π^{⊕nπ} = ⊕_π π ⊗ k^{nπ}
so let Uπ = k^{nπ}. Note that we also have Uπ = k^{nπ} = Hom_G(π, V).
2) We have
Hom_G(π ⊗ U, π′ ⊗ U′) = Hom_G(π, π′) ⊗ Hom_k(U, U′)
If π ≇ π′, Hom_G(π, π′) = 0. Otherwise Hom_G(π, π′) = k, and k ⊗ Hom_k(U, U′) = Hom_k(U, U′). The result follows.
Corollary. V is irreducible if and only if EndG (V ) is one dimensional.
Proposition. Let V1, V2 be irreducible representations of G1, G2, respectively.
(1) Then V1 ⊗ V2 is irreducible as a representation of G1 × G2 .
(2) Every irreducible representation of G1 × G2 is of the form V1 ⊗ V2 .
Proof. 1) EndG1 ×G2 (V1 ⊗ V2 ) = EndG1 (V1 ) ⊗ EndG2 (V2 ) = k ⊗ k = k, so V1 ⊗ V2
is also irreducible.
2) Let W be an irreducible representation of G1 × G2. Look at W as a representation of only G1 and decompose into W = ⊕ π1 ⊗ Uπ1, with Uπ1 = Hom_{G1}(π1, W). Now consider W as a representation of G2: since G2 commutes with G1, each g2 preserves the π1-isotypic pieces, so G2 acts only on Uπ1. So W splits into ⊕ π1 ⊗ Uπ1, where π1 is a G1-representation and Uπ1 is a G2-representation. The irreducibility of W then implies that there is only one direct summand π1 ⊗ Uπ1, and that π1, Uπ1 are both irreducible.
Recall the representation Fun(G) of G with left action (ℓ(g) f)(g1) = f(g^{−1} g1). For a representation V of G, Hom_G(V, Fun(G)) ≅ V* by sending φ ↦ ϕ where ϕ(v) = φ(v)(1). We define a right action of G by (f^g)(g1) = f(g1 g). The two actions commute:
(ℓ(g2)(f^{g1}))(x) = f(g2^{−1} x g1) = ((ℓ(g2) f)^{g1})(x)
so Fun(G) is a representation of G × G = G1 × G2. Then as in PS 9 Problem 4, Hom_{G1}(V′, W′) can be viewed as a G2-representation; therefore Hom_G(V, Fun(G)) is a representation of G by the right action.
Proposition. The isomorphism T : Hom_G(V, Fun(G)) → V* is compatible with the G-actions.
Proof. Suppose T(φ) = ϕ. Then
T(g · φ)(v) = [(g · φ)(v)](1) = [φ(v)^g](1) = φ(v)(g) = [ℓ(g^{−1}) φ(v)](1) = φ(g^{−1} v)(1) = ϕ(g^{−1} v) = (g · ϕ)(v)
so T(g · φ) = g · ϕ.
Corollary. For G finite, char(k) ∤ |G|, k alg. closed, as a G × G-representation
Fun(G) ≅ ⊕_π π ⊗ π*
since Hom_G(π, Fun(G)) ≅ π*.
We define a new action G y Fun(G) by (Adg (f ))(g1 ) = f (g −1 g1 g), which is the
same as G → G × G y Fun(G) with left and right actions.
Definition. A function is called Ad-invariant if Adg (f ) = f ∀g.
Let G/ Ad(G) be the set of conjugacy classes of elements in G.
Lemma. A function f : G → k is Ad-invariant if and only if it factors as G --π--> G/Ad(G) → k (its value on every conjugacy class is constant).
Corollary. The number of conjugacy classes equals the number of pairwise non-isomorphic irreducible representations of G.
Proof. The functions equal to 1 on a single conjugacy class and 0 elsewhere form a basis of (Fun(G))^{Ad(G)}, so the number of conjugacy classes equals

dim(Fun(G))^{Ad(G)} = Σ_π dim(π ⊗ π*)^{Ad(G)}

Since π ⊗ π* ≅ Homk(π, π),

(π ⊗ π*)^{Ad(G)} = (Endk(π))^G = EndG(π) ≅ k

so Σ_π dim(π ⊗ π*)^{Ad(G)} equals the number of irreducible representations.
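As an illustration (not part of the notes), the conjugacy-class count can be checked by brute force for a small symmetric group. A minimal sketch in Python: for S4 the classes correspond to the 5 partitions of 4.

```python
from itertools import permutations

# Conjugacy classes of S_n by brute force: g ~ h iff h = s g s^{-1} for some s.
# Permutations are stored as tuples giving the images of 0..n-1.

def compose(s, t):
    """(s ∘ t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(len(s)))

def inverse(s):
    inv = [0] * len(s)
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

def conjugacy_classes(n):
    G = list(permutations(range(n)))
    remaining = set(G)
    classes = []
    while remaining:
        g = next(iter(remaining))
        # The Ad(G)-orbit of g is its conjugacy class.
        orbit = {compose(compose(s, g), inverse(s)) for s in G}
        classes.append(orbit)
        remaining -= orbit
    return classes

classes = conjugacy_classes(4)
print(len(classes))                      # 5 = number of partitions of 4
print(sorted(len(c) for c in classes))   # class sizes [1, 3, 6, 6, 8]
```

The class sizes 1, 3, 6, 6, 8 correspond to the cycle types (identity, double transpositions, transpositions, 4-cycles, 3-cycles), and they sum to |S4| = 24.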
Since HomG(V, Fun(G)) ≅ V* and there is a map V' ⊗ HomG(V', W') → W' (PS 9, Problem 4), we get V ⊗ HomG(V, Fun(G)) → Fun(G), i.e. a map

MC : V ⊗ V* → Fun(G)

This is the "matrix coefficient" map.
Lemma. M C(v ⊗ ξ)(g) = ξ(g −1 v).
Proof. ξ ∈ V ∗ corresponds to Ξ ∈ HomG (V, Fun(G)) where Ξ(v)(g) = ξ(g −1 v), so
M C(v ⊗ ξ) = Ξ(v) and the lemma follows.
We can define the bilinear map B : (V ⊗ V*) × End(V) → k by

B(v ⊗ ξ, T) = ⟨T(v), ξ⟩ := ξ(T(v))

so MC_V(v ⊗ ξ)(g) = B(v ⊗ ξ, g⁻¹). Note that V ⊗ V* ≅ Endk(V) ∋ Id_V, and MC(Id_V)(g) = Tr(g⁻¹, V) by PS 9. We also have

MC_{V⊗W}((v ⊗ w) ⊗ (ξ ⊗ ψ)) = MC_V(v ⊗ ξ) · MC_W(w ⊗ ψ)

which implies Tr_{V⊗W} = Tr_V · Tr_W, where Tr_V = MC(Id_V).
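The multiplicativity Tr_{V⊗W} = Tr_V · Tr_W can be checked numerically, since the matrix of g on V ⊗ W is the Kronecker product of its matrices on V and W. A small sketch, with the Kronecker product written out by hand and arbitrary matrices A, B standing in for a group element:

```python
# Check Tr(A ⊗ B) = Tr(A) · Tr(B) for square matrices A (n×n) and B (m×m).

def kron(A, B):
    """Kronecker (tensor) product: entry (i, j) is A[i//m][j//m] * B[i%m][j%m]."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]                    # trace 5
B = [[0, 1, 0], [2, 3, 0], [0, 0, 4]]   # trace 7

assert trace(kron(A, B)) == trace(A) * trace(B)  # 35 == 35
```

The diagonal of A ⊗ B consists of all products A[a][a]·B[b][b], so its trace factors as the product of the two traces.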
Corollary. (Fun(G))^{Ad(G)} = span(Tr(·, π)): the traces of the irreducible representations form a basis of the space of Ad-invariant functions.
Proof. Observe that Fun(G) ≅ ⊕_π π ⊗ π* as G × G-representations, and

MC : π ⊗ π* → Fun(G)

is the inclusion map. Using Schur's Lemma,

(Fun(G))^{Ad(G)} ≅ ⊕_π (π ⊗ π*)^G ≅ ⊕_π (Endk(π))^G = ⊕_π EndG(π) = ⊕_π span(Id_π)

So the functions MC(Id_π) = Tr(·, π) span (Fun(G))^{Ad(G)}. Since dim(Fun(G))^{Ad(G)} equals the number of irreducible representations, they form a basis.
For f1, f2 ∈ Fun(G)^{Ad(G)}, define

(f1, f2) := (1/|G|) Σ_g f1(g) · f2(g⁻¹)

From now on take k = C.
Lemma. f(g⁻¹) = \overline{f(g)} (the complex conjugate) for f ∈ Fun(G)^{Ad(G)}.
Proof. Tr(g⁻¹, V) = \overline{Tr(g, V)} by PS 10, Problem 7, and f is a sum of traces.
Therefore (f1, f2) = (1/|G|) Σ_g f1(g) · \overline{f2(g)}.
Theorem. (Tr_{π1}, Tr_{π2}) = 0 if π1 ≇ π2, and (Tr_{π1}, Tr_{π2}) = 1 if π1 ≅ π2.
Proof. From PS 10, Problem 5, Tr(g⁻¹, π2) = Tr(g, π2*), so

(Tr_{π1}, Tr_{π2}) = (1/|G|) Σ_g Tr(g, π1) · Tr(g⁻¹, π2) = (1/|G|) Σ_g Tr(g, π1) · Tr(g, π2*) = (1/|G|) Σ_g Tr(g, π1 ⊗ π2*)

Theorem. (1/|G|) Σ_g Tr(g, W) = dim(W^G).
Proof. We have B : (W ⊗ W*) × End(W) → k defined by B(w ⊗ ξ, T) = ⟨T(w), ξ⟩. Let Av_G = (1/|G|) Σ_{g∈G} g ∈ End(W). Now define a map W ⊗ W* → k by sending w ⊗ ξ to

B(w ⊗ ξ, Av_G) = (1/|G|) Σ_{g∈G} B(w ⊗ ξ, g⁻¹) = (1/|G|) Σ_{g∈G} MC_W(w ⊗ ξ)(g)

so Tr(Av_G) = B(Id_W, Av_G) = (1/|G|) Σ_g Tr(g⁻¹, W), which is the desired value (reindexing g ↦ g⁻¹). We can decompose W = Im(Av_G) ⊕ ker(Av_G) where Im(Av_G) = W^G. Av_G vanishes on the kernel and is the identity on the image, so Tr(Av_G) = dim(Im Av_G) = dim(W^G).
The previous theorem implies that
(Trπ1 , Trπ2 ) = dim((π1 ⊗ π2∗ )G ) = dim HomG (π2 , π1 )
so we are done by Schur’s lemma.
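The orthogonality theorem can be verified directly for G = S3, whose character table is standard: the trivial, sign, and 2-dimensional standard representations, evaluated on the classes of the identity (size 1), the transpositions (size 3), and the 3-cycles (size 2). A quick numerical check, assuming that character table:

```python
# Verify (Tr_π1, Tr_π2) = 1 if π1 ≅ π2 and 0 otherwise, for G = S_3 over C.
# Conjugacy class sizes: {e} (1), transpositions (3), 3-cycles (2).
class_sizes = [1, 3, 2]
chars = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}

def pairing(f1, f2, sizes=class_sizes):
    # (f1, f2) = (1/|G|) Σ_g f1(g) · conj(f2(g)); these characters are real.
    order = sum(sizes)  # |S_3| = 6
    return sum(c * a * b for c, a, b in zip(sizes, f1, f2)) / order

for n1, c1 in chars.items():
    for n2, c2 in chars.items():
        expected = 1 if n1 == n2 else 0
        assert pairing(c1, c2) == expected
print("S_3 character table is orthonormal")
```

For instance (Tr_std, Tr_std) = (1·4 + 3·0 + 2·1)/6 = 1, while (Tr_triv, Tr_sign) = (1 − 3 + 2)/6 = 0.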
Next we would like to classify all representations of Sn . We know that |Irr(Sn )| =
|Sn / Ad(Sn )|, the number of conjugacy classes in Sn , which corresponds to the partitions of n. A partition p of n is of the form n = n1 +n2 +· · ·+nk where ni ≥ ni+1 .
Map p ↦ S_p ⊆ S_n, where S_p = S_{n1} × · · · × S_{nk} is the subgroup of all permutations that preserve the "chunks" defined by the partition. If we represent p by a diagram with bars of height ni, then p̄ is the partition obtained by transposing the diagram; clearly transposing twice recovers p. We say p ≤ q if we can roll blocks down from p to get q. More formally, p ≤ q if for all k, the number of squares below line k in p is ≤ the number of squares below line k in q.
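This ordering on partitions can be made computational. The sketch below follows the notes' convention literally: a part of height ni contributes min(ni, k) squares below line k, and the specific partitions in the asserts are illustrative examples.

```python
# The order on partitions defined in the notes: p ≤ q iff for every k, the
# number of squares below line k in the diagram of p is at most that of q.
# A partition is a weakly decreasing tuple of positive integers summing to n.

def squares_below(p, k):
    # Each bar of height n_i contributes min(n_i, k) squares below line k.
    return sum(min(part, k) for part in p)

def dominated(p, q):
    """p ≤ q in the order above (p and q must partition the same n)."""
    assert sum(p) == sum(q)
    kmax = max(max(p), max(q))
    return all(squares_below(p, k) <= squares_below(q, k)
               for k in range(1, kmax + 1))

# (4) has all its squares in one tall bar; rolling blocks down flattens it
# into partitions such as (2, 2) and eventually (1, 1, 1, 1).
assert dominated((4,), (2, 2))
assert dominated((2, 2), (1, 1, 1, 1))
assert not dominated((2, 2), (4,))
# (3, 1) vs (2, 2): below line 1 the counts are 2 vs 2; below line 2, 3 vs 4.
assert dominated((3, 1), (2, 2))
```

Note this is a partial order: for n ≥ 6 there are incomparable pairs, e.g. (3, 1, 1, 1) and (2, 2, 2).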
Using the definition of Ind^G_H(U) from PS 10, we consider Ind^{S_n}_{S_p}(k) and Ind^{S_n}_{S_p̄}(sign), where sign is the representation of S_p̄ on k given by multiplication by the sign of each factor.
Theorem.
(1) Hom_{S_n}(Ind^{S_n}_{S_p}(k), Ind^{S_n}_{S_q̄}(sign)) ≠ 0 only if p ≤ q.
(2) Hom_{S_n}(Ind^{S_n}_{S_p}(k), Ind^{S_n}_{S_p̄}(sign)) is 1-dimensional.
The proof will be given later.
Let π_p be the image of a non-zero map Ind^{S_n}_{S_p}(k) → Ind^{S_n}_{S_p̄}(sign).
Proposition. πp is irreducible.
Proof. By (2) of the previous theorem there is only one such map up to scalar, so if π_p were reducible we could define another map by projecting onto a proper subrepresentation.
Proposition. p1 ≤ p2 and p̄1 ≤ p̄2 ⇒ p1 = p2 .
Proposition. p1 ≠ p2 ⇒ π_{p1} ≇ π_{p2}.
Proof. Suppose π_{p1} ≅ π_{p2}. By definition,

Ind^{S_n}_{S_{p1}}(k) ↠ π_{p1} and π_{p2} ↪ Ind^{S_n}_{S_{p̄2}}(sign)

are nonzero maps. Since π_{p1} ≅ π_{p2}, we can compose to get a nonzero map Ind^{S_n}_{S_{p1}}(k) → Ind^{S_n}_{S_{p̄2}}(sign). Then by (1) of the previous theorem, p1 ≤ p2. By symmetry, p2 ≤ p1, so p1 = p2, a contradiction.
Proposition. The πp exhaust all irreducible representations of Sn .
Proof. The sets have the same cardinality.
Corollary. π_p is the only constituent of Ind^{S_n}_{S_p}(k) that does not appear as a constituent of Ind^{S_n}_{S_q}(k) for q > p.