UC Davis Mathematics
MATH 250B: ALGEBRA
SEMISIMPLICITY
1. Remarks on non-commutative rings
We begin with some reminders and remarks on non-commutative rings. First, recall that if R is a
not necessarily commutative ring, an element x ∈ R is invertible if it has both a left (multiplicative)
inverse and a right (multiplicative) inverse; it then follows that the two are necessarily equal. Note
however, that unlike the example of n × n matrices over a field, in a general noncommutative ring
having a left inverse does not imply the existence of a right inverse, and vice versa. When R is not
commutative, we will use the term “module” over R to mean “left module” over R.
Furthermore, the standard definitions of matrices and matrix multiplication all work perfectly
over non-commutative rings, and in particular given a non-commutative ring R, we have the ring
Matn (R) of n × n matrices with coefficients in R. We say that R is a division ring if every nonzero
element is invertible; that is, a division ring is a field which is not necessarily commutative. Then,
much of the theory of vector spaces over a field extends to modules over a division ring: in particular,
they are all free, and any two bases of a given module have the same cardinality. We will still refer
to modules over division rings as “vector spaces”, and the cardinality of a basis as the “dimension.”
Example 1.1. The quaternions form a division ring over the real numbers.
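A numeric sketch of Example 1.1: encoding quaternions a + bi + cj + dk as 4-tuples (a, b, c, d), multiplication is associative but not commutative, and every nonzero element has a two-sided inverse q⁻¹ = conj(q)/|q|². (The helper names `qmul`, `qinv` are illustrative, not from the text.)

```python
# Quaternions as 4-tuples; exact arithmetic via Fraction.
from fractions import Fraction

def qmul(p, q):
    """Quaternion product, using i^2 = j^2 = k^2 = ijk = -1."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    """Inverse of a nonzero quaternion: conjugate divided by the norm."""
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d
    return (a/n, -b/n, -c/n, -d/n)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k                   # ij = k ...
assert qmul(j, i) == (0, 0, 0, -1)       # ... but ji = -k: not commutative

q = tuple(Fraction(x) for x in (1, 2, -1, 3))
assert qmul(q, qinv(q)) == (1, 0, 0, 0)  # qq^{-1} = 1
```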
Working over a division ring, homomorphisms between finite-dimensional vector spaces still
correspond to matrices, as in the field case. However, we will want to work with a somewhat
more general version of this correspondence. Suppose R is any ring, and let M = M1 ⊕ · · · ⊕ Mn ,
N = N1 ⊕ · · · ⊕ Nm for some R-modules Mi , Ni . Then the group HomR (M, N ) is in natural
bijection with “matrices” of the form
\[
\begin{pmatrix} \varphi_{11} & \cdots & \varphi_{1n} \\ \vdots & & \vdots \\ \varphi_{m1} & \cdots & \varphi_{mn} \end{pmatrix},
\]
where ϕi,j ∈ HomR (Mj , Ni ). As a special case of this:
Proposition 1.2. Let M be an R-module. Then for any n > 0 we have a ring isomorphism
EndR (M ⊕n ) ≅ Matn (EndR (M )).
Warning 1.3. There is a subtlety which arises in the noncommutative case, which is that if M is a
one-dimensional vector space over a division ring D, with basis {v}, then we have a natural map of
groups D → EndD (M ) sending c ∈ D to the map sending v to c · v. However, although this map is
a bijection, it is not a ring isomorphism, as one can see that it reverses the order of multiplication.
Moreover, we see that the map sending v to c·v is not “scalar multiplication by c”, as the module
condition implies that it sends av to acv, not cav. Thus, there are no “scalar multiplication”
endomorphisms of a module over a non-commutative ring.
Warning 1.4. Recall also that the definition, given R-modules M, N , of the R-module structure on
HomR (M, N ) required R to be commutative. Indeed, suppose
f : M → N is a homomorphism. Given r ∈ R, we would like to define rf by m ↦ rf (m). However,
this is not a homomorphism if R is not commutative: given r′ ∈ R, we have (rf )(r′ m) = rf (r′ m) =
rr′ f (m), whereas r′ ((rf )(m)) = r′ rf (m).
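A concrete sketch of Warning 1.4, with R = Mat2(Z), M = R as a left module over itself, and f = id: the candidate “scalar multiple” (rf)(m) = r·m fails to be a homomorphism of left R-modules when R is not commutative. (The names `matmul`, `rf` are illustrative helpers.)

```python
def matmul(x, y):
    """Product of 2x2 integer matrices given as nested tuples."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

r  = ((0, 1), (0, 0))   # elementary matrix E12
rp = ((0, 0), (1, 0))   # elementary matrix E21 (our r')
m  = ((1, 0), (0, 1))   # the identity, an element of M

rf = lambda x: matmul(r, x)   # (rf)(m) = r f(m), with f = id

# R-linearity would require (rf)(r'm) == r'((rf)(m)), i.e. r r' m == r' r m:
assert rf(matmul(rp, m)) == matmul(matmul(r, rp), m)   # (rf)(r'm) = r r' m
assert rf(matmul(rp, m)) != matmul(rp, rf(m))          # but r r' m != r' r m
```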
2. Semisimple modules
Definition 2.1. A module M over a ring R is simple if it is nonzero and has no submodules other
than 0 and itself. M is semisimple if it is a direct sum of simple modules.
Example 2.2. Z/pZ is a simple Z-module for any prime p, and accordingly any direct sum of such
modules is semisimple. However, Z/p²Z is not semisimple: it is clearly not a direct sum of two or
more submodules, but on the other hand, it is not simple, since it contains the submodule pZ/p²Z.
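A brute-force sketch of Example 2.2 for p = 2: the Z-module Z/4Z has exactly three submodules, and the proper nonzero one, {0, 2}, admits no complement, so Z/4Z is neither simple nor semisimple. (The helper names `is_submodule`, `direct_sum` are illustrative.)

```python
from itertools import combinations

n = 4
elements = set(range(n))

def is_submodule(S):
    # A finite subset containing 0 and closed under addition is a
    # subgroup of Z/nZ, hence a Z-submodule.
    return 0 in S and all((a + b) % n in S for a in S for b in S)

submodules = [frozenset(S)
              for r in range(1, n + 1)
              for S in combinations(range(n), r)
              if is_submodule(set(S))]

assert sorted(sorted(S) for S in submodules) == [[0], [0, 1, 2, 3], [0, 2]]

M0 = frozenset({0, 2})   # the submodule 2Z/4Z

def direct_sum(A, B):
    """Does Z/4Z decompose as A (+) B?"""
    return A & B == {0} and {(a + b) % n for a in A for b in B} == elements

# No submodule complements {0, 2}:
assert not any(direct_sum(M0, N) for N in submodules)
```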
One of the most basic facts about simple and semisimple modules is that it is relatively easy to
describe their endomorphism rings. We begin with the case of simple modules.
Proposition 2.3 (Schur’s lemma). Suppose M, N are simple R-modules. Then every nonzero
homomorphism M → N is an isomorphism. In particular, EndR (M ) is a division ring.
Proof. Let ϕ : M → N be a nonzero homomorphism. Its kernel and image are submodules of M
and N respectively, and since ϕ ≠ 0, simplicity implies that we must have ker ϕ = 0 and im ϕ = N .
Thus, ϕ is bijective, and hence an isomorphism.
We can now apply this to semisimple modules.
Proposition 2.4. Suppose M = M1⊕n1 ⊕ · · · ⊕ Mr⊕nr , with the Mi simple R-modules, and pairwise
non-isomorphic. Then the Mi and ni are uniquely determined up to permutation, and the ring
EndR (M ) can be realized as the ring of block-diagonal matrices of the form
\[
\begin{pmatrix} \varphi_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \varphi_r \end{pmatrix},
\]
where ϕi ∈ Matni (Di ), with Di the division ring EndR (Mi ).
Proof. The matrix description follows immediately from our previous discussion, together with
Proposition 2.3. It thus remains to see the asserted uniqueness. Suppose we have also M ≅
N1⊕m1 ⊕ · · · ⊕ Ns⊕ms , so that we have an isomorphism
M1⊕n1 ⊕ · · · ⊕ Mr⊕nr → N1⊕m1 ⊕ · · · ⊕ Ns⊕ms .
Since this map is injective, we see that for each i, the map
Mi → N1⊕m1 ⊕ · · · ⊕ Ns⊕ms
induced by including the first factor of Mi into M is nonzero, and thus that for some j, the induced
map Mi → Nj⊕mj is nonzero. This last map is determined by mj maps Mi → Nj , at least one of
which must be nonzero. We conclude by Proposition 2.3 that Mi ≅ Nj . Thus, each Mi is isomorphic
to some Nj , and considering the inverse map we conclude that r = s and (up to isomorphism) the
Nj are just a permutation of the Mi .
It is thus enough to see that if we have a simple module M ′ with (M ′ )⊕n ≅ (M ′ )⊕n′ , then
n = n′ . But we have
Matn (EndR (M ′ )) ≅ EndR ((M ′ )⊕n ) ≅ EndR ((M ′ )⊕n′ ) ≅ Matn′ (EndR (M ′ )),
and we can check that these isomorphisms are actually isomorphisms of vector spaces over EndR (M ′ )
(which is a division ring, by Proposition 2.3). Since isomorphic vector spaces have the same
dimension, we conclude that n² = (n′ )², and hence that n = n′ .
We now move on to give some equivalent characterizations of semisimple modules:
Proposition 2.5. Suppose that M is an R-module. Then the following are equivalent:
(i) M is semisimple;
(ii) M has simple submodules Mi ⊆ M for i ∈ I such that the induced map ⊕i∈I Mi → M is
surjective;
(iii) for every submodule M ′ ⊆ M , there is a submodule N ⊆ M such that M = M ′ ⊕ N .
Furthermore, under the above equivalent conditions, we have that every nonzero submodule of M
contains a simple submodule.
Note that (ii) is equivalent to saying that M is a sum (not necessarily direct) of simple submodules.
Proof. First, let {Mi ⊆ M }i∈I be any collection of simple submodules with sum equal to M , and
M ′ ⊆ M any submodule: we claim that if J ⊆ I is maximal (such a J exists by Zorn’s lemma) so
that ϕJ : M ′ ⊕ (⊕j∈J Mj ) → M is injective, then in fact ϕJ is an isomorphism. It suffices to show
that Mi is contained in the image of ϕJ for all i ∈ I. By the simplicity of the Mi , we have that
Mi ∩ im ϕJ is either 0 or Mi . But if it is 0, we can add i to J, contradicting the maximality. Thus
we conclude that ϕJ is indeed surjective, as claimed.
Our claim proves both that (ii) implies (i), by setting M ′ = 0, and also that (i) implies (iii),
by letting the Mi be the simple modules in a direct sum decomposition of M . It thus remains to
see that (iii) implies (ii), for which the crucial point is that under hypothesis (iii), every nonzero
submodule of M must contain a simple submodule. For this, it suffices to show that for every nonzero
v ∈ M , the module Rv contains a simple submodule. Let I ⊆ R be the kernel of the map R → M
sending r to rv. Then because v ≠ 0, we have I ≠ R, and by Zorn’s lemma I is contained in some
maximal left ideal m of R. We then have that mv is a maximal submodule of Rv, not equal to
Rv. By hypothesis, we have some submodule N ⊆ M such that mv ⊕ N = M . Because mv ⊆ Rv,
we then have that Rv = mv ⊕ (N ∩ Rv). We then have that N ∩ Rv is simple, since any proper
nonzero submodule would yield a proper submodule of Rv strictly containing mv, contradicting
maximality. We thus conclude that N ∩ Rv is the desired simple submodule of Rv.
We can now easily conclude (ii): let M ′ ⊆ M be the sum of all simple submodules of M . If
M ′ ≠ M , we have M = M ′ ⊕ N for some nonzero N ⊆ M . But then N contains a simple
submodule, contradicting the definition of M ′ . We therefore conclude that the three conditions
are equivalent, and our argument has also shown that every nonzero submodule of M contains a
simple submodule, as desired.
Corollary 2.6. If M is a semisimple R-module, then every submodule or quotient module of M is
semisimple.
Proof. Let N ⊆ M be a submodule, and let N ′ ⊆ N be the sum of all simple submodules of N . By
Proposition 2.5 (iii), we can write M = N ′ ⊕ M ′ for some M ′ ⊆ M . Since N ′ ⊆ N , we have
N = N ′ ⊕ (M ′ ∩ N ). If M ′ ∩ N ≠ 0, then by Proposition 2.5 it must contain a simple submodule,
but this violates the definition of N ′ , so we conclude that M ′ ∩ N = 0 and N = N ′ . By Proposition
2.5 (ii), we conclude that N is semisimple.
Now suppose that N = M/N ′ for some N ′ . Then by Proposition 2.5 (iii), we have M = N ′ ⊕ M ′
for some M ′ ⊆ M . Then the quotient map M → M/N ′ = N induces an isomorphism M ′ → N ,
and M ′ is semisimple by the case of submodules above, so we conclude that N is likewise
semisimple.
Corollary 2.7. Suppose that a ring R is semisimple when considered as a module over itself. Then
every R-module is semisimple.
Proof. It is clear from the definition that a direct sum of semisimple modules is semisimple, so if
R is semisimple over itself, then so is every free R-module. Since every module is a quotient of a
free module, we conclude from Corollary 2.6 that every R-module is semisimple.
Example 2.8. A commutative ring R is simple as a module over itself if and only if its only ideals
are 0 and R, if and only if R is a field.
It is almost as rare for R to be semisimple as a module over itself. Specifically, suppose that
R = ⊕i Ii for some simple ideals Ii . Then for any i ≠ j, we have Ii Ij ⊆ Ii ∩ Ij = (0), so we see
that (assuming R is not a field, so there is more than one of the Ii ) we cannot have R an integral
domain.
Example 2.9. If k is a field, we will show later that the ring Matn (k) is semisimple over itself.
Example 2.10. If G is a finite group, and k is a field of characteristic not dividing the order of
G, then we will show that the group ring k[G] is semisimple; this is a crucial step in the modern
development of the representation theory of finite groups.
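A tiny sketch of Example 2.10 with G = Z/2Z = {1, g} and k = Q: writing elements of Q[G] as pairs (a, b) = a + bg, the orthogonal idempotents e± = (1 ± g)/2 split Q[G] as a product of two copies of Q, so it is semisimple (char Q = 0 does not divide |G| = 2). The names `mul`, `e_plus`, `e_minus` are illustrative helpers, not notation from the text.

```python
from fractions import Fraction as F

def mul(x, y):
    """Multiply a + bg and c + dg in Q[Z/2Z], using g^2 = 1."""
    a, b = x
    c, d = y
    return (a*c + b*d, a*d + b*c)

half = F(1, 2)
e_plus, e_minus = (half, half), (half, -half)   # e± = (1 ± g)/2

assert mul(e_plus, e_plus) == e_plus            # idempotent
assert mul(e_minus, e_minus) == e_minus         # idempotent
assert mul(e_plus, e_minus) == (0, 0)           # orthogonal
assert (e_plus[0] + e_minus[0], e_plus[1] + e_minus[1]) == (1, 0)  # e+ + e- = 1
```

These idempotents play the role of the elements e1, …, em appearing later in the proof of Theorem 4.4.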
Corollary 2.11. Let M be an R-module, and suppose that M is the sum of a family of simple
submodules Mi . Then every simple submodule of M is isomorphic to one of the Mi .
Proof. Since M is a sum of simple submodules, by Proposition 2.5 it is semisimple. Let N ⊆ M
be a simple submodule. Then again applying Proposition 2.5, there exists a submodule M ′ ⊆ M
such that M = N ⊕ M ′ , so there is a surjective homomorphism M ↠ N . Since M is a sum of the
Mi , the induced map Mi → N must be nonzero for some i, and then by Proposition 2.3 Mi ≅ N ,
as desired.
3. Density and consequences
If M is an R-module, we have discussed that for r ∈ R, the “scaling by r” endomorphism of M
is not an R-module endomorphism, because the order of multiplication is reversed. However, we
note that M may also be considered as a module over the ring EndR (M ): if f ∈ EndR (M ), for
m ∈ M we can define f · m to be f (m). We then have:
Proposition 3.1. If M is an R-module, and r ∈ R, then the multiplication-by-r map M → M is
a homomorphism of EndR (M )-modules. This induces a ring homomorphism
R → EndEndR (M ) (M ).
Proof. We need to check that for all f ∈ EndR (M ), and m ∈ M , we have f · rm = r(f · m). But
f · rm = f (rm) = rf (m) = r(f · m)
by definition of the action of f , and the assumption that f is a homomorphism of R-modules. That
the induced map is a ring homomorphism is immediate from the definition.
Definition 3.2. If M is an R-module, then EndR (M ) is the commutant of R, and EndEndR (M ) (M )
is the bicommutant of R.
Thus, the homomorphism of Proposition 3.1 maps R to its bicommutant. It is natural to wonder
how large the image of this map is. The density theorem, due to Jacobson, says that any element
of the bicommutant can be “approximated” by an element in the image, as follows:
Theorem 3.3. Suppose M is a semisimple R-module, and set R′ = EndR (M ). Fix any f ∈
EndR′ (M ). Then for any x1 , . . . , xn ∈ M , there exists r ∈ R such that
rxi = f (xi ) for i = 1, . . . , n.
Furthermore, if M is finitely generated over R′ , then the homomorphism
R → EndR′ (M )
of Proposition 3.1 is surjective.
The first step in the proof of Theorem 3.3 is the following lemma, which is just the special case
of the theorem when n = 1.
Lemma 3.4. Let M be a semisimple R-module, and set R′ = EndR (M ). Then for any f ∈
EndR′ (M ), and any x ∈ M , there exists r ∈ R such that rx = f (x).
Proof. By semisimplicity, there is a submodule N ⊆ M such that M = Rx ⊕ N . Let π ∈ R′ =
EndR (M ) be the projection sending (rx, n) to (rx, 0). Then since f ∈ EndR′ (M ), we have f (x) =
f (π(x)) = π(f (x)), so f (x) lies in the image Rx of π, and there exists r ∈ R with f (x) = rx, as
desired.
Lemma 3.5. Let M be an R-module, and R′ = EndR (M ). Given n > 1, and f ∈ EndR′ (M ), we
have that f ⊕n ∈ EndR′n (M ⊕n ), where R′n = EndR (M ⊕n ).
Proof. We know that R′n ≅ Matn (R′ ). Given g ∈ R′n , represent g by (gij )1≤i,j≤n , with gij ∈ R′ .
Given (m1 , . . . , mn ) ∈ M ⊕n , since f ∈ EndR′ (M ), we have
\[
\begin{aligned}
f^{\oplus n} g(m_1, \dots, m_n) &= f^{\oplus n}\Big(\sum_{j=1}^n g_{1j}(m_j), \dots, \sum_{j=1}^n g_{nj}(m_j)\Big) \\
&= \Big(\sum_{j=1}^n f g_{1j}(m_j), \dots, \sum_{j=1}^n f g_{nj}(m_j)\Big) \\
&= \Big(\sum_{j=1}^n g_{1j}(f m_j), \dots, \sum_{j=1}^n g_{nj}(f m_j)\Big) \\
&= g(f m_1, \dots, f m_n) \\
&= g f^{\oplus n}(m_1, \dots, m_n),
\end{aligned}
\]
which is what we wanted.
Proof of Theorem 3.3. We first observe that, granting the main statement, the surjectivity assertion
in the case that M is finitely generated over R′ is immediate, since in this case f is determined by
its values on a finite generating set for M .
For the main statement, the basic trick is to apply Lemma 3.4 to the induced endomorphism
f ⊕n : M ⊕n → M ⊕n ,
noting that M ⊕n is again semisimple. Indeed, by Lemma 3.5 we have that f ⊕n is a homomorphism
of EndR (M ⊕n )-modules. Then we can apply Lemma 3.4 to f ⊕n and the element (x1 , . . . , xn ) ∈ M ⊕n ,
and we obtain an r ∈ R such that
(rx1 , . . . , rxn ) = r(x1 , . . . , xn ) = f ⊕n (x1 , . . . , xn ) = (f (x1 ), . . . , f (xn )),
yielding the desired statement.
Remark 3.6. Note where we use the finiteness of n in the proof of Theorem 3.3: an infinite direct
sum of semisimple modules is still semisimple, but does not contain an element (xi )i∈I with infinitely
many xi nonzero, so the argument doesn’t apply.
Before we apply the density theorem, we make some fundamental observations.
Definition 3.7. If R is a ring, define the opposite ring Ro to have the same elements as R, but
with multiplication reversed: given r, r′ ∈ R, we define r · r′ in Ro to be equal to r′ · r in R.
Exercise 3.8. Show that if we consider R as a module over itself, we have
EndR (R) = Ro ,
where Ro acts on R via right multiplication. Conclude that
EndR (R⊕n ) = Matn (Ro ).
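A small sketch of the order reversal in Exercise 3.8, with R = Mat2(Z): right multiplication phi_r(x) = x·r is an endomorphism of R as a left module over itself, and composition satisfies phi_r ∘ phi_s = phi_{sr}, so r ↦ phi_r identifies End_R(R) with the opposite ring R^o. (The names `matmul`, `phi` are illustrative helpers.)

```python
def matmul(x, y):
    """Product of 2x2 integer matrices given as nested tuples."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def phi(r):
    """Right multiplication by r."""
    return lambda x: matmul(x, r)

r = ((1, 2), (3, 4))
s = ((0, 1), (1, 1))
a = ((1, 1), (0, 1))
x = ((2, 0), (1, 3))

# phi(r) is left R-linear: phi_r(a x) = a phi_r(x), by associativity.
assert phi(r)(matmul(a, x)) == matmul(a, phi(r)(x))

# Composition reverses the order of multiplication: phi_r(phi_s(x)) = x s r.
assert phi(r)(phi(s)(x)) == phi(matmul(s, r))(x)
```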
Next, recall that if D is a division ring, then every module over D is free; thus, if V is finitely
generated over D, it has some finite dimension n, and in this case
EndD (V ) ≅ EndD (D⊕n ) ≅ Matn (Do ).
We also define:
Definition 3.9. An R-module M is faithful if for all nonzero r ∈ R, there exists m ∈ M with
rm ≠ 0.
Observe that faithfulness is equivalent to the injectivity of the natural map
R → EndR′ (M ),
with R′ = EndR (M ).
The following consequence of the density theorem is a form of Wedderburn’s theorem:
Theorem 3.10. Let M be a semisimple, faithful module over R, let R′ = EndR (M ), and suppose
that M is finitely generated over R′ . Then R = EndR′ (M ). In particular, if M is simple, R is
isomorphic to a matrix algebra over the division ring (R′ )o .
Proof. In the case that M is simple, R′ is a division ring by Proposition 2.3, and we observed
following Exercise 3.8 that if M is finitely generated over R′ , then EndR′ (M ) is isomorphic to a
matrix algebra over (R′ )o . It thus suffices in all cases to prove that the canonical map R → EndR′ (M )
is an isomorphism. But the definition of faithful implies immediately that this map is injective, and
Theorem 3.3 gives us that it is surjective, so we conclude the desired result.
Wedderburn’s theorem thus gives us a criterion for proving that a ring can be realized as a matrix
algebra over a division ring. The following corollary is closer to the original version of Wedderburn’s
theorem:
Corollary 3.11. Let k be a field, and R a k-algebra which is finite-dimensional as a k-vector space,
and has no two-sided ideals other than 0 and R. Then there is a division ring D containing k such
that R is isomorphic to a matrix algebra over Do .
Proof. Suppose M is a simple R-module. Then D := EndR (M ) is a division ring by Proposition
2.3, and since k is contained in the center of R, we see that scalar multiplication by an element of
k is R-linear, and thus that D naturally contains k. According to Theorem 3.10, it is then enough
to show that there exists a faithful simple R-module M which is finite-dimensional over D. But if
M is any non-zero R-module, it is necessarily faithful, since the kernel of
R → EndD (M )
is a two-sided ideal which cannot be all of R, as R contains a multiplicative identity. Let M be a
nonzero left ideal of R of minimal k-dimension: this makes sense because left ideals of R are
k-subspaces, and R is assumed finite-dimensional over k. Since two vector spaces of the same
dimension with one contained in the other must be equal, M is then a minimal nonzero left ideal
of R. It is simple by minimality, and finite-dimensional over D because it is finite-dimensional over
k, so we can apply Theorem 3.10 to conclude the desired statement.
We give one more corollary, which requires a simple lemma.
Lemma 3.12. Let k be an algebraically closed field, and D a k-algebra which is finite-dimensional
over k and a division ring. Then D = k.
Proof. Given x ∈ D, consider the subring k[x] ⊆ D. This is necessarily commutative since we
assume that k is contained in the center of D, and since D is finite-dimensional over k, we conclude
that x is algebraic over k. Finally, since D is a division ring, it does not have zero divisors, so the
minimal polynomial of x must be irreducible. Thus, k[x] is an algebraic field extension of k, and
hence equal to k, since k is algebraically closed. Since x was arbitrary, we conclude D = k.
Remark 3.13. Note that although the quaternions are a C-module, they are not a C-algebra, since
our definition requires C to be in the center of a C-algebra. Thus, they do not contradict the
statement of the lemma.
Corollary 3.14. Suppose that k is an algebraically closed field, and R is a k-algebra, finite-dimensional
over k and without two-sided ideals other than 0 and R. Then R is isomorphic to
a matrix algebra over k.
Proof. This follows immediately from Corollary 3.11 together with Lemma 3.12.
4. Semisimple rings
Definition 4.1. A ring R is semisimple if it is semisimple as a module over itself.
Thus, we can rephrase Corollary 2.7 as the statement that every module over a semisimple ring
is semisimple.
When given two left ideals of a ring, we say they are isomorphic if they are isomorphic as modules.
Definition 4.2. A ring R is simple if it is semisimple and if any two simple left ideals of R are isomorphic.
Warning 4.3. Lang’s definition of simple rings differs from standard terminology as follows: usually,
a simple ring is a ring without nontrivial two-sided ideals. We will see in Theorem 4.8 below that
simple rings in Lang’s sense do not have non-trivial two-sided ideals, but the converse does not
necessarily hold. However, in some important cases it does; see Corollary 4.12 below.
Our main aims are to give structure results on semisimple and simple rings, and to prove that
all matrix algebras are simple.
Theorem 4.4. Let R be semisimple. Then R has a finite collection of simple rings Ri ⊆ R such
that R = ∏i Ri .
Furthermore, we can describe the Ri as follows: each Ri corresponds to an isomorphism class
of simple left ideals of R, and Ri is equal to the sum of all the left ideals in that isomorphism class.
Lemma 4.5. Let I be a simple left ideal of a ring R, and let M be a simple R-module. Then if
IM ≠ 0, we must have I isomorphic to M .
Proof. If IM ≠ 0, there exists some m ∈ M with Im ≠ 0. Then the map r ↦ rm from I to M is
nonzero, and by Proposition 2.3, it is an isomorphism.
Proof of Theorem 4.4. Let {Ij }j∈J be a family of simple left ideals of R such that every simple left
ideal of R is isomorphic to precisely one of the Ij . For each j ∈ J, define Rj to be the sum of all
simple left ideals of R isomorphic to Ij . Then by construction, Rj is a left ideal of R. Since R is
assumed semisimple, we see that it must be the sum of the Rj . By Lemma 4.5, we have Rj Rj′ = (0)
for all distinct j, j′ ∈ J. It follows that Rj R = Rj Rj for all j, and since Rj Rj ⊆ RRj ⊆ Rj , we see
that Rj R ⊆ Rj , and thus Rj is also a right ideal of R.
Now, since the sum of the Rj is equal to R, we can write
1 = e1 + · · · + em ,
with ei ∈ Rji for some j1 , . . . , jm ∈ J. Given x ∈ R, write
x = ∑j∈J xj ,
with xj ∈ Rj and all but finitely many xj = 0. For i = 1, . . . , m we have eji x = eji xji , since
eji xj′ = 0 for all j′ ≠ ji . Furthermore,
xji = 1 · xji = ej1 · xji + · · · + ejm · xji = eji · xji .
Finally,
x = ej1 · x + · · · + ejm · x = xj1 + · · · + xjm .
We draw several conclusions: first, if x ∈ Rj for some j ∉ {j1 , . . . , jm }, we have eji · x = 0 for
each i, so x = 0. Thus, we have J = {j1 , . . . , jm }. Moreover, since xji = eji · x, we see that the xji
are uniquely determined by x, so we find that R is a direct sum of the Rj . Finally, we see that the eji
act as identity elements in Rji , so since we already have that Rji is a two-sided ideal, we conclude
it is a ring. We thus find that R is a (finite) direct product of the Rj . Lastly, observe that an
Rj -submodule of Rj is the same as an R-submodule of Rj due to the direct product decomposition,
so the simple R-submodules of Rj are also the simple Rj -submodules. Since each Rj is a sum of
isomorphic simple left ideals, we conclude by Corollary 2.11 that all the simple submodules of Rj
are isomorphic, and thus that Rj is a simple ring.
We can likewise give a structure theorem for modules over a semisimple ring.
Theorem 4.6. Let R be a semisimple ring, and M an R-module. Let R = ∏j Rj as in Theorem
4.4, with ej ∈ Rj the identity for each j, and Ij a simple left ideal of R contained in Rj . Then:
M = ⊕j Rj M = ⊕j ej M,
and Rj M is the sum of all submodules of M isomorphic to Ij .
Proof. First observe that since ej ∈ Rj acts as the identity on Rj , we immediately have Rj M = ej M ,
and in addition, ej M is an R-module via the action r · (ej m) = ej (rm). In particular, ej acts as
the identity on ej M .
Let Mj be the sum of the simple submodules of M isomorphic to Ij . Our first claim is that if
N is a simple submodule of M , and ej N ≠ 0 for some j, then N ≅ Ij and hence N ⊆ Mj . Indeed,
since ej ∈ Rj and Rj is a sum of simple left ideals isomorphic to Ij , there must exist some simple
left ideal Ij′ ⊆ Rj of R isomorphic to Ij such that Ij′ N ≠ 0. Then by Lemma 4.5 we have
N ≅ Ij′ ≅ Ij , as desired.
Next, given any simple N ⊆ M , since 1 = ∑j ej , we must have ej N ≠ 0 for some j, so
N ⊆ Mj . Since M is semisimple by Corollary 2.7, we conclude that M is the sum of the Mj . We
claim that in fact it is a direct sum. To verify this, it is enough to show that for each j, we have
Mj ∩ (∑j′≠j Mj′ ) = (0).
If this intersection is nonzero, then it contains some simple submodule N . But N ⊆ Mj implies by
Corollary 2.11 that N ≅ Ij , while N ⊆ ∑j′≠j Mj′ implies that N ≅ Ij′ for some j′ ≠ j, which is a
contradiction.
It thus remains to show that Mj = Rj M for all j ∈ J. Both modules are semisimple, so it is
enough to check equality on simple submodules. If N ⊆ Rj M = ej M is simple, then since ej acts
as the identity on ej M , we have ej N ≠ 0, and hence by the above claim, N ≅ Ij , so N ⊆ Mj .
Conversely, if N ⊆ Mj is simple, we have by Corollary 2.11 that N ≅ Ij . On the other hand,
ej′ N ≠ 0 for some j′ , and again applying our claim above we find N ≅ Ij′ , so N ⊆ Mj′ . Since we
have shown that Mj ∩ Mj′ = (0) when j ≠ j′ , we conclude that j = j′ , so ej N ≠ 0. But then
ej N = Rj N is a nonzero submodule of N , and by simplicity, ej N = N , so we conclude that
N ⊆ ej M . We have thus shown Mj = ej M , completing the proof of the theorem.
Corollary 4.7. Let R be a semisimple ring. Then every simple R-module is isomorphic to a simple
left ideal of R. In particular, if R is simple, then any two simple R-modules are isomorphic.
We now continue on to study simple rings.
Theorem 4.8. Let R be a simple ring. Then R is a finite direct sum of simple left ideals. If I, I′
are simple left ideals, then there exists r ∈ R such that Ir = I′ . In particular, IR = R, and R has
no two-sided ideals other than 0 and R.
Proof. By definition, there exist simple left ideals Ij (all isomorphic to one another) such that
R = ⊕j Ij . We first claim that this is a finite sum. Write 1 = ∑j ij , with ij ∈ Ij , and all but
finitely many ij = 0. We claim that in fact we have all ij ≠ 0, so that there are only finitely many
Ij , as desired. Indeed, for any j′ ≠ j we have Ij ij′ ⊆ Ij′ , since Ij′ is a left ideal, but given any
r ∈ Ij , we can write
r = ∑j′ rij′ .
Since a direct sum representation is unique, we conclude that rij = r, and thus (since Ij is assumed
nonzero) that ij ≠ 0, as claimed.
Now, let I, I′ be any simple left ideals of R. Since R is semisimple, we have R = I ⊕ J for some
submodule (i.e., left ideal) J ⊆ R, and we thus have a surjective homomorphism R ↠ I which
is the identity on I. By simplicity of R, we have I ≅ I′ , so fixing an isomorphism, we obtain an
(R-linear) endomorphism
R ↠ I ≅ I′ ↪ R
sending I to I′ . By Exercise 3.8, this is induced by right multiplication by some element
r ∈ R, so we conclude that I′ = Ir, as desired. Since R is a sum of simple left ideals, this implies
that IR = R. This in turn implies that if J is a nonzero two-sided ideal, it must be all of R: indeed,
J must contain some simple left ideal I, and then
J = JR ⊇ IR = R.
Finally, as promised, we show that matrix algebras over a division ring are simple.
Theorem 4.9. Let D be a division ring, and V a finite-dimensional vector space over D. Then
R = EndD (V ) is simple, and V is a simple R-module. Moreover, D = EndR (V ).
Proof. Suppose v ∈ V is nonzero. Then v can be completed to a basis of V over D, and thus v
may be mapped to any element of V by a suitable D-endomorphism. We conclude that V has
no nonzero R-submodules other than V ; that is, V is simple over R. Now, let {v1 , . . . , vm }
be a basis for V over D. Then we obtain an R-module homomorphism R → V ⊕m sending r to
(rv1 , . . . , rvm ). This map is bijective because given (w1 , . . . , wm ) ∈ V ⊕m , there exists a unique
D-linear endomorphism of V sending each vi to wi . We thus conclude R ≅ V ⊕m as an R-module.
Thus R is semisimple, and indeed simple by Corollary 2.11.
It remains to verify that D = EndR (V ). We will apply Theorem 3.10 with D and R in place of
R and R′ . We thus need to verify that V is semisimple and faithful over D, and that V is finitely
generated over R. But every non-zero vector space over a division ring is semisimple and faithful,
and every simple module is finitely generated (indeed, generated by any nonzero element), so we
conclude that D = EndR (V ), as desired.
Corollary 4.10. If D, D′ are division rings, and Matn (D) ≅ Matn′ (D′ ) for some n, n′ , then
D ≅ D′ and n = n′ .
Proof. If R = Matn (D) = EndDo ((Do )⊕n ), we have by Theorem 4.9 that R is a simple ring, that
(Do )⊕n is a simple R-module, and that Do = EndR ((Do )⊕n ). Since the simple R-modules are
unique up to isomorphism, it follows that we can recover Do (and hence D) simply from the ring
structure on R. We thus conclude D ≅ D′ . We then obtain n = n′ by dimension-counting.
As an additional immediate consequence, we observe that the following basic result from linear
algebra is a special case of Theorem 4.9:
Corollary 4.11. Let k be a field, and M ∈ Matn (k) a matrix commuting with all other matrices.
Then M is a scalar matrix.
Proof. This is a direct application of the theorem with D = k, and V = k ⊕n , so that we can think
of M as an element of R. Indeed, if M commutes with all other matrices, it is an element of
EndR (V ), and thus is (the image of) a scalar in k.
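A brute-force sketch of Corollary 4.11 for n = 2: among all 2×2 integer matrices with entries in {−2, …, 2}, those commuting with the elementary matrices E12 and E21 are exactly the scalar matrices. (The names `matmul`, `central` are illustrative; commuting with these two elementary matrices already forces a matrix to be scalar.)

```python
from itertools import product

def matmul(x, y):
    """Product of 2x2 integer matrices given as nested tuples."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

E12 = ((0, 1), (0, 0))
E21 = ((0, 0), (1, 0))

central = [M for M in (((a, b), (c, d))
                       for a, b, c, d in product(range(-2, 3), repeat=4))
           if matmul(M, E12) == matmul(E12, M)
           and matmul(M, E21) == matmul(E21, M)]

# Every such M is c times the identity:
assert all(M[0][0] == M[1][1] and M[0][1] == M[1][0] == 0 for M in central)
assert len(central) == 5   # one for each scalar c in {-2, -1, 0, 1, 2}
```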
We also find several equivalent characterizations of simplicity for finite-dimensional algebras over
a field:
Corollary 4.12. Let k be a field, and R a k-algebra, finite-dimensional over k. Then the following
are equivalent:
a) R is simple;
b) R has no two-sided ideals other than 0 and R;
c) R is isomorphic to a matrix algebra over a division ring containing k.
Proof. a) implies b) without any hypotheses on R, by Theorem 4.8. b) implies c) by Corollary 3.11.
Finally, c) implies a) by Theorem 4.9.
Remark 4.13. The significance of Corollary 4.10 is that if k is a field, then every central simple
algebra (CSA) over k is not only isomorphic to a matrix ring over a division ring, but this division
ring is unique up to isomorphism. Thus every element of the Brauer group has a unique representative
by a division ring.
This allows us to define the index of an element of the Brauer group as the dimension of the
division ring over k (note that otherwise, dimension over k is not preserved by the equivalence
relation used to define the Brauer group). Similarly, one can show that the order of an element of
the Brauer group, called the period, is always finite, and in fact always divides the index.
The “period-index problem” asks for what fields the period is equal to the index, and what one
can say more generally (for instance, for which fields the index divides the square of the period,
and so forth). This has been a topic of a great deal of recent research.