AN ELEMENTARY DERIVATION OF THE JORDAN NORMAL FORM
WITH AN APPENDIX ON LINEAR SPACES

H. R. van der Vaart

Didactical Report

Work supported by Public Health Service Grant GM-618 from
the National Institute of General Medical Sciences to the
Biomathematics Training Program, Institute of Statistics

Raleigh Section
Institute of Statistics
Mimeograph Series No. 460
1966
INTRODUCTION
The aim of this mimeograph is purely didactical. The Jordan normal form of
a matrix is generally regarded as an advanced topic. It is hard to find in the
literature a complete, somewhat leisurely exposition which in all its phases is
essentially based on nothing more than the concepts of linear space and subspace,
basis and direct sum, dimension, and the fundamental idea of mapping. In this
sense the present exposition is elementary.

Most of the proofs assembled here are essentially known, but the present
account constitutes a consistent effort to use none but the most elementary tools
throughout the entire development. It is hoped that this report may help dispel
whatever misunderstandings "users" of matrix theory may have about this topic.
CONTENTS

                                                                        Page
1. Preliminaries                                                         1-1
   1.1. Representation of a linear mapping by a square matrix            1-1
   1.2. Change of basis                                                  1-3
   1.3. Special cases                                                    1-5
   1.4. Eigenvalues, eigenvectors, characteristic polynomial             1-6
   1.5. Cayley-Hamilton theorem                                          1-8
2. Canonical representations                                             2-1
   2.1. A singular mapping on V is nilpotent on a certain subspace of V  2-2
   2.2. V is a direct sum of generalized eigenspaces                     2-4
   2.3. A diagonal block representation of linear mappings               2-7
   2.4. A canonical representation of nilpotent linear mappings          2-10
   2.5. The Jordan normal form of an n x n matrix                        2-16
A. Appendix                                                              A-1
   A.1. Fields                                                           A-2
   A.2. Complex numbers                                                  A-8
   A.3. Linear spaces                                                    A-9
   A.4. Basis, dimension, direct sum                                     A-12
   A.5. Linear mappings                                                  A-21
L. Some literature on linear algebra                                     L-1
1. PRELIMINARIES

1.1. Representation of a linear mapping by a square matrix

We will consider only such linear mappings ℒ as map a certain linear space
(V, say) into the same space: ℒ : V → V. We could choose one basis in V as
(containing) the domain of ℒ and another basis in V as (containing) the range
of ℒ, but we will discuss only the case in which these two bases are the same.

Let now {e'_1, e'_2, ..., e'_n} be one such basis of V. For each
j = 1, ..., n, the vector ℒe'_j, being a vector in V, is a linear combination
of the basis vectors:

(1.1,1)    ℒe'_j = Σ_{i=1..n} e'_i α_ij    (j = 1, ..., n).

The matrix A = [α_ij] (i = 1, ..., n; j = 1, ..., n) completely determines the
linear mapping ℒ. In fact, the vector c, which can be uniquely represented by
c = Σ_{j=1..n} e'_j γ_j, is carried into, say,

    d = Σ_{i=1..n} e'_i δ_i = ℒc = Σ_{j=1..n} (ℒe'_j) γ_j = Σ_i Σ_j e'_i α_ij γ_j.

Since {e'_1, ..., e'_n} constitutes a basis of V, it follows that

(1.1,2)    δ_i = Σ_{j=1..n} α_ij γ_j    (i = 1, ..., n),

which is the familiar formula for the transformation of vectors (such as c)
when given by their coordinates, i.e. in the form

(1.1,2a)    c = [γ_1]
                [γ_2]
                [ ⋮ ]
                [γ_n] .

Instead of the notation used in (1.1,2) we have the well known alternatives

(1.1,2b)    [δ_1]   [α_11  α_12  ...  α_1n] [γ_1]
            [δ_2] = [α_21  α_22  ...  α_2n] [γ_2]
            [ ⋮ ]   [  ⋮     ⋮          ⋮ ] [ ⋮ ]
            [δ_n]   [α_n1  α_n2  ...  α_nn] [γ_n]

or, in matrix notation,

(1.1,2c)    d = A c .

The matrix A is said to be the matrix of ℒ (or the matrix representing ℒ)
relative to the basis {e'_1, ..., e'_n}. Note that the number of rows of A and
the number of columns of A equal n, the dimension of V.
1.2. Change of basis

Let {e''_1, e''_2, ..., e''_n} be another basis of V; then clearly each e''_k
is a linear combination of the e'_i:

(1.2,1)    e''_k = Σ_{i=1..n} e'_i π_ik    (k = 1, ..., n).

Now

    ℒe''_h = ℒ Σ_k e'_k π_kh = Σ_k (ℒe'_k) π_kh = Σ_k Σ_i e'_i α_ik π_kh ,

but also

    ℒe''_h = Σ_g e''_g β_gh = Σ_g Σ_i e'_i π_ig β_gh ,

where B = [β_gh] is the matrix of ℒ re the basis {e''_1, ..., e''_n}. So
obviously

    Σ_k α_ik π_kh = Σ_g π_ig β_gh    (i, h = 1, ..., n),

or

(1.2,2)    A Π = Π B ,

where Π = [π_ik], as defined in eq. (1.2,1), is the matrix expressing the new
basis vectors in terms of the old ones.

As an alternative to eq. (1.2,1) it is also obvious that each e'_g is a linear
combination of the e''_k:

(1.2,3)    e'_g = Σ_k e''_k ρ_kg    (g = 1, ..., n).

Hence

    e''_k = Σ_g e'_g π_gk = Σ_g Σ_l e''_l ρ_lg π_gk ,

so that Σ_g ρ_lg π_gk = δ_lk , where δ_lk = 0 if l ≠ k, δ_lk = 1 if l = k.
Thus, with I = [δ_lk] and P = [ρ_kg],

(1.2,4)    P Π = I ,

i.e., Π has an inverse, P = Π^(-1). This allows us to replace (1.2,2) by

(1.2,5)    Π^(-1) A Π = B .

Two matrices A and B satisfying an equation of the form (1.2,5) are said to be
similar. They represent the same linear transformation ℒ : V → V re different
bases.
Note. The concept of similar matrices is easiest grasped by a discussion of a
linear transformation ℒ represented by different matrices relative to
different bases. Practical situations are usually less symmetrical in that in
practice a linear transformation is often introduced by its matrix
representation to begin with (re some unidentified basis). By the same token
vectors are often introduced in matrix form (cf. eq. (1.1,2a)), i.e., by their
coordinates (re the same unidentified basis). Let L be such a matrix, and
assume we have n linearly independent vectors

    c_1 = [γ_11]    c_2 = [γ_12]    ...    c_n = [γ_1n]
          [γ_21]          [γ_22]                 [γ_2n]
          [ ⋮  ]          [ ⋮  ]                 [ ⋮  ]
          [γ_n1]          [γ_n2]                 [γ_nn]

and scalars β_ij such that

(1.2,6)    L c_i = Σ_j c_j β_ji    (i = 1, 2, ..., n).

Let (c_1 c_2 ... c_n) denote the matrix with first column c_1, second column
c_2, etc. Writing C for this matrix we find from (1.2,6):

(1.2,6a)    L (c_1 c_2 ... c_n) = (c_1 c_2 ... c_n) [β_11  β_12  ...  β_1n]
                                                    [β_21  β_22  ...  β_2n]
                                                    [  ⋮     ⋮          ⋮ ]
                                                    [β_n1  β_n2  ...  β_nn] ,

hence

(1.2,6b)    L C = C B ,    or    C^(-1) L C = B ,    or    L = C B C^(-1).

In other words, under the assumption (1.2,6), L and B are similar matrices,
and the transforming matrix C is built from the column matrices
c_i = [γ_1i, γ_2i, ..., γ_ni]'.
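A modern illustration of eq. (1.2,6b), not part of the 1966 original (Python
with numpy assumed): any invertible C built from n independent columns yields
a similar matrix B = C^(-1) L C, and similar matrices share eigenvalues.

```python
import numpy as np

# Illustrative sketch: form B = C^{-1} L C as in eq. (1.2,6b) and check that
# the similar matrices L and B have the same eigenvalues.
L = np.array([[2.0, 1.0],
              [0.0, 3.0]])
C = np.array([[1.0, 1.0],     # columns c_1, c_2: any invertible choice
              [1.0, 2.0]])

B = np.linalg.inv(C) @ L @ C  # the matrix representing the mapping re the new basis

print(np.sort(np.linalg.eigvals(L)))  # [2. 3.]
print(np.sort(np.linalg.eigvals(B)))  # [2. 3.], as similarity requires
```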
1.3. Special cases

i) The identity mapping 𝒥, defined by 𝒥v = v for each v ∈ V, is represented
by one and the same matrix relative to all bases:

    I = [1  0  ...  0]
        [0  1  ...  0]
        [⋮  ⋮       ⋮]
        [0  0  ...  1] .

Prove!

ii) ℒ carries each v ∈ V into the null vector 0 iff the matrix representing ℒ
re some basis is the null matrix

    O = [0  ...  0]
        [⋮       ⋮]
        [0  ...  0] .

The null matrix then represents this mapping re all bases. Prove! (This
particular ℒ will be denoted by 𝒪.)

iii) Define ℒ² =def ℒ ∘ ℒ, i.e., ℒ²v = ℒ(ℒv) for each v ∈ V. Similarly
ℒ^k =def ℒ ∘ ℒ^(k-1), so "by induction" ℒ^s is now defined for every natural
s. Now let the matrix L represent ℒ relative to some basis. Prove that the
matrix L² represents ℒ² re the same basis. Prove that the matrix L^s
represents ℒ^s re the same basis, where s is any natural number. Let φ be a
(formal) polynomial, φ(ξ) = Σ_{i=0..r} α_i ξ^i. Then prove that

    φ(ℒ)v = Σ_i α_i ℒ^i v    for each v ∈ V.

Note that

    if ξ is a scalar,  φ(ξ) is a scalar;
    if ξ is a mapping, φ(ξ) is a mapping;
    if ξ is a matrix,  φ(ξ) is a matrix.
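A modern illustration of point (iii), not part of the 1966 original (Python
with numpy assumed): evaluating a polynomial at a matrix by Horner's rule,
which is exactly the matrix counterpart of φ(ℒ).

```python
import numpy as np

# Illustrative sketch: phi(A) = sum_i alpha_i A^i, coefficients given
# lowest degree first, evaluated by Horner's rule.
def matrix_polynomial(coeffs, A):
    n = A.shape[0]
    result = np.zeros_like(A)
    for alpha in reversed(coeffs):        # ((a_r A + a_{r-1} I) A + ...) pattern
        result = result @ A + alpha * np.eye(n)
    return result

A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(matrix_polynomial([1.0, 2.0, 3.0], A))  # I + 2A + 3A^2 = [[1,2],[0,1]]
```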
1.4. Eigenvalues, eigenvectors, characteristic polynomial

When a non-null vector v ∈ V and a scalar λ exist such that

(1.4,1)    ℒv = λv ,

then λ is called an eigenvalue of ℒ, v an eigenvector of ℒ. If V is an
n-dimensional vector space, the eigenvalues of ℒ are roots of an n-th degree
polynomial equation.

Proof. Choose a basis {e'_1, ..., e'_n} of V. Let γ_j be the coordinates (re
this basis) of v, i.e., v = Σ_j e'_j γ_j. Let A = [α_ij] represent ℒ re the
same basis. Then

    ℒv = ℒ Σ_j e'_j γ_j = Σ_j Σ_i e'_i α_ij γ_j ,

and ℒv = λv iff

(1.4,1a)    Σ_i Σ_j e'_i α_ij γ_j = Σ_i e'_i λ γ_i ,

i.e., iff

(1.4,1b)    Σ_j (λ δ_ij − α_ij) γ_j = 0    for all i = 1, ..., n,

where δ_ij = 1 if i = j, δ_ij = 0 if i ≠ j. Equation (1.4,1b) has non-null
solutions for the γ_j, i.e., equation (1.4,1) holds for non-null vectors of V,
iff λ satisfies the equation

(1.4,2)    det(λI − A) = 0 .

Clearly this determinant is an n-th degree polynomial in λ, in which λ^n is
the highest degree term ("leading coefficient is 1"). The polynomial

(1.4,2a)    χ_A(λ) =def det(λI − A)

is called the characteristic polynomial of the matrix A.

Now choose another basis of V, {e''_1, e''_2, ..., e''_n}, connected with
{e'_1, ..., e'_n} by equation (1.2,1). Repeating the above argument, we find
that the eigenvalues λ of ℒ are also to satisfy the equation

(1.4,2b)    χ_B(λ) =def det(λI − B) = 0 ,

where B represents ℒ re the basis {e''_1, ..., e''_n}. One would not like the
roots of equation (1.4,2) to be different from those of eq. (1.4,2b). Indeed
det(λI − B) = det(λI − A) for all scalars λ; for, by equation (1.2,5),

    det(λI − B) = det(λI − Π^(-1) A Π) = det{Π^(-1)(λI)Π − Π^(-1) A Π}
                = det{Π^(-1)(λI − A)Π}
                = det(Π^(-1))·det(λI − A)·det Π = det(λI − A)   for all λ.

Denote χ_A(λ) = χ_B(λ) by χ(λ). The polynomial expression χ(λ) is called the
characteristic polynomial of the mapping ℒ. Note that one can find it by
computing the characteristic polynomial of the matrix representing ℒ relative
to any basis.

Since our scalars are complex numbers, χ(λ) can always be written as a product
of n factors of form (λ − λ_i), where the λ_i are the (possibly complex
valued) n roots of the equation χ(λ) = 0. Some of these n roots may be equal.
Assume that p of them are different. Then

    χ(λ) = (λ − λ_1)^(n_1) (λ − λ_2)^(n_2) ··· (λ − λ_p)^(n_p) .
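A modern illustration, not part of the 1966 original (Python with numpy
assumed): the coefficients of χ(λ) = det(λI − A) and its roots, the
eigenvalues, can be computed directly.

```python
import numpy as np

# Illustrative sketch: characteristic polynomial and eigenvalues of a matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.poly(A))             # coefficients of det(lambda I - A): [1., -4., 3.]
print(np.roots(np.poly(A)))   # its roots, the eigenvalues: [3., 1.]
```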
1.5. Cayley-Hamilton theorem

If χ is the characteristic polynomial of the linear mapping ℒ : V → V, then
χ(ℒ) = 𝒪.

Proof. According to sec. 1.3, point (iii), χ(ℒ) is a linear mapping; and if A
represents ℒ re some basis then χ(A) represents χ(ℒ) re that basis. According
to sec. 1.3, point (ii), we have to show that χ(A) is the null matrix.

Since χ(λ) = det(λI − A), one might think that a proof could be conducted as
follows: χ(A) = det(AI − A) = det(O) = 0. However, this is no good, since
det(···) is scalar-valued, but χ(A) is a matrix.

Let the matrix Q(λ) be the adjoint of (λI − A), that is,

    q_ij(λ) = cofactor of (λδ_ji − α_ji) in (λI − A)
            = polynomial in λ of degree at most (n−1);

so Q(λ) = Σ_{i=0..n-1} Q_i λ^i, where the Q_i are matrices not containing λ
(their entries are functions of the entries of A). Then for all λ we have

(1.5,1)    Q(λ)·(λI − A) = det(λI − A)·I = χ(λ)·I .

Write χ_A(λ) = Σ_{j=0..n} c_j λ^j, where c_n = 1. Then eq. (1.5,1) can be
written as

    (Σ_{i=0..n-1} Q_i λ^i)·(λI − A) = Σ_{j=0..n} c_j λ^j I    for all scalar λ.

Equating the coefficients of like powers of λ:

    −Q_0 A                = c_0 I          · I
    Q_0 − Q_1 A           = c_1 I          · A
    Q_1 − Q_2 A           = c_2 I          · A²
    ......................................
    Q_{n-2} − Q_{n-1} A   = c_{n-1} I      · A^(n-1)
    Q_{n-1}               = c_n I = I      · A^n

Multiply each equation on the right by the power of A indicated next to it,
and add: the left members telescope to the null matrix, so that

    0 = c_0 I + c_1 A + c_2 A² + ··· + c_n A^n = χ(A).    QED

Thus we have found an n-th degree polynomial, the characteristic polynomial χ
of the linear mapping ℒ : V → V (V n-dimensional), which at ℒ has the value 𝒪.
There are other polynomials, of degree greater than n, which also have the
value 𝒪 at ℒ; but it is far more interesting that some linear mappings ℒ
admit polynomials ψ of degree less than n for which ψ(ℒ) = 𝒪.

Calling a polynomial with at least one coefficient unequal to zero a
non-trivial polynomial, we have the following definition: the non-trivial
polynomial with leading coefficient 1 which is of least degree among all
polynomials that assume the value 𝒪 (or 0) at ℒ (or at A) is called the
minimum polynomial of the linear mapping ℒ (or of the matrix A).

Without proof we will cite the following statements:

    Every linear mapping ℒ (or matrix A) has a unique minimum polynomial.

    Every polynomial which at ℒ (or A) has the value 𝒪 (or 0) is divisible by
    the minimum polynomial (which points up a way to find the minimum
    polynomial from the characteristic polynomial).

    Similar matrices have identical minimum polynomials.

Example.

    A = [0  4  0  0            λI − A = [λ  −4   0   0]
         0  0  0  0                      [0   λ   0   0]
         0  0  0  1   ,                  [0   0   λ  −1]
         0  0  0  0]                     [0   0   0   λ]

and indeed det(λI − A) = χ(λ) = λ⁴; but already A² = 0. Hence, using the
second statement above, the minimum polynomial of A is λ².

Example.

    B = [2  2
         1  3] .

Find the characteristic polynomial χ_B(λ). Compute, number by number, the
matrix χ_B(B) and thus check the Cayley-Hamilton theorem. Using the second
statement in the above list, prove that the minimum polynomial of B is also
χ_B(λ).
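A modern check of the theorem, not part of the 1966 original (Python with
numpy assumed; the matrix B is just one example, any square matrix works):

```python
import numpy as np

# Illustrative sketch: substituting a matrix into its own characteristic
# polynomial yields the null matrix (Cayley-Hamilton).
B = np.array([[2.0, 2.0],
              [1.0, 3.0]])
coeffs = np.poly(B)                       # [1., -5., 4.] for this B
n = B.shape[0]
chi_B = sum(c * np.linalg.matrix_power(B, n - k) for k, c in enumerate(coeffs))
print(chi_B)                              # ~ [[0. 0.] [0. 0.]]
```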
2. CANONICAL REPRESENTATIONS *)

Again let V be an n-dimensional vector space. Our aim will now be to find a
basis of V such that the matrix of a linear mapping 𝒯 : V → V re this basis
has as "simple" a form as possible. An important role in this process will be
played by certain singular linear mappings, even though 𝒯 itself need not be
singular at all.

Singular mappings enter the picture through the following backdoor: let λ_i
(i = 1, ..., p) be the eigenvalues of 𝒯 **); define ℒ_i = 𝒯 − λ_i 𝒥
(i = 1, ..., p). Then, as a consequence of the definition of eigenvalue,
there exist non-null vectors v_i ∈ V such that

    (𝒯 − λ_i 𝒥) v_i = ℒ_i v_i = 0    (i = 1, ..., p; p ≤ n),

i.e., each ℒ_i is singular on V.

We will first show that

1) V is the direct sum of subspaces 𝔑_i (i = 1, ..., p) such that ℒ_i is
nilpotent on 𝔑_i and non-singular on ⊕_{j≠i} 𝔑_j;

2) each 𝔑_i is invariant under 𝒯, i.e., 𝒯𝔑_i ⊆ 𝔑_i (i = 1, ..., p).

The second property permits us to choose a basis in V such that the matrix of
𝒯 re this basis consists of diagonal blocks and zeros outside these blocks.
Since according to the first property ℒ_i is nilpotent on the subspace 𝔑_i
(i = 1, ..., p), the remaining problem is then to choose bases in the 𝔑_i
such that the corresponding matrices of the nilpotent mappings ℒ_i are
"simple". The solution of this problem will enable us to obtain "simple"
forms for the above-mentioned diagonal blocks.

*) Before starting on this chapter, the reader may want to at least glance
through the Appendix, especially sections A.3, A.4, A.5, in order to get
acquainted with or brush up on matters of definition and notation.

**) Some of the roots of the equation χ(λ) = 0 may be equal. Then the total
number, say p, of different eigenvalues is less than n.
2.1. A singular mapping on V is nilpotent on a certain subspace of V

Let V be an n-dimensional linear space. Let ℒ : V → V be singular; consider
the following sequences of null-spaces and image spaces:

    𝔑(ℒ), 𝔑(ℒ²), 𝔑(ℒ³), ... ;    ℜ(ℒ), ℜ(ℒ²), ℜ(ℒ³), ...

i) Whenever ⊂ holds as in 𝔑(ℒ^s) ⊂ 𝔑(ℒ^(s+1)), it follows (see section A.4,
under 'Dimension of subspace') that dim 𝔑(ℒ^s) < dim 𝔑(ℒ^(s+1)). Since
𝔑(ℒ^k) ⊆ V for all integer k > 0, these dimensions remain less than or equal
to n; hence the strict inequality signs cannot go on indefinitely, and
integers k must exist such that 𝔑(ℒ^k) = 𝔑(ℒ^(k+1)). Let m be the smallest
such k.

ii) 𝔑(ℒ^(m+s)) = 𝔑(ℒ^m) for all natural s.

Proof. Since 𝔑(ℒ^(m+1)) = 𝔑(ℒ^m), we have ℒ^(m+1) w = 0 whenever ℒ^m w = 0.
Hence, whenever ℒ^(m+s) v = ℒ^(m+1)(ℒ^(s-1) v) = 0, also
ℒ^(m+s-1) v = ℒ^m(ℒ^(s-1) v) = 0; and so on down the line:

    𝔑(ℒ^(m+s)) ⊆ 𝔑(ℒ^(m+s-1)) ⊆ ··· ⊆ 𝔑(ℒ^(m+1)) = 𝔑(ℒ^m).

Since trivially 𝔑(ℒ^t) ⊆ 𝔑(ℒ^(t+1)) for every t, all these inclusions are in
fact equalities.

iii) ℜ(ℒ²) = ℒ²V ⊆ ℒV = ℜ(ℒ), where the inclusion follows from ℒV ⊆ V.
Similarly ℜ(ℒ³) ⊆ ℜ(ℒ²), etc. So:

    ℜ(ℒ) ⊇ ℜ(ℒ²) ⊇ ℜ(ℒ³) ⊇ ···

Since dim ℜ(ℒ^k) + dim 𝔑(ℒ^k) = n for all integer k > 0 (cf. sec. A.5,
'Theorem'), it follows from points (i) and (ii) that
dim ℜ(ℒ^m) = dim ℜ(ℒ^(m+s)); together with ℜ(ℒ^(m+s)) ⊆ ℜ(ℒ^m) this gives
(cf. sec. A.4, 'Dimension of subspace')

    ℜ(ℒ^m) = ℜ(ℒ^(m+s))    for all integer s ≥ 0.

So ℒ(ℜ(ℒ^m)) = ℒ^(m+1) V = ℜ(ℒ^(m+1)) = ℜ(ℒ^m), i.e. (cf. sec. A.5, under
'Non-singular vs. singular'): ℒ is non-singular on ℜ(ℒ^m). Note the
definition of m in the above point (i).

iv) The intersection ℜ(ℒ^k) ∩ 𝔑(ℒ^k) is not necessarily restricted to {0},
even though dim ℜ(ℒ^k) + dim 𝔑(ℒ^k) = n. Let, for instance, 𝔑(ℒ) ⊂ 𝔑(ℒ²);
then there exists v ∈ V such that v ∈ 𝔑(ℒ²), v ∉ 𝔑(ℒ), i.e., ℒ²v = 0,
ℒv ≠ 0; yet obviously ℒv ∈ ℜ(ℒ) and ℒv ∈ 𝔑(ℒ). However, if m (as in point
(i)) is the smallest integer with 𝔑(ℒ^m) = 𝔑(ℒ^(m+1)), then
ℜ(ℒ^m) ∩ 𝔑(ℒ^m) = {0}.

Proof. Let w ∈ ℜ(ℒ^m) ∩ 𝔑(ℒ^m); then there is a v with ℒ^m v = w, so
0 = ℒ^m w = ℒ^(2m) v, i.e., v ∈ 𝔑(ℒ^(2m)) = 𝔑(ℒ^m) (since
𝔑(ℒ^(m+s)) = 𝔑(ℒ^m) for all s ≥ 0); hence w = ℒ^m v = 0. QED.

v) V = ℜ(ℒ^m) ⊕ 𝔑(ℒ^m), m defined as in points (i) and (iv).

Proof. ℜ(ℒ^m) is a linear subspace of V; 𝔑(ℒ^m) is a linear subspace of V;
dim ℜ(ℒ^m) + dim 𝔑(ℒ^m) = dim V; ℜ(ℒ^m) ∩ 𝔑(ℒ^m) = {0}. The claim follows
by sec. A.4: 'Sufficient and necessary condition for V = U_1 ⊕ U_2'.

An alternative way of describing the same result is, of course: V is the
direct sum of the set of all vectors in V annihilated by ℒ^m and the
ℒ^m-image of V.
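A modern illustration of point (i), not part of the 1966 original (Python
with numpy assumed): the smallest m with stabilising null-spaces can be found
from ranks of successive powers, since dim 𝔑(ℒ^k) = n − rank(L^k).

```python
import numpy as np

# Illustrative sketch: find the smallest m with rank(L^m) = rank(L^{m+1}),
# i.e. N(L^m) = N(L^{m+1}), for a singular matrix L.
L = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

m, P = 1, L.copy()
while np.linalg.matrix_rank(P) != np.linalg.matrix_rank(P @ L):
    m, P = m + 1, P @ L
print(m, np.linalg.matrix_rank(P))   # m = 2; dim R(L^2) = 1, dim N(L^2) = 2
```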
2.2. V is a direct sum of generalized eigenspaces

Each ℒ_i = 𝒯 − λ_i 𝒥 (i = 1, ..., p) is singular on V. Apply the results of
section 2.1 to ℒ_i: let m_i be the smallest integer with
𝔑(ℒ_i^(m_i)) = 𝔑(ℒ_i^(m_i + 1)) (the integers m_i for different i may be
different); write 𝔑_i = 𝔑(ℒ_i^(m_i)), ℜ_i = ℜ(ℒ_i^(m_i)) (the subspaces 𝔑_i
will be called generalized eigenspaces). Then: ℒ_i is nilpotent on 𝔑_i and
non-singular on ℜ_i, with ℒ_i ℜ_i = ℜ_i; 𝔑_i ∩ ℜ_i = {0}; V = 𝔑_i ⊕ ℜ_i;
n = dim 𝔑_i + dim ℜ_i.

In addition we will show:

i) 𝒯𝔑_i ⊆ 𝔑_i, i.e., 𝔑_i is invariant under 𝒯.

Proof. v ∈ 𝔑_i means ℒ_i^(m_i) v = 0; then
ℒ_i^(m_i)(ℒ_i v) = ℒ_i(ℒ_i^(m_i) v) = ℒ_i 0 = 0, so ℒ_i v ∈ 𝔑_i; and
𝒯v = λ_i v + ℒ_i v, a linear combination of vectors of 𝔑_i. So all 𝒯-images
of vectors in 𝔑_i belong to 𝔑_i.

ii) 𝒯ℜ_i ⊆ ℜ_i, i.e., ℜ_i is invariant under 𝒯.

Proof. By point (iii) of section 2.1: ℒ_i(ℜ(ℒ_i^(m_i))) = ℜ(ℒ_i^(m_i)), i.e.,
ℒ_i ℜ_i = ℜ_i. For v ∈ ℜ_i, 𝒯v = λ_i v + ℒ_i v, i.e., 𝒯v is a linear
combination of v (∈ ℜ_i) and some vector of ℜ_i; hence 𝒯v ∈ ℜ_i.

iii) ℒ_j 𝔑_i = 𝔑_i and ℒ_j^(m_j) 𝔑_i = 𝔑_i for i ≠ j.

Proof. We know that 𝒯𝔑_i ⊆ 𝔑_i, hence ℒ_j 𝔑_i = (𝒯 − λ_j 𝒥)𝔑_i ⊆ 𝔑_i, even
if i ≠ j. Hence if we can prove that ℒ_j is non-singular on 𝔑_i (i ≠ j), it
will follow that ℒ_j 𝔑_i = 𝔑_i and ℒ_j^(m_j) 𝔑_i = 𝔑_i.

In fact, assume v ∈ 𝔑_i = 𝔑(ℒ_i^(m_i)) and also v ∈ 𝔑(ℒ_j). Since
ℒ_i − ℒ_j = (λ_j − λ_i)𝒥 and ℒ_i, ℒ_j commute:

    (λ_j − λ_i)^(m_i) v = (ℒ_i − ℒ_j)^(m_i) v
        = ℒ_i^(m_i) v + Σ_{k=1..m_i} (−1)^k (m_i choose k) ℒ_i^(m_i − k) ℒ_j^k v ,

where the first term is zero since v ∈ 𝔑(ℒ_i^(m_i)), and the second term is
zero since v ∈ 𝔑(ℒ_j). Since λ_j ≠ λ_i, it follows that whenever
v ∈ 𝔑(ℒ_i^(m_i)) ∩ 𝔑(ℒ_j) then v = 0. Hence no non-null vector in 𝔑_i is
annihilated by ℒ_j = 𝒯 − λ_j 𝒥; i.e., ℒ_j is non-singular on 𝔑_i. (This
rather ingenious proof from Nering, p. 103, Th. 8.2.)

Now, obviously, since ℒ_j^(m_j) 𝔑_i = 𝔑_i (i ≠ j) it follows immediately that
ℒ_j^(m_j) ℒ_k^(m_k) 𝔑_i = 𝔑_i (i ≠ j, j ≠ k, i ≠ k). Generalization to more
"factors" of the form ℒ_j^(m_j) is obvious.

iv) The set of all vectors in V annihilated by ℒ_i^(m_i) is the same as the
set of all vectors in ℒ_{j1}^(m_{j1}) ℒ_{j2}^(m_{j2}) ··· ℒ_{jk}^(m_{jk}) V
annihilated by ℒ_i^(m_i); hence the latter set is still 𝔑_i (provided all
i, j_1, j_2, ..., j_k are different).

Proof. Because of point (iii) in this section,
𝔑_i = ℒ_{j1}^(m_{j1}) ··· ℒ_{jk}^(m_{jk}) 𝔑_i
    ⊆ ℒ_{j1}^(m_{j1}) ··· ℒ_{jk}^(m_{jk}) V ;
so 𝔑_i is contained in the latter space, and the vectors of that space
annihilated by ℒ_i^(m_i) form exactly 𝔑_i.

v) V = 𝔑_1 ⊕ 𝔑_2 ⊕ ··· ⊕ 𝔑_p ⊕ (ℒ_p^(m_p) ℒ_{p-1}^(m_{p-1}) ··· ℒ_1^(m_1) V).

Proof. According to point (v) of section 2.1 we have: if ℒ is singular on U
then U is the direct sum of the set of all vectors in U annihilated by ℒ^m
and the ℒ^m-image of U (where m is defined as in point (i) of section 2.1).
Now take V for U and ℒ_1 for ℒ: V = 𝔑_1 ⊕ ℒ_1^(m_1) V. Next take ℒ_1^(m_1) V
for U and ℒ_2 for ℒ; using the result of point (iv) in this section we find
ℒ_1^(m_1) V = 𝔑_2 ⊕ ℒ_2^(m_2) ℒ_1^(m_1) V. Now take ℒ_2^(m_2) ℒ_1^(m_1) V
for U and ℒ_3 for ℒ; etc. The claim made then follows from 'Corollary:
associativity' at the end of sec. A.4.

vi) V = 𝔑_1 ⊕ 𝔑_2 ⊕ ··· ⊕ 𝔑_p.

Proof. According to point (iii) of section 2.1, ℒ_i ℜ_i = ℜ_i, i.e.,
ℒ_i^(m_i) V = ℒ_i^(m_i + s) V for every s ≥ 0. By repeated application of
this result and of the commutativity of the "factors" in
ℒ_p^(m_p) ··· ℒ_1^(m_1), the exponents m_i may be raised at will; raising
each to n_i or beyond we get

    ℒ_p^(m_p) ··· ℒ_2^(m_2) ℒ_1^(m_1) V
        ⊆ (𝒯 − λ_p 𝒥)^(n_p) ··· (𝒯 − λ_2 𝒥)^(n_2) (𝒯 − λ_1 𝒥)^(n_1) V
        = χ(𝒯) V = {0} ,

since χ(𝒯) = 𝒪 is the value of the characteristic polynomial at 𝒯
(Cayley-Hamilton theorem, sec. 1.5). Thus the last summand in point (v) is
{0}, and V = 𝔑_1 ⊕ 𝔑_2 ⊕ ··· ⊕ 𝔑_p.
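A modern illustration of point (vi), not part of the 1966 original (Python
with numpy assumed): the dimensions of the generalized eigenspaces add up
to n.

```python
import numpy as np

# Illustrative sketch: for each eigenvalue, raise (A - lambda I) to powers
# until the null space stabilises; report m_i and dim N_i.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]
for lam in (2.0, 5.0):
    L = A - lam * np.eye(n)
    m, P = 1, L
    while np.linalg.matrix_rank(P) != np.linalg.matrix_rank(P @ L):
        m, P = m + 1, P @ L
    print(lam, m, n - np.linalg.matrix_rank(P))  # eigenvalue, m_i, dim N_i
# prints 2.0 2 2 and 5.0 1 1; indeed 2 + 1 = 3 = n
```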
2.3. A diagonal block representation of linear mappings

𝔑_i (i = 1, ..., p) was defined to be 𝔑(ℒ_i^(m_i)), with m_i defined as in
the beginning of section 2.2. But what is the dimension of 𝔑_i? We shall
answer this question at the same time as we find a diagonal block matrix
representation of 𝒯 re a particular type of basis.

Denote dim 𝔑_i by ν_i (i = 1, ..., p). Then because of sec. 2.2, pt. (vi),
and sec. A.4, under 'Dimension of a direct sum',

    Σ_{i=1..p} ν_i = n = dim V.

Introducing the notation

    μ_i = 0 if i = 1;    μ_i = Σ_{h=1..i-1} ν_h if i = 2, ..., p,

let {x_{μ_i+1}, x_{μ_i+2}, ..., x_{μ_i+ν_i}} be a basis of 𝔑_i
(i = 1, ..., p). Then because of sec. 2.2, pt. (vi),

    B = {x_1, ..., x_{ν_1}, x_{μ_2+1}, ..., x_{μ_2+ν_2}, ..., x_{μ_p+1}, ..., x_{μ_p+ν_p}}

is a basis of V. Since 𝒯𝔑_i ⊆ 𝔑_i (sec. 2.2, pt. (i)), we have

(2.3,1)    𝒯 x_h = Σ_{g=1..n} x_g α_gh    (h = 1, ..., n)

with α_gh = 0 unless (g, h) belongs to any one of the diagonal blocks
described by

    μ_i + 1 ≤ g ≤ μ_i + ν_i ,    μ_i + 1 ≤ h ≤ μ_i + ν_i    (i = 1, ..., p).

Hence the matrix representing 𝒯 re the basis B is of the form

    A = [A_1   0   ...   0 ]
        [ 0   A_2  ...   0 ]
        [ ⋮    ⋮         ⋮ ]
        [ 0    0   ...  A_p]

where A_i is a (ν_i × ν_i) matrix (i = 1, ..., p).

Consequently, according to section 1.4 the characteristic polynomial of 𝒯 is

    det(λI − A) = Π_{i=1..p} det(λI_i − A_i) ,

where I_i is the (ν_i × ν_i) identity matrix. Note that A_i is the matrix
representing the restriction of 𝒯 to 𝔑_i (cf. equation (2.3,1)), hence
det(λI_i − A_i) is the characteristic polynomial of this restriction of 𝒯 to
𝔑_i. On the other hand, (𝒯 − λ𝒥) is singular on 𝔑_i if and only if λ = λ_i
(as is easily seen by means of the argument used in sec. 2.2, pt. (iii)); so
the characteristic polynomial of the restriction of 𝒯 to 𝔑_i must be

    det(λI_i − A_i) = (λ − λ_i)^(ν_i)

(the degree of the characteristic polynomial of a linear mapping equals the
dimension of its space, here dim 𝔑_i). Hence, the characteristic polynomial
of 𝒯 is

    det(λI − A) = Π_{i=1..p} (λ − λ_i)^(ν_i) .

Therefore ν_i = dim 𝔑_i equals the exponent of the power of (λ − λ_i) in the
characteristic polynomial of 𝒯. (In sec. 1.4 we used n_i for these
exponents.)

We shall now show that the next step in finding a basis re which 𝒯 has a
'nice' matrix representation depends on the possibility of finding such a
basis for a nilpotent linear mapping. In fact, since A_i represents the
restriction of 𝒯 to 𝔑_i relative to a basis of 𝔑_i, we know that
(λ_i I_i − A_i) represents the restriction of (λ_i 𝒥 − 𝒯) relative to the
same basis. Now (λ_i 𝒥 − 𝒯) is nilpotent on 𝔑_i, because of
(λ_i 𝒥 − 𝒯)^(m_i) 𝔑_i = {0}; so the matrix λ_i I_i − A_i =def F_i is
nilpotent, more precisely: F_i^(m_i) = 0_i. Suppose we find a basis for 𝔑_i
re which the restriction of (𝒯 − λ_i 𝒥) to 𝔑_i has a nice matrix
representation, say M_i. We know that then a matrix C_i exists such that

    C_i^(-1) A_i C_i = λ_i I_i + M_i    (i = 1, ..., p),

hence

    [C_1      ]^(-1) [A_1      ] [C_1      ]   [λ_1 I_1 + M_1              ]
    [   ⋱     ]      [   ⋱     ] [   ⋱     ] = [        ⋱                  ]
    [      C_p]      [      A_p] [      C_p]   [            λ_p I_p + M_p  ] .

The matrix in the first member is obviously similar to A, that is, it
represents 𝒯 re a suitable basis. If the M_i are in a 'nice' form, then
certainly the λ_i I_i + M_i are too. So the next section is about a canonical
representation of nilpotent linear mappings. The section after that will
combine all pieces, and demonstrate how a canonical form can be constructed
for an arbitrary square matrix.
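A modern illustration of the block assembly above, not part of the 1966
original (Python with numpy and scipy assumed; the blocks A_i, C_i are made
up for the example):

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative sketch: once each block A_i is brought to lambda_i I + M_i by
# some C_i, the block-diagonal C = diag(C_1, ..., C_p) transforms A at once.
A1 = np.array([[3.0, 1.0], [-1.0, 1.0]])   # example restriction of T to N_1
A2 = np.array([[5.0]])                     # example restriction of T to N_2
C1 = np.array([[1.0, 0.0], [-1.0, 1.0]])   # brings A1 to 2*I + M1
C2 = np.array([[1.0]])

A = block_diag(A1, A2)
C = block_diag(C1, C2)
print(np.linalg.inv(C) @ A @ C)            # diag([[2,1],[0,2]], [5])
```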
2.4. A canonical representation of nilpotent linear mappings

Let U be an n-dimensional vector space and ℒ be a nilpotent linear mapping on
U into U, i.e., an integer k > 0 exists such that ℒ^k U = {0}. Note that a
nilpotent ℒ must be singular: if ℒU = U then ℒ^t U = U ≠ {0} for all t;
contradiction. Let, as before, m be the smallest integer with
𝔑(ℒ^m) = 𝔑(ℒ^(m+1)), so that

    𝔑(ℒ) ⊂ 𝔑(ℒ²) ⊂ ··· ⊂ 𝔑(ℒ^(m-1)) ⊂ 𝔑(ℒ^m) = 𝔑(ℒ^(m+s)) ⊆ U ,   s ≥ 0.

Clearly m is also the smallest integer for which 𝔑(ℒ^m) = U, so the
above-defined integer k equals m.

Now we will construct a basis of U so that the matrix of ℒ re that basis is
nice and simple. First observe that dim 𝔑(ℒ^m) = dim U = n. Next observe
that all s_i (i = 1, ..., m) defined below are strictly positive:

(2.4,1)    s_m = n − dim 𝔑(ℒ^(m-1)) > 0 ,
           s_i = dim 𝔑(ℒ^i) − dim 𝔑(ℒ^(i-1)) > 0    (i = 2, ..., m−1),
           s_1 = dim 𝔑(ℒ) > 0 .

Clearly Σ_{i=1..m} s_i = n.

Define J_1 as the 1 × 1 matrix [0]. Define J_k as the k × k matrix which has
all entries equal to 0 except those in the first superdiagonal, which
consists of a string of (k − 1) 1's. For example

    J_4 = [0  1  0  0]
          [0  0  1  0]
          [0  0  0  1]
          [0  0  0  0] .
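A modern aside, not part of the 1966 original (Python with numpy assumed):
the block J_k is one line of code, and its nilpotency J_k^k = 0 is immediate
to check.

```python
import numpy as np

# Illustrative sketch: the elementary nilpotent block J_k, i.e. zeros
# everywhere except (k-1) ones on the first superdiagonal.
def jordan_block_nilpotent(k):
    return np.eye(k, k, 1)     # identity shifted one column to the right

J4 = jordan_block_nilpotent(4)
print(J4)
print(np.linalg.matrix_power(J4, 4))   # the null matrix: J_k^k = 0
```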
We shall prove:

i) s_m ≤ s_(m-1) ≤ s_(m-2) ≤ ··· ≤ s_2 ≤ s_1 ;

ii) a basis of U exists such that the matrix of ℒ re this basis is of the
following form: along the diagonal, first s_m blocks J_m, then
(s_(m-1) − s_m) blocks J_(m-1), then (s_(m-2) − s_(m-1)) blocks J_(m-2), and
so on, finally (s_1 − s_2) blocks J_1; zeros everywhere outside these blocks.
Whenever s_i − s_(i+1) = 0 it is understood that there are no blocks J_i.

Proof. To simplify notation we will discuss a special case. Assume m = 4,
s_4 = 2, s_3 − s_4 = 1, s_2 − s_3 = 1, s_1 − s_2 = 2, so that s_3 = 3,
s_2 = 4, s_1 = 6 and n = Σ_i s_i = 15.

Since 𝔑(ℒ³) ⊂ 𝔑(ℒ⁴) = U, and dim 𝔑(ℒ³) is s_4 = 2 less than dim U, we know
(sec. A.4, 'Extension of a linearly independent set to a basis') that there
is a set (even an infinite number of such sets) of s_4 = 2 vectors ∈ 𝔑(ℒ⁴),
∉ 𝔑(ℒ³), which are mutually independent and no linear combination of which
∈ 𝔑(ℒ³). Let b_41 and b_42 be two such vectors. Then

1°) ℒ⁴ b_4i = 0, and ℒ^k b_4i ≠ 0 with ℒ^k b_4i ∈ 𝔑(ℒ^(4-k)), for
0 ≤ k < 4, i = 1, 2.

2°) The vectors in the first two columns of the following array are linearly
independent:

    b_41      b_42
    ℒ b_41    ℒ b_42    b_31
    ℒ²b_41    ℒ²b_42    ℒ b_31    b_21
    ℒ³b_41    ℒ³b_42    ℒ²b_31    ℒ b_21    b_11    b_12

Indeed, assume

    Σ_{i=0..3} α_41i ℒ^i b_41 + Σ_{i=0..3} α_42i ℒ^i b_42 = 0 .

Apply ℒ³ to both sides and find ℒ³(α_410 b_41 + α_420 b_42) = 0. Hence
α_410 b_41 + α_420 b_42 ∈ 𝔑(ℒ³); but b_41 and b_42 were explicitly assumed
to have the property that no linear combination would belong to 𝔑(ℒ³); so
α_410 = 0 = α_420. Now apply ℒ² to both sides and similarly find that
α_411 = 0 = α_421, etc. Claim under (2°) is proved.

ℒ b_41 and ℒ b_42 are linearly independent, they belong to 𝔑(ℒ³), not to
𝔑(ℒ²), and no linear combination of them belongs to 𝔑(ℒ²) (assume the
opposite, then ℒ³ b_41 and ℒ³ b_42 would be linearly dependent, which would
contradict point (2°) which was just proved). This proves that s_3 ≥ s_4.
In our case s_4 = 2, s_3 = 3. So there exists one other vector, say b_31
(again there is a huge choice), such that ℒ b_41, ℒ b_42, b_31 are linearly
independent, ∈ 𝔑(ℒ³), and no linear combination of them belongs to 𝔑(ℒ²).
The reader will prove

3°) ℒ^k b_31 ∈ 𝔑(ℒ^(3-k));

4°) ℒ^k b_31 ∉ 𝔑(ℒ^(2-k)) for 0 ≤ k ≤ 2 (𝔑(ℒ⁰) =def {0});

and that the vectors in the first three columns of the above array are
independent.

ℒ² b_41, ℒ² b_42 and ℒ b_31 are linearly independent, ∈ 𝔑(ℒ²), and no linear
combination of them belongs to 𝔑(ℒ). Therefore s_2 ≥ s_3. In our case
s_2 − s_3 = 1. So there exists one other vector which shares these features
with ℒ² b_41, ℒ² b_42, ℒ b_31; call it b_21. The reader will prove that the
vectors in the first four columns of the above array are linearly
independent.

Finally the fact that s_1 − s_2 = 2 entails that two more vectors, say b_11
and b_12, with b_11, b_12 ∈ 𝔑(ℒ), have to be taken in, in order that the
bottom line of this array may be a basis for 𝔑(ℒ).

As a result we have 15 linearly independent vectors, which therefore form a
basis of U: the 6 in the bottom line are a basis of 𝔑(ℒ); the 4 in the next
line complement this basis into a basis of 𝔑(ℒ²); the 3 in the next line
complement this basis into a basis of 𝔑(ℒ³); the 2 at the top complement
this basis into a basis of 𝔑(ℒ⁴) = U.

Note that the number of vectors in each row and in each column depends on
s_1, ..., s_m, i.e., on the dimensions of the null-spaces 𝔑(ℒ^i); not in the
least on the matrix representing ℒ. This means that the lengths of the rows
and columns of this array are the same for all matrices which are similar to
each other.

Now enumerate the vectors of the above array in the following order: first
column, bottom to top; second column, bottom to top, etc. These vectors
would be v_1, v_2, ..., v_15. (Note that the subscripts of the b are double
subscripts, those of the v are single: the subscript of b_21 is two-one; of
v_12 it would be twelve.) For instance ℒ b_21 would be v_12, and b_21 would
be v_13. The vectors now satisfy the following equations:

    ℒ v_1 = 0      ℒ v_5 = 0      ℒ v_9  = 0       ℒ v_12 = 0     ℒ v_14 = 0
    ℒ v_2 = v_1    ℒ v_6 = v_5    ℒ v_10 = v_9     ℒ v_13 = v_12  ℒ v_15 = 0
    ℒ v_3 = v_2    ℒ v_7 = v_6    ℒ v_11 = v_10
    ℒ v_4 = v_3    ℒ v_8 = v_7

Hence, in accordance with equation (1.1,1), the matrix [α_ij] representing ℒ
with respect to the basis {v_1, v_2, ..., v_15} is the block diagonal matrix

    diag(J_4, J_4, J_3, J_2, J_1, J_1) ,

i.e., the 15 × 15 matrix with 1 at the entries (1,2), (2,3), (3,4); (5,6),
(6,7), (7,8); (9,10), (10,11); (12,13), and 0 at all other entries.

Note. The actual construction of such a basis is harder than it might seem,
because of the condition that, e.g., no linear combination of b_41 and b_42
may belong to 𝔑(ℒ³). Although the existence of such vectors can be asserted
without having an actual basis of 𝔑(ℒ³) available, the checking requires
more doing (either one has first to choose bases for 𝔑(ℒ^k), k = 1, 2, ...
(cf. Nering, p. 104), against which this type of condition can then be
checked; or one selects e.g. b_41 and b_42 on a trial-and-error basis,
checking whether ℒ³(β_1 b_41 + β_2 b_42) = 0 indeed entails β_1 = 0 = β_2,
and if it does not, one chooses another b_41 and/or b_42).

Note. The Segre characteristic of a nilpotent mapping gives the subsequent
orders of the blocks J_k; in the above example (4, 4, 3, 2, 1, 1). The Weyr
characteristic of a nilpotent mapping is {s_1, s_2, s_3, ..., s_m}; in the
above example {6, 4, 3, 2}. Obviously the Weyr characteristic follows from
the row lengths of our above array of vectors, the Segre characteristic from
its column lengths.
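A modern aside, not part of the 1966 original (Python with numpy assumed):
the Weyr characteristic can be read off from ranks of powers, since
dim 𝔑(ℒ^i) = n − rank(L^i); the Segre characteristic is its conjugate
partition.

```python
import numpy as np

# Illustrative sketch: Weyr characteristic s_i = dim N(L^i) - dim N(L^{i-1})
# of a nilpotent matrix.
L = np.eye(5, 5, 1)            # a single block J_5, so Segre = (5)
n = L.shape[0]
dims, P = [0], np.eye(n)
while dims[-1] < n:
    P = P @ L
    dims.append(n - np.linalg.matrix_rank(P))    # dim N(L^i)
weyr = [dims[i] - dims[i - 1] for i in range(1, len(dims))]
print(weyr)                    # [1, 1, 1, 1, 1] for a single J_5
```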
2.5. The Jordan normal form of an n × n matrix

We shall now combine the results of sections 2.3 and 2.4. Let 𝒯 be a linear
transformation mapping an 8-dimensional linear space into itself, with
characteristic polynomial

    χ(λ) = (λ + 1)⁵ (λ − 2)³ ,

i.e., λ_1 = −1, λ_2 = 2, n_1 = 5, n_2 = 3. The so-called Jordan normal
matrix of 𝒯 has then for its main diagonal

    −1, −1, −1, −1, −1, 2, 2, 2 ,

in keeping with the formula at the end of section 2.3.

The first diagonal block (5 × 5) will then be completed by adding M_1 to it,
where M_1 is a 'nice' representation of the restriction of 𝒯 − λ_1 𝒥
(= 𝒯 + 𝒥 in our case) to 𝔑_1. We saw that M_1 = A_1 − λ_1 I_1 = A_1 + I_1.
Since 𝒯 − λ_1 𝒥 (= 𝒯 + 𝒥) is nilpotent on the 5-dimensional linear space
𝔑_1 (which has the role of U in section 2.4), M_1 can be found by applying
sec. 2.4: M_1 will consist of zeros except possibly for some 1's on the
superdiagonal, depending on the value of m_1 (i.e., the smallest k with
(𝒯 + 𝒥)^k 𝔑_1 = {0}) and on the dimensions of the null-spaces of the powers
of the restriction of 𝒯 − λ_1 𝒥 to 𝔑_1.

Suppose in our case m_1 = 5. Then dim 𝔑((𝒯 + 𝒥)^i) = i, i.e., s_i = 1 for
1 ≤ i ≤ 5, so

    M_1 = [0  1  0  0  0]
          [0  0  1  0  0]
          [0  0  0  1  0]
          [0  0  0  0  1]
          [0  0  0  0  0] .

Suppose instead m_1 = 2, and dim 𝔑(𝒯 + 𝒥) = 3. Show that now

    M_1 = [0  1  0  0  0]
          [0  0  0  0  0]
          [0  0  0  1  0]
          [0  0  0  0  0]
          [0  0  0  0  0] .

Note the scheme connecting Segre and Weyr characteristics (rendered in the
original as dot diagrams whose row lengths give the Weyr and whose column
lengths give the Segre characteristic): in the first case Weyr {1,1,1,1,1},
Segre (5); in the second case Weyr {3, 2}, Segre (2, 2, 1).

As to the second diagonal block (3 × 3), it will have to be completed by
adding M_2 to it, where M_2 is a nice representation of the restriction of
𝒯 − λ_2 𝒥 (= 𝒯 − 2𝒥) to 𝔑_2. Suppose m_2 = 1, i.e., (𝒯 − 2𝒥)𝔑_2 = {0}.
Then M_2 = 0, dim 𝔑_2 = 3 = s_1, and the above pattern is: Weyr {3}, Segre
(1, 1, 1).

A transformation 𝒯 which combines m_1 = 5 for λ_1 = −1 with m_2 = 1 for
λ_2 = 2 has Jordan normal representation:

    [−1  1  0  0  0  0  0  0]
    [ 0 −1  1  0  0  0  0  0]
    [ 0  0 −1  1  0  0  0  0]
    [ 0  0  0 −1  1  0  0  0]
    [ 0  0  0  0 −1  0  0  0]
    [ 0  0  0  0  0  2  0  0]
    [ 0  0  0  0  0  0  2  0]
    [ 0  0  0  0  0  0  0  2] .

A transformation 𝒯 which combines m_1 = 2 (with dim 𝔑(𝒯 + 𝒥) = 3) for
λ_1 = −1 with m_2 = 1 for λ_2 = 2 has Jordan normal representation:

    [−1  1  0  0  0  0  0  0]
    [ 0 −1  0  0  0  0  0  0]
    [ 0  0 −1  1  0  0  0  0]
    [ 0  0  0 −1  0  0  0  0]
    [ 0  0  0  0 −1  0  0  0]
    [ 0  0  0  0  0  2  0  0]
    [ 0  0  0  0  0  0  2  0]
    [ 0  0  0  0  0  0  0  2] .

Exercise. What is the Jordan normal form of the matrix

    [−1  1  2  1]
    [ 1  0  0  2]
    [ 0  1  0  1]
    [ 0  0  1  3]

and of the matrix

    [3  0  0  0  0]
    [0  3  0  1 −1]
    [0  0  3 −1  0]
    [0  0  0  2  0]
    [0  0  0  0  2]   ?
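A modern aside for checking such exercises, not part of the 1966 original
(Python with sympy assumed):

```python
import sympy as sp

# Illustrative sketch: sympy computes the Jordan normal form exactly;
# jordan_form() returns P and J with A = P*J*P^{-1}.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 5]])
P, J = A.jordan_form()
sp.pprint(J)                     # the blocks [[2,1],[0,2]] and [5]
assert A == P * J * P.inv()
```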
APPENDIX
The principal purpose of this appendix is to make this report to a certain
degree self-contained. To this end we have listed axioms, definitions, and
standard results in so far as they have been used in the main part of this
report. Sometimes a proof is provided, sometimes a plausibility argument or a
heuristic motivation. For complete proofs see the textbooks listed in the
section on literature.

Since in the definition of linear spaces the algebraic concept of a field
plays an essential role, section A.1 discusses the axioms of fields. The
field needed for the derivation of the Jordan normal form consists of the
complex numbers. These have many additional properties, of which a few (viz.,
those needed in sections A.3 and A.4) are given in section A.2. Then section
A.3 gives the axioms for linear spaces, section A.4 discusses relevant
properties of linear spaces, section A.5 of linear mappings.
A.1. Fields

Denote the set of complex numbers by Φ. The system of complex numbers has the
following properties.

I.1. A mapping Φ × Φ → Φ, called addition, is defined. For any α_1 ∈ Φ,
α_2 ∈ Φ their sum also ∈ Φ. Notation: α_1 + α_2 = β.

I.2. Associative law of addition. For any α_1 ∈ Φ, α_2 ∈ Φ, α_3 ∈ Φ:
(α_1 + α_2) + α_3 = α_1 + (α_2 + α_3).

I.3. Existence of solution of certain equations: for any α ∈ Φ, β ∈ Φ there
exists at least one ξ ∈ Φ with α + ξ = β, and at least one η ∈ Φ with
η + α = β.

Corollaries.

a: existence of at least one right additive identity element, 0_r ∈ Φ; i.e.,
for any α ∈ Φ there is at least one 0_r ∈ Φ with α + 0_r = α.

Note. Here the possibility is left open that there might be more than one
right identity element for a given element of Φ, or different ones for
different elements.

b: existence of at least one left additive identity element, 0_l ∈ Φ; i.e.,
for any α ∈ Φ there is at least one 0_l ∈ Φ with 0_l + α = α.

Note. Similar to the preceding note.

c: However, because in I.3 the existence of solutions to both equations is
postulated, a right (left) additive identity element for one element of Φ is
a right (left) additive identity element for all elements. Indeed, let 0_r be
a right identity for α_1: α_1 + 0_r = α_1; let η be a solution to
η + α_1 = α_2; then α_2 + 0_r = η + α_1 + 0_r = η + α_1 = α_2. Similar proof
for 0_l.

Now we shall prove:

d: uniqueness of additive identity element: there is only one right identity,
only one left identity, and they are the same, denoted by 0, called zero.
Indeed, let 0'_r and 0''_r be two right identity elements, and 0_l some left
identity element; then because of corollary c (elaborate!)
0'_r = 0_l + 0'_r = 0_l and 0''_r = 0_l + 0''_r = 0_l, hence 0'_r = 0''_r.
A similar proof holds for the left identities. Finally,
0_l = 0_l + 0_r = 0_r = 0, say, is the additive identity element.

e: existence of additive inverses: take β = 0 in I.3.

f: uniqueness of additive inverse: for any α ∈ Φ there is only one right
additive inverse, only one left additive inverse, and the two are the same,
denoted by (−α). In fact, let α + (−α)' = 0, α + (−α)'' = 0, and
(−α)''' + α = 0; then
(−α)' = 0 + (−α)' = (−α)''' + α + (−α)' = (−α)''' + 0 = (−α)''', and likewise
(−α)'' = (−α)'''; hence (−α)' = (−α)''. Similar proof for uniqueness of the
left additive inverse. Finally:
(−α)_l = (−α)_l + 0 = (−α)_l + α + (−α)_r = 0 + (−α)_r = (−α)_r = (−α).

Note. The right and left inverse are the same, even without commutativity.

Exercise. Prove that (−(α + β)) = (−β) + (−α). Prove that (−(−α)) = α.

Notation: α − β =def α + (−β). Show that α − α = 0.

g: uniqueness of solution of α + ξ = β; uniqueness of solution of η + α = β.
Indeed, given α ∈ Φ, β ∈ Φ, let α + ξ' = β = α + ξ''. Then
(−α) + α + ξ' = (−α) + β = (−α) + α + ξ'', so ξ' = ξ'' = (−α) + β. Similarly
η' = η'' = β + (−α).

Note. Without commutativity ξ need not be equal to η.

Note. The three postulates I.1, I.2, I.3 are those of a(n additive) group.

I.4. Commutative law of addition. For all α_1 ∈ Φ, α_2 ∈ Φ:
α_1 + α_2 = α_2 + α_1.

Note. The four postulates I.1, I.2, I.3, I.4 are those of a commutative
group, also called an Abelian group (under addition).

Complex numbers can also be multiplied:

II.1. A mapping Φ × Φ → Φ, called multiplication, is defined. Notation:
α_1 α_2 = γ (alternative: α_1·α_2). For any α_1 ∈ Φ, α_2 ∈ Φ their product
also ∈ Φ.

II.2. Associative law of multiplication. For any α_1 ∈ Φ, α_2 ∈ Φ, α_3 ∈ Φ:
(α_1 α_2) α_3 = α_1 (α_2 α_3).

The addition and multiplication interact according to:

III. Distributive law. For any α_1 ∈ Φ, α_2 ∈ Φ, α_3 ∈ Φ:

III.1. α_1 (α_2 + α_3) = α_1 α_2 + α_1 α_3 ;
III.2. (α_2 + α_3) α_1 = α_2 α_1 + α_3 α_1 .

Corollaries.

a: α(β − γ) = αβ − αγ for any α ∈ Φ, β ∈ Φ, γ ∈ Φ. Indeed, both members can
be shown to satisfy η + αγ = αβ. (Use the definition given of the notation
α − β at the end of I.3.) Similarly (β − γ)α = βα − γα, for both members can
be shown to satisfy η + γα = βα.

b: α·0 = 0 for any α ∈ Φ. Indeed, α·0 = α(β − β) = αβ − αβ = 0.

c: 0·α = 0 for any α ∈ Φ. Indeed, 0·α = (β − β)α = βα − βα = 0.

d: For any α ∈ Φ, β ∈ Φ: (−α)·β = −(αβ), for both are the additive inverse
of αβ; α·(−β) = −(αβ), by the same argument; and (−α)·(−β) = αβ.

Note. The postulates I.1, I.2, I.3, I.4, II.1, II.2, III are those of a ring.
If for α ∈ Φ, β ∈ Φ the product αβ cannot be zero unless either α = 0 or
β = 0, the ring is said to be without zero divisors. If multiplication is
also commutative (postulate II.4 below), the ring is said to be commutative.
A commutative ring without zero divisors is often called an integral domain.

II.3. The set Φ \ {0} = Φ', say, is not empty and is a group under
multiplication; that is: for any α ∈ Φ', β ∈ Φ', there exists at least one
ξ ∈ Φ' with αξ = β, and at least one η ∈ Φ' with ηα = β.

Note. The restriction to Φ' is essential: if α ≠ 0, β = 0, corollary b to
III shows that only ξ = 0, η = 0 (which do not belong to Φ') could satisfy
these equations; if α = 0, β ≠ 0, no ξ or η at all can (same corollary b);
if α = 0, β = 0, every ξ and η satisfies them. So neither α nor β can be
allowed to be zero.

Corollaries.

a: existence of at least one right multiplicative identity element,
1_r ∈ Φ'; i.e., for any α ∈ Φ' there is at least one 1_r ∈ Φ' with
α·1_r = α. (Note: see note in I.3, a.)

b: existence of at least one left multiplicative identity element, 1_l ∈ Φ';
i.e., for any α ∈ Φ' there is at least one 1_l ∈ Φ' with 1_l·α = α.
(Note: see note in I.3, b.)

c: Now because in II.3 the existence of solutions to both equations is
postulated: a right (left) multiplicative identity element for one element of
Φ' is a right (left) multiplicative identity element for all elements of Φ'.
Proof: replica of I.3, c, with · instead of +, and 1 instead of 0.

d: uniqueness of multiplicative identity element: there is only one right
identity, only one left identity, and they are the same, denoted by 1, called
unity. Proof: replica of I.3, d, with · instead of +, and 1 instead of 0.

e: for any α ∈ Φ' there is at least one right multiplicative inverse,
α_r^(-1), such that α·α_r^(-1) = 1, and at least one left multiplicative
inverse, α_l^(-1), such that α_l^(-1)·α = 1.

f: uniqueness of multiplicative inverse: for any α ∈ Φ' there is only one
right multiplicative inverse, only one left multiplicative inverse, and the
two are the same, denoted by α^(-1). The reader will model the proof after
I.3, f.

Note. The right and left inverse are the same, even without commutativity.

Exercise. Prove that (αβ)^(-1) = β^(-1)·α^(-1).

Notation. α/β =def α·β^(-1). Show that α/α = 1. Prove that (α/β)·β = α.

No zero divisors. Let αβ = 0 and assume α ≠ 0, β ≠ 0 (i.e., assume our ring
has zero divisors). Then β = α^(-1)·αβ = α^(-1)·0 = 0; contradiction.

Note. A ring satisfying postulate II.3 is called a division ring or a
skew-field. We saw that a skew-field cannot have zero divisors.

II.4. Commutative law of multiplication. For all α_1 ∈ Φ, α_2 ∈ Φ:
α_1 α_2 = α_2 α_1.

Note. A skew-field satisfying postulate II.4 is called a field or a rational
domain. (Some authors use field instead of skew-field, and commutative field
instead of field.) A ring satisfying postulate II.4 is a commutative ring.

Note. The postulates I, II, III are shared by complex numbers, real numbers,
rational numbers. Of the various properties which Φ has in addition to I,
II, III, we will discuss only those cited in the discussion of linear spaces
and those used in deriving the Jordan normal form.
A.2. Complex numbers

The field axioms listed in section A.1 are satisfied by the systems of
rational numbers, of real numbers, and of complex numbers, to name a few
examples. We have assumed that the reader is familiar with these various
types of numbers. In the remaining sections of the appendix and in the main
part of this report we need only the following two properties of complex
numbers:

1) The field of complex numbers is algebraically closed, i.e., for any
positive integer n and for any polynomial P_n, say, of the n-th degree, there
exist p (≤ n) complex numbers ζ_1, ζ_2, ..., ζ_p and p positive integers
n_1, n_2, ..., n_p with Σ_{k=1..p} n_k = n, such that for all complex z

(**)    P_n(z) = γ · Π_{k=1..p} (z − ζ_k)^(n_k) .

In fact, according to the 'Fundamental Theorem of Algebra' all that needs to
be postulated, in order that this equality may hold for all polynomials P_n,
is that the equation z² + 1 = 0 has at least one root: this implies the
validity of equality (**).

2) A one-to-one correspondence exists between the set of complex numbers and
the set of ordered pairs of real numbers. The traditional notation for a
complex number z is z = x + iy, where x ∈ R, y ∈ R and i² = −1. Let
z_1 = x_1 + iy_1, z_2 = x_2 + iy_2; then

    z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2) ,
    z_1 z_2   = (x_1 x_2 − y_1 y_2) + i(x_1 y_2 + x_2 y_1) .
A.3. Linear spaces

A linear space (or alternatively: vector space) is made up of a set V whose
elements (denoted by lower case Latin letters) are called vectors, a field,
in our case Φ, whose elements (denoted by lower case Greek letters) are
called scalars, and certain rules according to which the vectors may be
related to each other and to the scalars. Following a common abuse of
notation we will use the single letter V not only for the set of vectors,
but also for the linear space. More precisely, we say that V is a linear
space over the field Φ iff:

I.1. A mapping V × V → V, called (vector) addition, is defined. For any
v_1 ∈ V, v_2 ∈ V their sum also ∈ V. Again following a common abuse of
notation, we will use the same sign, +, for the addition of vectors as for
the addition of scalars. No confusion will arise, and certain formulae will
even look simpler for it.

I.2. Associative law of (vector) addition. For any v_1 ∈ V, v_2 ∈ V,
v_3 ∈ V: (v_1 + v_2) + v_3 = v_1 + (v_2 + v_3).

I.3. Existence of solution of certain equations: for any a ∈ V, b ∈ V there
exists at least one x ∈ V with a + x = b, and at least one y ∈ V with
y + a = b.

Corollaries (proofs same as in sec. A.1, under I.3; here we cite only the
most important ones):

a: unique identity element: there is one and only one element ∈ V, denoted
by 0, called the null vector, such that for any a ∈ V: a + 0 = a and
0 + a = a.

b: unique inverse: for any a ∈ V there is one and only one element, written
(−a), such that a + (−a) = 0 and (−a) + a = 0.

Notation: a − b =def a + (−b).

c: uniqueness of solution of a + x = b; uniqueness of solution of y + a = b.

I.4. Commutativity of (vector) addition. For any v_1 ∈ V, v_2 ∈ V:
v_1 + v_2 = v_2 + v_1.

Note. The postulates I.1, I.2, I.3 and I.4 together obviously require V to
be a commutative group under addition.

II.1. A mapping Φ × V → V, called scalar multiplication, is defined.
Notation: given α ∈ Φ, v ∈ V, the value of the mapping is αv (sometimes for
convenience written as vα). This mapping satisfies the conditions:

II.2. α(x + y) = αx + αy for any α ∈ Φ, x ∈ V, y ∈ V.

Corollaries:

a: α0 = 0 for any α ∈ Φ. In fact, x = x + 0, so αx = α(x + 0) = αx + α0,
hence α0 = 0.

b: α(−x) = −(αx) for any α ∈ Φ, x ∈ V. In fact,
0 = α0 = α(x + (−x)) = αx + α(−x). So α(−x) is the additive inverse of αx:
α(−x) = −(αx).

II.3. (α + β)x = αx + βx for any α ∈ Φ, β ∈ Φ, x ∈ V.

Corollaries:

a: 0x = 0 for any x ∈ V. In fact, 0x = (0 + 0)x = 0x + 0x, so 0x = 0.

b: (−α)x = −(αx) for any α ∈ Φ, x ∈ V. In fact,
0 = 0x = (α + (−α))x = αx + (−α)x. So (−α)x is the additive inverse of αx:
(−α)x = −(αx).

II.4. α(βx) = (αβ)x for any α ∈ Φ, β ∈ Φ, x ∈ V.

II.5. 1x = x for any x ∈ V.

Corollary: (−1)x = −x for any x ∈ V. Indeed, by corollary b of II.3 above:
(−1)x = −(1x) = −x.

Note. Point II.1 above postulates that a mapping Φ × V → V, called scalar
multiplication, is defined, and II.2 through II.5 proceed to postulate
certain properties of this mapping. This does not clarify yet how to define
scalar multiplication in specific instances of vector spaces V over fields
Φ. One example: let V be the space of ordered triples of complex numbers
(and Φ be the complex number system). Now let α ∈ Φ,
(ζ_1, ζ_2, ζ_3) ∈ V; then the definition

    α(ζ_1, ζ_2, ζ_3) = (αζ_1, αζ_2, αζ_3)

is easily seen to satisfy all postulates II.1 through II.5.
A.4. Basis, dimension, direct sum

Linear combination of k vectors. Given k vectors x_i ∈ V (i = 1, ..., k),
then α_1 x_1 + ··· + α_k x_k (α_i ∈ Φ) is said to be a linear combination of
x_1, x_2, ..., x_k. (If the field Φ consists of the rational numbers only,
the coefficients α_i can only be rational numbers.) Prove from the
definition of linear space: if x_i ∈ V, then Σ α_i x_i ∈ V.

Subspace. If V is a linear space, not every subset U of V is a linear space.
[Two figures of the original, not reproduced here, show V as the set of all
points (x, y) in the plane, a linear space over the field of real numbers;
one subset shown, a line through the origin, is a subspace, while the other,
a set of 3 points, is not.] U is said to be a (linear) subspace of V iff
U ⊆ V and U is itself a linear space (over the same field as V); hence U
contains all linear combinations of its own vectors.

Subspace spanned. Let U be a subspace of V, and let B be a finite set of
vectors in V. Then one says: U is spanned by B (U is spanned by the vectors
in B; B spans U; the vectors in B span U) iff (1) each linear combination of
vectors ∈ B is a vector in U, and (2) each vector in U is equal to at least
one linear combination of vectors ∈ B.

Example: V same as above; B = {x_1, x_2, x_3} consists of three points of
the plane, not all on one line through the origin. Now B spans V: any point
of the plane is equal to any one of a number of linear combinations of these
three vectors; thus α_i and β_i can be found, not all pairwise equal, such
that

    α_1 x_1 + α_2 x_2 + α_3 x_3 = β_1 x_1 + β_2 x_2 + β_3 x_3 .

Linear independence. In order that each vector in V equal exactly one linear
combination of the k vectors x_1, x_2, ..., x_k in B, it is necessary (why
not sufficient?) that γ_1 x_1 + γ_2 x_2 + ··· + γ_k x_k = 0 implies
γ_1 = 0 = γ_2 = ··· = γ_k. This occurrence has received a special name.
Consider a set B of k vectors. The k vectors x_1, x_2, ..., x_k are said to
be linearly independent (B is said to be linearly independent) iff

    γ_1 x_1 + ··· + γ_k x_k = 0   implies   γ_1 = 0 = γ_2 = ··· = γ_k ,

i.e., iff the only linear combination of these k vectors which equals 0 is
the trivial one (all coefficients zero). The null vector 0 is never an
element of a linearly independent set (why?).

Basis. A finite set B of vectors is said to be a basis of the linear space
U ⊆ V iff B spans U and B is linearly independent. So each vector of U is a
unique linear combination of the vectors in the basis. The coefficients in
that linear combination are called the coordinates relative to that basis.
Let e_1, e_2, ..., e_k be the vectors in some basis of U. If we have proved
that some vector of U equals both Σ_i α_i e_i and Σ_i β_i e_i, we can
immediately conclude that α_i = β_i (i = 1, ..., k). Note. No basis ever
contains the null vector (why?).

Variety of bases. First example. The linear space with ordered pairs of real
numbers as vectors, and real numbers as scalars, has a basis
B' = {(3, 0), (0, −1)}, but also a basis B'' = {(1, 3), (−1, 1)}. Second
example. The linear space with ordered triples of real numbers as vectors,
and real numbers as scalars, has a basis
B^x = {(1,0,0), (0,1,0), (0,0,1)} and a basis
B^y = {(1,1,1), (0,1,1), (0,0,1)}. However, once a basis has been chosen,
the coordinates of any vector re that basis are uniquely determined. Find
the coordinates of the vector (5, 4, 3) re B^x and re B^y.

Dimension. It can be proved that if a linear space V has one finite basis
consisting of n vectors, then all its bases consist of n vectors. V is then
said to be n-dimensional; notation: dim V = n.

Dimension depends on field. If V is the set of vectors of a linear space,
then the dimension of the linear space may very well depend on the field Φ.
Example: let V be the set of ordered triples of complex numbers. Show that
{(1,0,0), (0,1,0), (0,0,1)} is a basis for the linear space with vectors in
V and scalars in the set of complex numbers. However, {(1,0,0), (i,0,0),
(0,1,0), (0,i,0), (0,0,1), (0,0,i)} is a basis for the linear space with the
same set of vectors, but with real numbers for scalars.

Extension of a linearly independent set to a basis. Let A be a set of k
linearly independent vectors in an n-dimensional vector space V. If k < n,
then n − k additional vectors exist which together with the k vectors in A
form a basis of V. These additional vectors are by no means unique (think of
examples!). It is clear that no linear combination of the n − k additional
vectors can be equal to any linear combination of the k vectors in A.

Dimension of subspace. Let U be a subspace of a finite-dimensional linear
space V. Then dim U ≤ dim V, and dim U = dim V iff U = V. If U ⊂ V then V
must contain at least one vector not in U; thus we find that a basis of V
must have at least one more vector than any basis of U. (For complete proof
see Halmos, p. 18.)

Corollary. Let U ⊆ V be a subspace of V, dim V = n, dim U = k < n. Then let
A be a basis of U. Applying 'extension of a linearly independent set to a
basis' we find there are (n − k) linearly independent vectors which together
with A form a basis of V; no linear combination of them belongs to U.

Sum of two linear spaces. Let U_1 and U_2 be two linear spaces. U_1 + U_2 is
defined to be the set of all vectors x_1 + x_2 with x_1 ∈ U_1 and x_2 ∈ U_2;
it is itself a linear space.

If {a_1, a_2, ..., a_k} is a basis of U_1 and {b_1, b_2, ..., b_l} is a
basis of U_2, then {a_1, a_2, ..., a_k, b_1, b_2, ..., b_l} spans U_1 + U_2.
Of course, {a_1, a_2, ..., a_k, b_1, b_2, ..., b_l} need not be a basis
(e.g., let U_1 and U_2 be two planes through the origin in R³). If
{a_1, ..., a_k, b_1, ..., b_l} is not a basis, then γ_i and δ_j exist with

(*)    Σ_{i=1..k} γ_i a_i = Σ_{j=1..l} δ_j b_j

such that at least one γ_i and one δ_j are not zero (why 'and'?). So in this
case U_1 ∩ U_2 contains non-null vectors (Σ_i γ_i a_i ∈ U_1;
Σ_j δ_j b_j ∈ U_2). If, however, U_1 ∩ U_2 = {0}, then the only possibility
for (*) to hold true is that all γ_i and all δ_j are zero, and
{a_1, ..., a_k, b_1, ..., b_l} is a basis. This motivates the distinction of
the joint occurrence of V' = U_1 + U_2 and U_1 ∩ U_2 = {0} by a special
term:

Direct sum of two linear spaces. V' is said to be the direct sum of U_1 and
U_2 (notation: V' = U_1 ⊕ U_2) iff V' = U_1 + U_2 and U_1 ∩ U_2 = {0}.

Uniqueness of vector decomposition relative to direct sums. Let
V' = U_1 + U_2. Then in order that this sum be direct, it is necessary and
sufficient that to each v ∈ V' there corresponds only one vector x_1 ∈ U_1
and only one vector x_2 ∈ U_2 such that v = x_1 + x_2.

Indeed, suppose v = x_1 + x_2 = x'_1 + x'_2. Then x_1 − x'_1 = x'_2 − x_2, a
vector which clearly belongs to both U_1 and U_2. So, if U_1 ∩ U_2 = {0}, it
follows that x_1 − x'_1 = 0 = x'_2 − x_2, which proves uniqueness.
Conversely (reductio ad absurdum): if the sum is not direct, i.e., if
U_1 ∩ U_2 contains a vector other than 0, say u, then
v = x_1 + x_2 = (x_1 + u) + (x_2 − u), which proves non-uniqueness, since
x_1 + u ∈ U_1 and x_2 − u ∈ U_2.

Variety of direct sums. Although we just showed that each vector belonging
to a direct sum U_1 ⊕ U_2 is the sum of a uniquely determined vector
x_1 ∈ U_1 and a uniquely determined vector x_2 ∈ U_2, the reader should be
cautioned that the choice of the subspaces U_1 and U_2 themselves is by no
means unique. Not only does V' = U_1 ⊕ U_2 = U'_1 ⊕ U'_2 not imply
U'_1 = U_1 and U'_2 = U_2; but even V' = U_1 ⊕ U_2 = U_1 ⊕ U'_2 does not
imply U'_2 = U_2. The situation is entirely similar to the one described
under 'Variety of bases'. In fact, saying that {x_1, x_2} is a basis of a
linear space V' is tantamount to saying that V' = (x_1) ⊕ (x_2), where (x_i)
is the one-dimensional linear space with basis {x_i}. The reader may clarify
the situation to himself by drawing some pictures.

Dimension of a direct sum. If V is a (finite-dimensional) linear space,
U_1 ⊆ V is a linear space with dim U_1 = n_1, U_2 ⊆ V is a linear space with
dim U_2 = n_2, and V = U_1 ⊕ U_2, then
dim V = n_1 + n_2 = dim U_1 + dim U_2.

Proof. Since dim U_i = n_i, a set {x_i1, x_i2, ..., x_in_i} of n_i linearly
independent vectors exists which is a basis of U_i (i = 1, 2). Given v ∈ V,
the preceding result shows that exactly one x_1 ∈ U_1 and exactly one
x_2 ∈ U_2 exist such that v = x_1 + x_2. Now x_i can be represented
(uniquely) as a linear combination of basis vectors in U_i (i = 1, 2). So v
can be represented (uniquely) as a linear combination of the vectors
x_11, x_12, ..., x_1n_1, x_21, x_22, ..., x_2n_2. Hence these vectors do
span V. They are also linearly independent, as follows along the lines of
the argument around equation (*) above. Consequently the vectors
x_11, ..., x_1n_1, x_21, ..., x_2n_2 form a basis of V, i.e.,
dim V = n_1 + n_2.

Sufficient and necessary condition for V = U_1 ⊕ U_2. If U_1 and U_2 are
finite-dimensional linear spaces ⊆ V, U_1 ∩ U_2 = {0}, and
dim V = dim U_1 + dim U_2, then V = U_1 ⊕ U_2. The converse is obviously
true.

Proof. Let dim U_1 = n_1, dim U_2 = n_2. Then a basis
{x_11, x_12, ..., x_1n_1} of U_1 and a basis {x_21, x_22, ..., x_2n_2} of
U_2 exist. The set {x_11, x_12, ..., x_1n_1, x_21, x_22, ..., x_2n_2} of
n_1 + n_2 vectors is linearly independent (see argument around equation (*)
above), and clearly spans U_1 + U_2 ⊆ V; hence it is a basis of U_1 + U_2.
So dim(U_1 + U_2) = n_1 + n_2 = dim U_1 + dim U_2 = dim V. According to the
previous result on the dimension of subspaces it follows that
U_1 + U_2 = V. But we know that U_1 ∩ U_2 = {0}. Hence (definition of direct
sum) V = U_1 ⊕ U_2.

Commutativity. U_1 + U_2 = U_2 + U_1 (Property I.4 of section A.3).
Commutativity. U_1 ⊕ U_2 = U_2 ⊕ U_1, iff in addition U_1 ∩ U_2 = {0}.
Associativity. U_1 + (U_2 + U_3) = (U_1 + U_2) + U_3 (Property I.2 of
section A.3).
Associativity. U_1 ⊕ (U_2 ⊕ U_3) = (U_1 ⊕ U_2) ⊕ U_3, in the sense that if
the ⊕-signs are valid in one member of the equality, they are valid in the
other member as well.

Proof. We shall assume the validity of U_1 ⊕ (U_2 ⊕ U_3). Take
y ∈ U_1 ⊕ (U_2 ⊕ U_3); we may write
y = x_1 + (x_2 + x_3) = (x_1 + x_2) + x_3 with x_1 ∈ U_1, x_2 ∈ U_2,
x_3 ∈ U_3. Since vector decomposition relative to direct sums is unique,
x_1 and x_2 + x_3 ∈ (U_2 ⊕ U_3) are uniquely determined by the condition
that y = x_1 + (x_2 + x_3); again x_2 and x_3 are uniquely determined by the
condition that x_2 + x_3 be their sum. Hence x_1 + x_2 ∈ U_1 + U_2 and
x_3 ∈ U_3 are uniquely determined as well; moreover U_1 ∩ U_2 = {0} (it
follows from U_1 ∩ (U_2 ⊕ U_3) = {0}), so the sum U_1 + U_2 is direct; and
finally (U_1 + U_2) ∩ U_3 = {0}, so the sum (U_1 + U_2) + U_3 is direct;
whence we are allowed to write (U_1 ⊕ U_2) ⊕ U_3 instead of
(U_1 + U_2) + U_3.

Notation. As is usual in cases where associativity holds we will omit
parentheses altogether and write U_1 ⊕ U_2 ⊕ U_3 for U_1 ⊕ (U_2 ⊕ U_3)
(since it was just proved, the parentheses may be put up any way we want
to). The preceding proof suggests a handy definition for:

Direct sum of k linear spaces. V' is said to be the direct sum of
U_1, U_2, ..., U_k (notation V' = U_1 ⊕ U_2 ⊕ ··· ⊕ U_k) iff
V' = U_1 + U_2 + ··· + U_k and in addition, for any y ∈ V', the condition
that y = x_1 + x_2 + ··· + x_k uniquely determines the vectors x_i ∈ U_i
(i = 1, 2, ..., k).

Corollary: associativity. U_1 ⊕ (U_2 ⊕ (U_3 ⊕ U_4)) = U_1 ⊕ U_2 ⊕ U_3 ⊕ U_4;
U_1 ⊕ (U_2 ⊕ (U_3 ⊕ (U_4 ⊕ (U_5 ⊕ U_6)))) =
U_1 ⊕ U_2 ⊕ U_3 ⊕ U_4 ⊕ U_5 ⊕ U_6; etc. Proof: similar to roughly the first
half of the proof for three linear spaces summed directly.

Note. For U_1 + U_2 + ··· + U_k to be a direct sum of U_1, ..., U_k it is
not sufficient that U_i ∩ U_j = {0} for all pairs (i, j) with i ≠ j; see
figure [in the original: three distinct lines through the origin of the
plane; their pairwise intersections are {0}, yet their sum is not direct].
One (necessary and) sufficient condition in terms of intersections can be
gathered from the above corollary (formulate it in case k = 4).
Linear mappings
'!he first condition for a mapping 1, :
be linear spaces.
Definition.
Let
V... W to be linear is that
,..""
,.,..,
V and
,ow
W
,....,
For our purposes we need only consider the case that ,.,.
W =,.,.
V •
1
be a linear space over a field
~
•
'lhe
~~~!~
t :! ... !
-_.---
is said to be linear iff
=~.t~
1, (al ~ + a 2 x2 )
for any Xl E
1'
~E
1,
(Xl E ~
+ (X2.e~
, Ct E ~ •
2
-
Note.
Image of a set.
e
Let .....
U
~
U under .e is defined by
.....V, then the image of .....
.eU
= (y E V j there exists x c ,....U ~ucb that y. 1,x)
,....,....
Inverse image of a.point.
under
.c
Let bE.!,
then tbe (complete) inverse image of b
is defined by
.c.~ • (x
E
l . tx = b
'!he range (or range space, or image set) of t
The null-space (nucleus, kernel) of t
~(!,)
By definition,
m(.t )
is m(.c) def tv
....,
•
is ~(!,) def .t-li.
are subsets of ,....
V.
'lhey are even sUbspaces:
is a subspace of ,..,.
V.
~:~~! .
Let
such that
(X1Y + (X2 Y2
l
y1 E ~ (.c ) and y2
Yl = .txl ,
E
!!t(t).
linear spaces.
!Jl (.c )
and ~(.t)
)
Y2
= !,x2
Xl E 1 and
x2
' hence ~Yl + (X2 Y2 = .c(<;.~+ (X2x2)'
so
E
!!t (.t ) ,
then there exist
This takes care of the properties
I.l
and
II.l
Q,
hence
The reader will check the other properties.
is a subspace of ,..,.
V.
~22f'
Let
Xl E ~(!,),
.r. (a 1 Xl + a 2x2 )
x2 E ~(!,) ,
= al'1:xl + a2t~
=~ ,
then
t~
= ~,
so (O:l~ + Ct2x~
Same remark as at the end of the preceding proof.
tX
2
E
=
~(t)
•
of
E
1
e
A-22
Suppose dim m(.t) = k
k
~
(for some mappings
I
k > 0,
for others
or rather
1). So we can choose a set of k independent vectors serving as a basis of
01(!).
(n-k)
Then we can extend this linearly independent set to a basis for
J.,
adding
vectors such that no linear combination of them is equal to any linear
combination of the original
additional vectorst
dim
k =0
~(!)
Theorem.
= n-k t
What about the !-image of the (n-k)
(n-a)
Evidently the
images span .tV (whyt)
(OW
Is
dim.tV =
(OW
The answer is yes:
dim SR ex) + dim
Proof.
k vectors.
m(! ) = n = dim.z .:.
Still calling dim
me!)
= k,
let
(zl' ••• , Zk}
be a basis of
me!) •
{zl' ••• , zk' xl' ••• , xn _k } of !. No
linear combination of xl' ••• , x _ could then equal any linear combination
h k
of zl' ••• , zk (unless all coefficients be zero). For any V €J. I there
Extend this basis to a basis
exist (unique)
and
t3i
Ct
k
n-k
v = 1: t3 zi + 1:
i=l i
j=l
n-k
So Xv = 1: OJ .1':xj
j=l
such that
j
Ct
j
xj
which proves the above argument that
{,c~, Xx , ".J
2
!Xn _ } spans !l = ~(X). Now let r j be such that
k
n-k
j:l r j !xj = !(1:j r j X j ) = Q, i.e.,
such that
of the
rj
1:
zi'
j
xj
€ ~(!).
rj
xj
but we know that this implies
This means that the !x.
J
= n - dim !n(!)
~.
Then 1:j
I
would equal a linear combination
rj~
0
for
j
= 1, ••• , n-k •
are linearly independent, hence dim ~(!) = n - k
which is what we wanted to prove.
Section 2.1, point (iv), shows that not necessarily ~(!)
hence not necessarily V = !Jl(.c)
(OW
e lJl(.c)
•
n !n(!) = {Q} ,
A-23
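A modern numerical check of the theorem, not part of the 1966 original
(Python with numpy assumed):

```python
import numpy as np

# Illustrative sketch: dim R(L) + dim N(L) = n for a square matrix L.
L = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 0.0, 1.0]])
n = L.shape[0]
rank = np.linalg.matrix_rank(L)            # dim R(L)
nullity = n - rank                         # dim N(L), by the theorem
print(rank, nullity, rank + nullity == n)  # 2 1 True
```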
Non-singular vs. singular. A linear mapping ℒ : V → V is non-singular iff
𝔑(ℒ) = ℒ^(-1) 0 = {0}, singular iff dim 𝔑(ℒ) ≥ 1. Equivalently, ℒ : V → V
is non-singular iff dim ℒV = n = dim V, singular iff dim ℒV < n = dim V.
Since ℒ maps V into V, ℒV ⊆ V, so that, because of the result about
dimensions of subspaces, ℒ is non-singular iff ℒV = V, and singular iff
ℒV ⊂ V (in which case ℒ maps many-to-one).

In general, a subspace U ⊆ V is said to be invariant under ℒ iff ℒU ⊆ U.
For such U, ℒ is said to be non-singular on U iff {v ∈ U : ℒv = 0} = {0};
equivalently, ℒ is non-singular on U iff ℒU = U.

Nilpotent. ℒ is said to be nilpotent on U iff some natural number k exists
such that ℒ^k U = {0}. (For the definition of ℒ^k see point (iii) of
section 1.3.)
SOME LITERATURE ON LINEAR ALGEBRA *)

Gel'fand, I.M. (1961), Lectures on linear algebra (transl.); New York,
Interscience; ix + 185 pp.

Halmos, Paul R. (1958), Finite-dimensional vector spaces; 2nd ed., Princeton,
N.J., Van Nostrand; vii + 200 pp.

Kowalsky, Hans-Joachim (1963), Lineare Algebra; Berlin, W. de Gruyter and
Co.; 340 pp. (Göschens Lehrbücherei, 1. Gruppe, Reine und angewandte
Mathematik, Bd. 27).

Mal'cev, A.I. (1963), Foundations of linear algebra (transl.); San
Francisco, Freeman; xi + 304 pp.

Mostow, George D., Sampson, Joseph H., Meyer, Jean-Pierre (1963),
Fundamental structures of algebra; New York, McGraw-Hill; xvi + 585 pp.

Nering, Evar D. (1963), Linear algebra and matrix theory; New York, Wiley;
xi + 279 pp.

Thrall, Robert M. and Tornheim, Leonard (1957), Vector spaces and matrices;
New York, Wiley; 318 pp.

Waerden, B.L. van der (1959), Algebra II (4. Aufl. der Modernen Algebra);
Berlin, Springer; x + 275 pp. (Section 111 has a short derivation of the
Jordan normal form on the basis of more advanced concepts.)

Zamansky, Marc (1963), Introduction à l'algèbre et l'analyse modernes;
2e édition; Paris, Dunod; xvi + 435 pp.

*) Only those titles are mentioned which have actually influenced our
presentation.