Linear Algebra and Finite Fields
Thomas Pasko
Dr. Hurd
Math 495
Abstract. Linear algebra focuses on vector spaces and on operations on
matrices. Specific topics of importance include determinants, inverses,
eigenvalues, eigenvectors, transposes, orthogonality, permutations, and linear
maps. The purpose of this paper is to connect topics from modern algebra, such
as rings and fields, to matrices; to that end it studies 2-by-2 and 3-by-3
matrices with entries from a finite field. In doing so, inverses, determinants,
eigenvalues, and eigenvectors are examined.
Part I. Introduction
What can one say about the linear algebra of 2-by-2 and 3-by-3 matrices when the
usual numbers are replaced with entries from a finite field?
This simple question is
enough to open up seemingly endless doors. In order to begin it may help to look back
on the history of some of these topics.
The first thing to examine is the subject of linear algebra itself. Once you
think of matrices you think of related notions such as the inverses and
determinants of matrices. As surprising as it may sound, the chicken did come
before the egg in this case: determinants have been around longer than
matrices. It was the discovery of determinants and Cramer's Rule for solving
systems of linear equations that began the use of matrices. Galois and Gauss
used finite fields before the study of field theory itself [5].
The following terms will be discussed:
Ring: <R, +, *> is a set R together with two binary operations + and * defined
on R such that the following axioms are satisfied:
- <R, +> is an abelian group.
- Multiplication is associative (but not necessarily commutative).
- For all a, b, c in R, the left distributive law a*(b+c) = (a*b)+(a*c) and the
right distributive law (a+b)*c = (a*c)+(b*c) both hold.
Field: A ring F is said to be a field provided that the set F − {0} is a
commutative group under the multiplication of F (the identity of this group,
the unity element, will be written as 1) [2, 4].
A matrix is a rectangular array of elements taken from a set R, usually
enclosed in parentheses or square brackets [1]. In this paper R will always be
a commutative ring with unity. In a square matrix the number of rows matches
the number of columns, and this number n is the order of the matrix. These are
usually referred to as n × n matrices, and we write Mn(R) for the set of square
matrices of order n with entries from the ring R. We write GF(q) for the finite
field with q elements, so M2(GF(5)) means the set of 2-by-2 matrices with
entries in GF(5). We use the notation aij to denote the (i, j) entry (row i,
column j) of a matrix D, and we write D = (aij). For compactness a matrix will
also be written row by row, with the rows separated by semicolons, so
[a b; c d] is the 2-by-2 matrix with first row (a, b) and second row (c, d).
Given a matrix D, Dij denotes the (n−1)-by-(n−1) matrix formed from D by
deleting row i and column j. When p is prime, Zp and GF(p) denote the same
field, and in any case q will always denote a power of a prime.
The determinant of a square matrix with entries from a ring R is a function
from the set of all n-by-n matrices Mn(R) into R. Proceeding by induction, the
determinant of a 1-by-1 matrix (a) is a. For n > 1 we define, for any matrix
D = (aij) of order n, by expansion along the first row,

det(D) = Σ_{j=1}^{n} (−1)^{j+1} a_{1j} det(D_{1j}).
In this paper we will restrict attention to n ≤ 3. The definition implies

det[a b; c d] = ad − bc

and

det[a b c; d e f; g h i]
    = a·det[e f; h i] − b·det[d f; g i] + c·det[d e; g h].
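The cofactor expansion above is easy to carry out mechanically. The following
Python sketch (the function names det2 and det3 and the modulus parameter q are
ours, not the paper's) computes 2-by-2 and 3-by-3 determinants with every
product reduced modulo q, so the arithmetic takes place in Zq:

    # Cofactor expansion for 2-by-2 and 3-by-3 determinants over Z_q.
    # A minimal sketch; matrices are lists of rows, e.g. [[a, b], [c, d]].

    def det2(m, q):
        """det [a b; c d] = ad - bc, reduced mod q."""
        (a, b), (c, d) = m
        return (a * d - b * c) % q

    def det3(m, q):
        """Expansion along the first row, as in the definition above."""
        a, b, c = m[0]
        minor = lambda j: [row[:j] + row[j + 1:] for row in m[1:]]  # drop row 1, column j+1
        return (a * det2(minor(0), q)
                - b * det2(minor(1), q)
                + c * det2(minor(2), q)) % q

    if __name__ == "__main__":
        print(det2([[2, 5], [1, 3]], 7))                   # 2*3 - 5*1 = 1 in Z_7
        print(det3([[1, 1, 1], [1, 0, 0], [0, 0, 1]], 2))  # -1 = 1 in Z_2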
Suppose u is the unity element of a ring R. The matrix with u on the main
diagonal and zeroes elsewhere is called the identity matrix and will be denoted
In. A 2-by-2 or 3-by-3 matrix A is invertible if there exists a matrix C such
that C*A = A*C = In. We will follow custom and denote the unity of GF(q) by 1.
The inverse of A is denoted by A⁻¹. If A is not invertible, it is singular [1].
2 5
Ex: Let A = 
 , and C =
1 3
3 2
6 2 with entries in Z7.


Then
2 5 3 2 36 14 1 0
AC = 



  I2.
1 3 6 2  21 8  0 1
We have used the facts that 36 ≡ 8 ≡ 1 (mod 7) and 21 ≡ 14 ≡0 (mod 7).
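This check is easy to automate. A minimal sketch (the helper name mat_mul_mod
is ours) multiplies matrices with entries reduced modulo n and confirms that
AC ≡ I2 and CA ≡ I2 in Z7:

    # Multiply two square matrices with entries reduced modulo n.
    def mat_mul_mod(x, y, n):
        size = len(x)
        return [[sum(x[i][k] * y[k][j] for k in range(size)) % n
                 for j in range(size)]
                for i in range(size)]

    A = [[2, 5], [1, 3]]
    C = [[3, 2], [6, 2]]
    print(mat_mul_mod(A, C, 7))  # [[1, 0], [0, 1]], i.e. A*C = I2 in Z_7
    print(mat_mul_mod(C, A, 7))  # also the identity, so C = A^(-1)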
Let A be an n × n matrix. A scalar λ is an eigenvalue of A if there is a
nonzero column vector v in Rⁿ such that Av = λv. The vector v is then an
eigenvector of A corresponding to λ. The characteristic equation is
det(A − λI) = 0.
2 2 1 4
1
Ex. 
    4 



3 1 1 4
1
1
 2 2
Thus the vector   is an eigenvector of the matrix 
 corresponding to the
1
3 1 
eigenvalue 4.(1)
An n × n matrix A is diagonalizable if there exists an invertible matrix C such
that C⁻¹AC = D, a diagonal matrix. The matrix C is said to diagonalize the
matrix A.
Part II. Method
Now we look at some 2-by-2 matrices with entries from GF(2) or GF(3) (we write
DNE when the inverse does not exist):

N = [1 0; 0 0]:  det N = 0,  N⁻¹ = DNE
N = [0 1; 0 0]:  det N = 0,  N⁻¹ = DNE
N = [0 0; 1 0]:  det N = 0,  N⁻¹ = DNE
N = [0 0; 0 1]:  det N = 0,  N⁻¹ = DNE
N = [1 1; 0 0]:  det N = 0,  N⁻¹ = DNE
N = [1 0; 1 0]:  det N = 0,  N⁻¹ = DNE
N = [1 1; 1 1]:  det N = 0,  N⁻¹ = DNE
N = [0 0; 0 0]:  det N = 0,  N⁻¹ = DNE
N = [1 0; 0 1]:  det N = 1,  N⁻¹ = [1 0; 0 1]
N = [1 1; 0 1]:  det N = 1,  N⁻¹ = [1 −1; 0 1]
N = [1 0; 1 1]:  det N = 1,  N⁻¹ = [1 0; −1 1]
N = [0 1; 1 1]:  det N = −1, N⁻¹ = [−1 1; 1 0]
N = [1 1; 1 0]:  det N = −1, N⁻¹ = [0 1; 1 −1]
N = [0 1; 1 0]:  det N = −1, N⁻¹ = [0 1; 1 0]
In GF(2) we note −1 = +1, and in GF(3) we note −1 = 2. But the determinant
calculations show that, just as for matrices with real entries, the usual
theorem on inverses is true in this setting (if det(A) = 0 then A⁻¹ does not
exist).
Theorem: For any N in M2(Z2), if det N = 0 then N⁻¹ does not exist.
Conversely, if det(N) = 1, then N⁻¹ exists.
Let the matrix A = [a b; c d] ∈ M2(Zp).
Then det A = 0 if and only if ad ≡ bc (mod p).
If ad ≡ bc (mod p), then in the field Zp we simply have ad = bc, that is,
ad − bc = 0. Thus we can leave out the congruence notation whenever convenient.
Theorem: A matrix A = [a b; c d] ∈ M2(GF(q)) is invertible if and only if
det(A) ≠ 0.
Proof:
Suppose det(A) ≠ 0. Then ad − bc is a non-zero element of GF(q), and thus
1/(ad − bc) = r for some element r of GF(q), since GF(q) is a field. But then
[rd −rb; −rc ra] is the inverse of [a b; c d].
Conversely, suppose A is invertible, say AC = I2. A direct computation shows
that det(AC) = det(A)·det(C) for 2-by-2 matrices, so det(A)·det(C) =
det(I2) = 1, and therefore det(A) ≠ 0. Equivalently, if det A = 0 then A⁻¹ does
not exist.
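For the small fields considered here the theorem can also be checked
exhaustively. The sketch below (the names det2_mod and is_invertible are ours)
runs over all 16 matrices in M2(GF(2)) and confirms that the determinant is
non-zero exactly when an inverse exists:

    from itertools import product

    def det2_mod(m, q):
        return (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % q

    def is_invertible(m, q):
        """Search for an inverse of m by brute force over M2(Z_q)."""
        for e in product(range(q), repeat=4):
            c = [[e[0], e[1]], [e[2], e[3]]]
            prod_mat = [[sum(m[i][k] * c[k][j] for k in range(2)) % q
                         for j in range(2)] for i in range(2)]
            if prod_mat == [[1, 0], [0, 1]]:
                return True
        return False

    q = 2
    for e in product(range(q), repeat=4):
        m = [[e[0], e[1]], [e[2], e[3]]]
        assert (det2_mod(m, q) != 0) == is_invertible(m, q)
    print("det(A) != 0 matches invertibility for all", q ** 4, "matrices in M2(GF(2))")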
1 3
Example: Let A = 
 with entries from Z4.
1
2


Then det A = (2*1) – (1*3) = -1 = 3
1
and A-1 = -1 
1
1 3 2
so 
*
1 2 1
3  2 3 
2 3
= 
or 



2  1  1
1 3
3 1 0
=
3 0 1
Thus Z4 has divisors of zero, yet A  M 2 (Z 4 ) has an inverse. So the previous theorem
can be made true for M2 (R) even if R is only a commutative ring with unity provided we
replace “det R ≠ 0” with “det R is a unit.”.
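The adjugate computation used in this example works over any commutative ring
with unity whenever the determinant is a unit. A short sketch (the name
inv2_mod and the use of Python's pow(det, -1, n) for the modular inverse are
our choices) reproduces the Z4 example:

    def inv2_mod(m, n):
        """Inverse of a 2x2 matrix over Z_n, provided det(m) is a unit mod n."""
        (a, b), (c, d) = m
        det = (a * d - b * c) % n
        det_inv = pow(det, -1, n)       # raises ValueError if det is not a unit
        adj = [[d, -b], [-c, a]]        # adjugate of m
        return [[(det_inv * entry) % n for entry in row] for row in adj]

    A = [[1, 3], [1, 2]]
    print(inv2_mod(A, 4))   # [[2, 3], [1, 3]], matching the hand computation above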
Now we can take a look at 3-by-3 matrices. Since there are 2⁹ = 512 different
3-by-3 matrices with entries from GF(2), only about 72 of them were examined
for this study. They include, but are not limited to, examples of the following
patterns:
1 0 0
0 0 0 det N= 0, N-1 = DNE


0 0 0
1 1 1
1 0 0 det N = -1, N-1 =


0 0 1
1 0 0
0 1 0 det = 1, N-1 =


0 0 1
1 0 0
0 1 0 


0 0 1
0  1 1 0
0 1
1  1  1 

-1

 1 1 0 det N= 0, N = DNE
0 0
1  0 0 1
Theorem: Let the matrix B = [a b c; d e f; g h i] ∈ M3(GF(q)). In order for
det B ≠ 0, and hence for B⁻¹ to exist, the following conditions must be
satisfied:

1.) a·det[e f; h i] − b·det[d f; g i] + c·det[d e; g h] ≠ 0

2.) If c·det[d e; g h] = 0, then a·det[e f; h i] ≠ b·det[d f; g i];
    and if a·det[e f; h i] = 0, then c·det[d e; g h] ≠ b·det[d f; g i];
    and if b·det[d f; g i] = 0, then a·det[e f; h i] ≠ −c·det[d e; g h].
Proof: Let the matrix B = [a b c; d e f; g h i].
From the definition above, we know that

det B = a·det[e f; h i] − b·det[d f; g i] + c·det[d e; g h].

First check condition 1. Assume, for contradiction, that

a·det[e f; h i] − b·det[d f; g i] + c·det[d e; g h] = 0.

Then det B = 0 and B⁻¹ does not exist, so condition 1 is necessary.

Now look at condition 2. Again assume the contrary in each case:
A.) If c·det[d e; g h] = 0 and a·det[e f; h i] = b·det[d f; g i], then
a·det[e f; h i] − b·det[d f; g i] = 0, and since c·det[d e; g h] = 0 we get
det B = 0. Thus B⁻¹ does not exist.
B.) If a·det[e f; h i] = 0 and c·det[d e; g h] = b·det[d f; g i], then
−b·det[d f; g i] + c·det[d e; g h] = 0, and since a·det[e f; h i] = 0 we again
get det B = 0. If det B = 0 then B⁻¹, whose entries carry the factor 1/det B,
does not exist. The remaining case, b·det[d f; g i] = 0, is handled in exactly
the same way. Hence there is no inverse.
1 1 1
Example: Let B = 1 0 0 with entries from Z2


0 0 1
Then det B = 1[(0*1)-(0*0)] – 1[(1*1)-(0*0)] + 1[(1*0)-(0*0)]
= 1[0] – 1[1] + 1[0]
Thus det B = -1
Now let us calculate B-1.
0
0 1

B-1  1  1  1
0 0
1 
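The 3-by-3 inverse can be computed the same way as in the 2-by-2 case, from the
adjugate and the inverse of the determinant. The sketch below (the names minor
and inv3_mod are ours) reproduces B⁻¹ over Z2:

    def det2(m, q):
        return (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % q

    def minor(m, i, j):
        """Delete row i and column j of a 3x3 matrix."""
        return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

    def inv3_mod(m, q):
        """Inverse of a 3x3 matrix over Z_q via the adjugate; det must be a unit."""
        det = sum((-1) ** j * m[0][j] * det2(minor(m, 0, j), q) for j in range(3)) % q
        det_inv = pow(det, -1, q)
        # the adjugate entry (i, j) is the (j, i) cofactor
        return [[(det_inv * (-1) ** (i + j) * det2(minor(m, j, i), q)) % q
                 for j in range(3)] for i in range(3)]

    B = [[1, 1, 1], [1, 0, 0], [0, 0, 1]]
    print(inv3_mod(B, 2))   # [[0, 1, 0], [1, 1, 1], [0, 0, 1]], i.e. B^(-1) over Z_2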
Next we will examine eigenvalues and eigenvectors over a finite field. To do
this we will first look for the eigenvalues of a matrix A with entries in Z3.
1 1
Let matrix A = 

2 1
Using the characteristic equation defined above we see that
1 
1 1  0  1  

A – λI = 
= 



2 1  0    2 1   
6
det (A – λI) = (1 – λ)2 – 2(1) = (1 – λ)2 – 2
1 – 2λ + λ2 – 2 = λ2 – 2λ – 1
λ2 – 2λ – 1 is irreducible in Z3. So let α be a root of λ2 – 2λ – 1. We therefore add α to Z3.
This gives us Z3(α) or GF(9) which is a finite field with 9 elements.
If we extend Z3 by α, then we get the field GF(9), whose addition and
multiplication tables are as follows:
Addition in GF(9) = Z3(α):

  +    |  0     1     2     α     α+1   α+2   2α    2α+1  2α+2
-------+-------------------------------------------------------
  0    |  0     1     2     α     α+1   α+2   2α    2α+1  2α+2
  1    |  1     2     0     α+1   α+2   α     2α+1  2α+2  2α
  2    |  2     0     1     α+2   α     α+1   2α+2  2α    2α+1
  α    |  α     α+1   α+2   2α    2α+1  2α+2  0     1     2
  α+1  |  α+1   α+2   α     2α+1  2α+2  2α    1     2     0
  α+2  |  α+2   α     α+1   2α+2  2α    2α+1  2     0     1
  2α   |  2α    2α+1  2α+2  0     1     2     α     α+1   α+2
  2α+1 |  2α+1  2α+2  2α    1     2     0     α+1   α+2   α
  2α+2 |  2α+2  2α    2α+1  2     0     1     α+2   α     α+1

and multiplication in GF(9) (using α² = 2α + 1):

  ×    |  0     1     2     α     α+1   α+2   2α    2α+1  2α+2
-------+-------------------------------------------------------
  0    |  0     0     0     0     0     0     0     0     0
  1    |  0     1     2     α     α+1   α+2   2α    2α+1  2α+2
  2    |  0     2     1     2α    2α+2  2α+1  α     α+2   α+1
  α    |  0     α     2α    2α+1  1     α+1   α+2   2α+2  2
  α+1  |  0     α+1   2α+2  1     α+2   2α    2     α     2α+1
  α+2  |  0     α+2   2α+1  α+1   2α    2     2α+2  1     α
  2α   |  0     2α    α     α+2   2     2α+2  2α+1  α+1   1
  2α+1 |  0     2α+1  α+2   2α+2  α     1     α+1   2     2α
  2α+2 |  0     2α+2  α+1   2     2α+1  α     1     2α    α+2
Here we observe that every non-zero element of GF(9) has a multiplicative
inverse (each non-zero row of the multiplication table contains a 1). Hence
GF(9) is a field.
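The GF(9) arithmetic above is easy to mechanize by representing a + bα as the
pair (a, b) and reducing with α² = 2α + 1. The following sketch (the helper
names gf9_add, gf9_mul, and gf9_show are ours) confirms that every non-zero
element has a multiplicative inverse:

    # Elements of GF(9) = Z_3(alpha) are pairs (a, b) standing for a + b*alpha,
    # with alpha^2 = 2*alpha + 1.

    ELEMENTS = [(a, b) for b in range(3) for a in range(3)]

    def gf9_add(x, y):
        return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

    def gf9_mul(x, y):
        a, b = x
        c, d = y
        # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha^2
        #                            = (ac + bd) + (ad + bc + 2bd)*alpha
        return ((a * c + b * d) % 3, (a * d + b * c + 2 * b * d) % 3)

    def gf9_show(x):
        a, b = x
        if b == 0:
            return str(a)
        alpha_part = "α" if b == 1 else "2α"
        return alpha_part if a == 0 else alpha_part + "+" + str(a)

    # Check that every non-zero element has a multiplicative inverse.
    one = (1, 0)
    for x in ELEMENTS:
        if x != (0, 0):
            inverses = [y for y in ELEMENTS if gf9_mul(x, y) == one]
            print(gf9_show(x), "has inverse", gf9_show(inverses[0]))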
To find any eigenvalues and eigenvectors we need to multiply out A[x; y] for
every pair x, y ∈ GF(9). The results of these products are as follows:
1 1 0 0
2 1 0  0

   
1 1 0 1
2 1 1  1

  
7
1 1 0 2
2 1 2  2

   
1 1  0   
2 1     

   
1 1  0    1
2 1   1    1
 



1 1  0    2
2 1   2    2
 



1 1  0  2 
2 1 2   2 

   
1 1  0  2  1
2 1 2  1  2  1
 



1 1  0  2  2
2 1 2  2  2  2
 



1 1 1 1
2 1 0  2

   
1 1 1 2
2 1 1  0

  
1 1 1 0
2 1 2  1

   
1 1  1     1 
2 1      2


  
1 1  1    2
2 1   1    
 



1 1  1    
2 1   2    1
 



1 1  1   2  1
2 1 2   2  2


  
1 1  1  2  2
2 1 2  1   2 
 



1 1  1   2 
2 1 2  2  2  1
 



1 1 2 2
2 1 0  1

   
1 1 2 0
2 1 1  2

   
1 1 2 1
2 1 2  0

   
1 1  2    2
2 1       1 


  
1 1  2    
2 1   1    2
 



1 1  2    1
2 1   2    
 



1 1  2  2  2
2 1 2    2  1


  
1 1  2   2 
2 1 2  1  2  2
 



1 1  2  2  1
2 1 2  2   2 
 



1 1     
2 1  0   2 

   
8
1 1      1 
2 1  1   2  1


  
1 1      2 
2 1  2   2  2


  
1 1   2 
2 1     0 

   
1 1    2  1
2 1   1   1 
 



1 1    2  2
2 1   2   2 
 



1 1     0 
2 1 2    

   
1 1     1 
2 1 2  1    1
 



1 1     2 
2 1 2  2    2
 



1 1   1    1 
2 1  0   2  2
 



1 1   1   2
2 1  1    2 
 



1 1   1   
2 1  2   2  1
 



1 1   1 2  1
2 1      2 
 



1 1   1 2  2
2 1   1   0 
 



1 1    1 2 
2 1   2   1 
  


1 1   1  1 
2 1  2     2
 



1 1    1   2 
2 1 2  1   
  


1 1    1   0 
2 1 2  2    1
 



1 1   2    2 
2 1  0   2  1
 



1 1   2   
2 1  1   2  2
 



1 1   2   1
2 1  2    2 
 



1 1   2 2  2
2 1      1 
 



1 1   2 2 
2 1    1   2 
  


1 1   2 2  1
2 1   2   0 
 



1 1   2  2 
2 1  2     1
 



1 1    2   0 
2 1 2  1    2
 



1 1    2   1 
2 1 2  2   
  


9
1 1 2  2 
2 1  0     

   
1 1 2  2  1
2 1  1      1 


  
1 1 2  2  2
2 1  2      2 


  
1 1 2   0 
2 1     2 

   
1 1  2   1 
2 1   1  2  1
 



1 1  2   2 
2 1   2  2  2
 



1 1 2   
2 1 2    0 

   
1 1  2    1
2 1 2  1   1 
 



1 1  2    2
2 1 2  2   2 
 



1 1 2  1 2  1
2 1  0      2 
 



1 1 2  1 2  2
2 1  1     
 



1 1 2  1  2 
2 1  2     1
 



1 1 2  1  1 
2 1     2  2
 



1 1 2  1  2 
2 1    1   2 
  


1 1 2  1  0 
2 1    2   2  1
 



1 1 2  1   1
2 1  2    2 
 



1 1 2  1   2
2 1 2  1   0 
 



1 1  2  1  
2 1 2  2   1 
  


1 1 2  2 2  2
2 1  0      1 
 



1 1 2  2  2 
2 1  1     2
 



1 1 2  2 2  1
2 1  2     
 



1 1 2  2  2 
2 1     2  1
 



1 1 2  2  0 
2 1    1   2  2
 



1 1 2  2  1 
2 1    2   2 
  


1 1 2  2   2
2 1  2    1 
 



1 1 2  2  
2 1  2  1   2 
  


10
1 1 2  2   1
2 1 2  2   0 
 



After finding these results the next thing to do is to pick out the eigenvalues
and eigenvectors. One solution found is illustrated by

[1 1; 2 1][2α+2; α] = [2; 2α+1].

Now we show that λ[x; y] = A[x; y] for λ = α: notice that

α·[2α+2; α] = [α(2α+2); α²] = [2; 2α+1],

so α is an eigenvalue and [2α+2; α] is a corresponding eigenvector. For this
calculation, recall α² = 2α + 1. Looking through the other possibilities shows
that α is not the only eigenvalue of A in GF(9). To find all of the eigenvalues
we return to the characteristic polynomial λ² − 2λ − 1.
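This search can be carried out mechanically as well. The sketch below repeats
the pair representation of GF(9) from the earlier sketch so that it stands
alone, and prints every pair (λ, v) with A·v = λ·v; its output shows
eigenvectors for both α and 2α + 2:

    # Arithmetic in GF(9): (a, b) means a + b*alpha, with alpha^2 = 2*alpha + 1.
    ELEMENTS = [(a, b) for b in range(3) for a in range(3)]
    add = lambda x, y: ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
    mul = lambda x, y: ((x[0] * y[0] + x[1] * y[1]) % 3,
                        (x[0] * y[1] + x[1] * y[0] + 2 * x[1] * y[1]) % 3)

    # The matrix A = [1 1; 2 1] with its entries viewed inside GF(9).
    A = [[(1, 0), (1, 0)], [(2, 0), (1, 0)]]

    def apply(m, v):
        return [add(mul(m[0][0], v[0]), mul(m[0][1], v[1])),
                add(mul(m[1][0], v[0]), mul(m[1][1], v[1]))]

    # Search all non-zero vectors for A*v = lam*v.
    for x in ELEMENTS:
        for y in ELEMENTS:
            v = [x, y]
            if v == [(0, 0), (0, 0)]:
                continue
            Av = apply(A, v)
            for lam in ELEMENTS:
                if Av == [mul(lam, x), mul(lam, y)]:
                    print("eigenvalue", lam, "eigenvector", v)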
Setting λ² − 2λ − 1 = 0 and applying the quadratic formula, the solution is

λ = (2 ± √(4 + 4(1)(1)))/2 = (2 ± √8)/2 = 1 ± √2.

So the two eigenvalues of A are

λ1 = 1 + √2   and   λ2 = 1 − √2.

In GF(9) the element 2 is a square, since (α + 2)² = 2, so we may take
√2 = α + 2. Then λ1 = 1 + √2 = α and λ2 = 1 − √2 = 2α + 2, and both eigenvalues
already lie in GF(9).
The two corresponding eigenvectors which we found are

[1; √2]   and   [−1; √2].

To test this we look at

[1 1; 2 1][1; √2] = [1 + √2; 2 + √2].
So now we use the eigenvalues to test them:

(1 + √2)·[1; √2] = [1 + √2; √2 + 2],

and

[1 1; 2 1][−1; √2] = [−1 + √2; −2 + √2] = (1 − √2)·[−1; √2].
So now the diagonalization is given by conjugation by a matrix whose columns
are eigenvectors from distinct eigenvalues. We illustrate with the matrix A.
We claim that A = CDC⁻¹, where

C = [1 −1; √2 √2],    C⁻¹ = [1/2 √2/4; −1/2 √2/4],

and the diagonal matrix of eigenvalues is

D = [1+√2 0; 0 1−√2].

When we expand CDC⁻¹ we get

[1 −1; √2 √2][1+√2 0; 0 1−√2][1/2 √2/4; −1/2 √2/4] = [1 1; 2 1] = A.

Therefore the matrix A is diagonalizable over GF(9), just as it would be in the
linear algebra of matrices with real entries. (Inside GF(9), with √2 = α + 2,
the fractions simplify: 1/2 = 2, −1/2 = 1, and √2/4 = √2 = α + 2, so
C⁻¹ = [2 α+2; 1 α+2].)
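As a final check, the product CDC⁻¹ can be multiplied out inside GF(9) itself,
using the pair representation from the earlier sketches (repeated here so the
block is self-contained) and taking √2 = α + 2:

    # GF(9) arithmetic: (a, b) stands for a + b*alpha with alpha^2 = 2*alpha + 1.
    add = lambda x, y: ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
    mul = lambda x, y: ((x[0] * y[0] + x[1] * y[1]) % 3,
                        (x[0] * y[1] + x[1] * y[0] + 2 * x[1] * y[1]) % 3)

    def mat_mul(m, n):
        return [[add(mul(m[i][0], n[0][j]), mul(m[i][1], n[1][j]))
                 for j in range(2)] for i in range(2)]

    sqrt2 = (2, 1)                          # alpha + 2, since (alpha + 2)^2 = 2
    lam1 = add((1, 0), sqrt2)               # 1 + sqrt(2) = alpha
    lam2 = add((1, 0), mul((2, 0), sqrt2))  # 1 - sqrt(2) = 2*alpha + 2

    C     = [[(1, 0), (2, 0)], [sqrt2, sqrt2]]  # columns are the eigenvectors; -1 = 2
    C_inv = [[(2, 0), sqrt2], [(1, 0), sqrt2]]  # 1/2 = 2, -1/2 = 1, sqrt(2)/4 = sqrt(2)
    D     = [[lam1, (0, 0)], [(0, 0), lam2]]

    print(mat_mul(mat_mul(C, D), C_inv))
    # [[(1, 0), (1, 0)], [(2, 0), (1, 0)]]  ==  the matrix A = [1 1; 2 1]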
1
A = CDC-1, where the matrix C = 
 2








Sources:
1) Fraleigh and Beauregard, Linear Algebra, 3rd Edition, Addison-Wesley, New
York, 1995.
2) David Burton, Abstract and Linear Algebra, Addison-Wesley, Philippines,
1972.
3) Cyrus Colton MacDuffee, An Introduction to Abstract Algebra, John Wiley &
Sons, New York, 1940.
4) David Burton, Introduction to Modern Abstract Algebra, Addison-Wesley,
Massachusetts, 1967.
5) David Burton, The History of Mathematics: An Introduction, Allyn & Bacon,
Boston, 1985.
6) A. Jones, S. Morris, and K. Pearson, Abstract Algebra and Famous
Impossibilities, Springer-Verlag, New York, 1991.