12. AN INDEX TO MATRICES
--- definitions, facts and rules ---

This index is based on the following goals and observations:

- To give the user quick reference to an actual matrix definition or rule, the index form is preferred. However, the index should to a large extent be self-explanatory.

- The contents are selected in relation to their importance for matrix formulations in solid mechanics.

- The existence of good computer software for the numerical calculations diminishes the need for details on specific procedures.

- The existence of good computer software for formula manipulation means that extended analytical work is possible.

- The index is written by a non-mathematician (but hopefully without errors), and is written for readers with a primary interest in applying the matrix formulation without studying the matrix theory itself.

- Available chapters or appendices in books on solid mechanics are not found extensive enough, and good classic books on linear algebra are found too extensive. For further reference, see e.g.

Gantmacher, F.R. (1959) 'The Theory of Matrices', Chelsea Publ. Co., Vol. I, 374 p., Vol. II, 276 p.
Gel'fand, I.M. (1961) 'Lectures on Linear Algebra', Interscience Publ. Inc., 185 p.
Muir, T. (1928) 'A Treatise on the Theory of Determinants', Dover Publ. Inc., 766 p.
Noble, B. and Daniel, J.W. (1988) 'Applied Linear Algebra', Prentice-Hall, third ed., 521 p.
Strang, G. (1988) 'Linear Algebra and its Applications', Harcourt Brace Jovanovich, 505 p.
Strang, G. (1986) 'Introduction to Applied Mathematics', Wellesley-Cambridge Press, 758 p.

It will be noticed that the rather lengthy notation with [ ] for matrices and { } for vectors (column matrices) is preferred to the simpler boldface or underscore notations. The reason for this is that the brackets constantly remind the reader that we are dealing with a block of quantities. To miss this point is catastrophic in matrix calculations. Furthermore, the lengthy notation adds to the possibilities for direct graphical interpretation of the formulas.

Cross-references in the index are indicated by boldface. The preliminary advice from colleagues and students is very much appreciated, and I shall be grateful for further criticism and comments that can improve the index.

ADDITION
of matrices

Matrices are added by adding the corresponding elements

  [C] = [A] + [B]  with  C_ij = A_ij + B_ij

The matrices must have the same order.

ANTI-METRIC or
ANTI-SYMMETRIC

See skew-symmetric matrix.

BILINEAR FORM

For a matrix [A] we define the bilinear form by

  {X}^T [A] {Y}

BILINEAR
INEQUALITY

For a symmetric, positive definite matrix [A] we have by definition for the following two quadratic forms:

  {X_a}^T [A] {X_a} = u_a > 0  for {X_a} ≠ {0}
  {X_b}^T [A] {X_b} = u_b > 0  for {X_b} ≠ {0}

The bilinear form fulfills the inequality

  {X_a}^T [A] {X_b} ≤ (1/2)(u_a + u_b)

i.e. it is less than or equal to the mean value of the values of the quadratic forms.

This follows directly from

  ({X_a}^T – {X_b}^T) [A] ({X_a} – {X_b}) ≥ 0

with equality only for {X_a} = {X_b}. Expanding, and using [A]^T = [A], we get with the definitions above

  u_a + u_b – 2 {X_a}^T [A] {X_b} ≥ 0

BIORTHOGONALITY
conditions

From the description of the generalized eigenvalue problem (see this) with right and left eigenvectors {Φ}_i and {Ψ}_i we have

  {Ψ}_j^T ([A] – λ_i [B]) {Φ}_i = 0

and

  {Ψ}_j^T ([A] – λ_j [B]) {Φ}_i = 0

which by subtraction gives

  (λ_i – λ_j) {Ψ}_j^T [B] {Φ}_i = 0

For different eigenvalues λ_i ≠ λ_j this implies

  {Ψ}_j^T [B] {Φ}_i = 0

and thus also

  {Ψ}_j^T [A] {Φ}_i = 0

which are termed the biorthogonality conditions.

For a symmetric eigenvalue problem {Ψ}_i = {Φ}_i (see orthogonality conditions).

CHARACTERISTIC
POLYNOMIUM
(generalized)

From the determinant condition

  |[A]λ² + [B]λ + [C]| = 0

with the square matrices [A], [B] and [C] all of order n we obtain a polynomium of order 2n in λ. This polynomium is termed the characteristic polynomium of the triple ([A], [B], [C]).

Specific cases such as

  |[A]λ² + [C]| = 0  ,  |[I]λ + [C]| = 0

are often encountered.

CHOLESKI
factorization /
triangularization

See factorization of a matrix.

COEFFICIENTS
of a matrix

See elements of a matrix.

COFACTOR
of a matrix element

The cofactor of a matrix element is the corresponding minor with an appropriate sign. If the sum of row and column indices for the matrix element is even, the cofactor is equal to the minor. If this sum is odd the cofactor is the minor with reversed sign, i.e.

  Cofactor(A_ij) = (–1)^(i+j) Minor(A_ij)

COLUMN
matrix

A column matrix is a matrix with only one column, i.e. order m × 1. The notation { } is used for a column matrix. The name column vector or just vector is also used.

CONGRUENCE
transformation

A congruence transformation of a square matrix [A] to a square matrix [B] of the same order is by the regular transformation matrix [T] of the same order

  [B] = [T]^T [A] [T]

Matrices [A] and [B] are said to be congruent matrices; they have the same rank and the same definiteness, but not necessarily the same eigenvalues. A congruence transformation is also an equivalence transformation.

CONJUGATE
TRANSPOSE

The conjugate transpose is a transformation of matrices with complex elements. The complex conjugate is denoted by a bar and the transpose by a superscript T. With a short notation (from the name Hermitian) we denote the combined transformation as

  [A]^H = [Ā]^T

CONTRACTED
NOTATION
for a symmetric matrix

For a symmetric matrix, a simpler contracted notation in terms of a row or column matrix is possible. Of the notations which keep the orthogonal transformation, we choose the form with √2-factors multiplied to the off-diagonal elements in the matrix, i.e. {B} from [A] with

  B_i = A_ii            for i = 1, 2, ..., n
  B_(n+...) = √2 A_ij   for j > i

(The ordering within {B} symbolized by n+... is not specified.)

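As a numerical illustration, a minimal NumPy sketch of this contraction for a matrix of order 3; the ordering of the off-diagonal terms is left open by the entry, so the row-by-row ordering used below is only an assumed example, and so are the numerical values.

```python
import numpy as np

def contract_symmetric(A):
    """Contract a symmetric n x n matrix [A] to a vector {B}:
    diagonal elements first, then sqrt(2) times the off-diagonal
    elements (here taken row by row, j > i; the index entry does
    not fix this ordering)."""
    n = A.shape[0]
    diag = [A[i, i] for i in range(n)]
    off = [np.sqrt(2.0) * A[i, j] for i in range(n) for j in range(i + 1, n)]
    return np.array(diag + off)

A = np.array([[2.0, 0.5, 0.1],
              [0.5, 3.0, 0.4],
              [0.1, 0.4, 1.0]])
B = contract_symmetric(A)

# The sqrt(2)-factors make the contraction length-preserving:
# the squared length of {B} equals the squared Frobenius norm of [A].
print(np.allclose(B @ B, np.sum(A * A)))   # True
```
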
CONVEX SPACE
by positive definite matrix

For a symmetric, positive definite matrix [A] we have by definition for the following two quadratic forms:

  {X_a}^T [A] {X_a} = u_a ;  0 < u_a
  {X_b}^T [A] {X_b} = u_b ;  0 < u_b ≤ u_a

The matrix [A] describes a convex space such that for

  {X_α} = α{X_a} + (1 – α){X_b} ;  0 ≤ α ≤ 1

we have for all values of α

  {X_α}^T [A] {X_α} = u_α ≤ u_a

Inserting directly we have with [A]^T = [A]

  (α{X_a}^T + (1 – α){X_b}^T) [A] (α{X_a} + (1 – α){X_b})
  = α² {X_a}^T [A] {X_a} + (1 – α)² {X_b}^T [A] {X_b} + 2α(1 – α) {X_a}^T [A] {X_b}
  = α² u_a + (1 – α)² u_b + 2α(1 – α) {X_a}^T [A] {X_b}

From the bilinear inequality we have

  {X_a}^T [A] {X_b} ≤ (1/2)(u_a + u_b)

and thus with u_b ≤ u_a we can substitute greater values and obtain

  {X_α}^T [A] {X_α} ≤ α² u_a + (1 – α)² u_a + 2α(1 – α) u_a = u_a

DEFINITENESS

For a symmetric matrix the following notions are used if, for the matrix:

  positive definite       : all eigenvalues are positive
  positive semi-definite  : eigenvalues non-negative
  negative definite       : all eigenvalues are negative
  negative semi-definite  : eigenvalues non-positive
  indefinite              : both positive and negative eigenvalues

See specifically positive definite, negative definite and indefinite for alternative statements of these conditions.

DETERMINANT
of a matrix

The determinant of a square matrix is a scalar, calculated as a sum of products of elements from the matrix. The symbol of two vertical lines

  det([A]) = |[A]|

is used for this quantity.

For a square matrix of order two the determinant is

  |[A]| = | A_11  A_12 | = A_11 A_22 – A_12 A_21
          | A_21  A_22 |

For a square matrix of order three the determinant is

  |[A]| = | A_11  A_12  A_13 |
          | A_21  A_22  A_23 | =
          | A_31  A_32  A_33 |

  A_11 A_22 A_33 + A_12 A_23 A_31 + A_13 A_21 A_32 – A_31 A_22 A_13 – A_32 A_23 A_11 – A_33 A_21 A_12

We note that for each product the number of elements is equal to the order of the matrix, and that in each product a row or a column is represented by only one element. In total, for a matrix of order n there are n! terms to be summed.

For further calculation procedures see determinants by minors/cofactors.

DETERMINANTS
BY MINORS /
COFACTORS

A determinant can be calculated in terms of cofactors (or minors), by expansion along an arbitrary row or column.

As an example, for a matrix of order three, expansion along the third column yields:

  | A_11  A_12  A_13 |
  | A_21  A_22  A_23 | = A_13 Minor(A_13) – A_23 Minor(A_23) + A_33 Minor(A_33)
  | A_31  A_32  A_33 |

See determinant of a matrix for direct comparison.

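As a numerical check, a minimal NumPy sketch of the minor, the cofactor and the expansion along the third column; the function names and the test matrix are only illustrative.

```python
import numpy as np

def minor(A, i, j):
    """Determinant of [A] with row i and column j omitted (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Minor with the sign (-1)**(i+j); with 0-based indices this equals
    the sign (-1)**((i+1)+(j+1)) of the 1-based formula."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# Expansion along the third column (0-based column index 2).
expansion = sum(A[i, 2] * cofactor(A, i, 2) for i in range(3))
print(np.isclose(expansion, np.linalg.det(A)))   # True
```
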
DETERMINANT
OF AN INVERSE
matrix

The product of the determinants of a regular matrix [A] and its inverse [A]^–1 is equal to 1

  |[A]^–1| = 1 / |[A]|

DETERMINANT
OF A PRODUCT
of matrices

The determinant of a product of square matrices is equal to the product of the individual determinants, i.e.

  |[A][B]| = |[A]| |[B]|

DETERMINANT
OF A TRANSPOSED
matrix

The determinant of a transposed square matrix is equal to the determinant of the matrix itself, i.e.

  |[A]^T| = |[A]|

DIAGONAL
matrix

A diagonal matrix is a matrix where all off-diagonal elements have the value zero

  [A] is a diagonal matrix when A_ij = 0 for i ≠ j

and at least one diagonal element is non-zero. This definition also holds for non-square matrices, as in the singular value decomposition.

DIFFERENTIAL
matrix

See functional matrix.

DIFFERENTIATION
of a matrix

Differentiation of a matrix is carried out by differentiation of each element

  [C] = d([A])/db  with  C_ij = d(A_ij)/db

DIMENSIONS
of a matrix

See order of a matrix.

DOT PRODUCT
of two vectors

See scalar product of two vectors.

DYADIC PRODUCT
of two vectors

The dyadic product of two vectors {A} and {B} of the same order n results in a square matrix [C] of order n × n, but only with rank 1

  [C] = {A}{B}^T  with  C_ij = A_i B_j

Dyadic products of vectors of different orders can also be defined, resulting in a matrix of order m × n.

EIGENPAIR

The eigenpair (λ_i , {Φ}_i) is a solution to an eigenvalue problem. The eigenvector {Φ}_i corresponds to the eigenvalue λ_i.

EIGENVALUES
of a matrix

The eigenvalues λ_i of a square matrix [A] are the solutions to the standard form for the eigenvalue problem, with

  ([A] – λ_i[I]){Φ}_i = {0}  ⇒  |[A] – λ_i[I]| = 0

which gives a characteristic polynomium.

EIGENVALUE
PROBLEM

With [A] and [B] being two square matrices of order n, the generalized eigenvalue problem is defined by

  ([A] – λ_i [B]) {Φ}_i = {0}  for i = 1, 2, ..., n

or by

  {Ψ}_i^T ([A] – λ_i [B]) = {0}^T  for i = 1, 2, ..., n

The pairs of eigenvalues and eigenvectors are λ_i , {Φ}_i and λ_i , {Ψ}_i, with {Φ}_i as right eigenvector and {Ψ}_i as left eigenvector. The eigenvalue problem has n solutions, with the possibility of multiplicity.

With [B] being an identity matrix we have the standard form for an eigenvalue problem, while for [B] not being an identity matrix the name generalized eigenvalue problem is used.

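As a numerical illustration, a minimal SciPy sketch of the generalized eigenvalue problem with right and left eigenvectors; scipy.linalg.eig accepts a second matrix and can return both sets of eigenvectors. The matrices below are arbitrary examples.

```python
import numpy as np
from scipy.linalg import eig

# A non-symmetric [A] and a symmetric [B]: a generalized eigenvalue problem.
A = np.array([[2.0, 1.0], [0.5, 3.0]])
B = np.array([[2.0, 0.3], [0.3, 1.0]])

# w: eigenvalues, vl: left eigenvectors (columns), vr: right eigenvectors (columns)
w, vl, vr = eig(A, B, left=True, right=True)

i = 0
phi = vr[:, i]   # right eigenvector {Phi}_i
psi = vl[:, i]   # left eigenvector {Psi}_i

print(np.allclose(A @ phi, w[i] * (B @ phi)))                      # ([A] - lambda_i [B]) {Phi}_i = {0}
print(np.allclose(psi.conj().T @ A, w[i] * (psi.conj().T @ B)))    # {Psi}_i^T ([A] - lambda_i [B]) = {0}^T
```
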
EIGENVECTOR

An eigenvector {Φ}_i is the vector part of a solution to an eigenvalue problem. The word eigen reflects the fact that the vector is transformed into itself except for a factor, the eigenvalue λ_i.

ELEMENTS
of a matrix

The elements of a matrix [A] are the individual entries A_ij. In a matrix of order m × n there are mn elements A_ij, for i = 1, 2, ..., m and j = 1, 2, ..., n. Elements are also called the members or the coefficients of the matrix.

EQUALITY
of matrices

Two matrices of the same order are equal if the corresponding elements of the two matrices are equal, i.e.

  [A] = [B]  if  A_ij = B_ij for all ij

EQUIVALENCE
transformations

An equivalence transformation of a matrix [A] to a matrix [B] (not necessarily square matrices) by the two square, regular transformation matrices [T_1] and [T_2] is

  [B] = [T_1][A][T_2]

Matrices [A] and [B] are said to be equivalent matrices and have the same rank.

EXPONENTIAL
of a matrix

The exponential of a square matrix [A] is defined by its power series expansion

  e^([A]t) := [I] + [A]t + [A]²t²/2! + [A]³t³/3! + ...

The series always converges, and the exponential properties are kept, i.e.

  e^([A]t) e^([A]s) = e^([A](t+s)) ,  e^([A]t) e^([A](–t)) = [I] ,  d(e^([A]t))/dt = [A] e^([A]t)

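As a numerical illustration, a minimal SciPy sketch comparing a truncated power series with the library routine scipy.linalg.expm and checking one of the listed exponential properties; the matrix and the values of t and s are arbitrary examples.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, s = 0.7, 0.4

def expm_series(M, terms=30):
    """Truncated power series [I] + [M] + [M]^2/2! + ... (illustration only)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k          # accumulates [M]^k / k!
        out = out + term
    return out

print(np.allclose(expm_series(A * t), expm(A * t)))                 # series vs. library routine
print(np.allclose(expm(A * t) @ expm(A * s), expm(A * (t + s))))    # e^[A]t e^[A]s = e^[A](t+s)
```
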
FACTORIZATION
of a matrix

A symmetric, regular matrix [A] of order n can be factorized into the product of a lower triangular matrix [L], a diagonal matrix [B] and the upper triangular matrix [L]^T, all of the order n

  [A] = [L][B][L]^T

In a Gauss factorization the diagonal elements of [L] are all 1.

A Choleski factorization is only possible for positive semi-definite matrices, and then [B] = [I] and we get

  [A] = [L][L]^T

with L_ii not necessarily equal to 1.

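As a numerical illustration, a minimal NumPy sketch of a Choleski factorization of a symmetric, positive definite matrix; numpy.linalg.cholesky returns the lower triangular factor [L]. The matrix is an arbitrary example.

```python
import numpy as np

# A symmetric, positive definite matrix.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])

L = np.linalg.cholesky(A)          # lower triangular [L] with [A] = [L][L]^T
print(np.allclose(A, L @ L.T))     # True
print(np.allclose(L, np.tril(L)))  # [L] is lower triangular
```
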
FROBENIUS
norm of a matrix

The Frobenius norm of a matrix [A] is defined as the square root of the sum of the squares of all the elements of [A].

For a square matrix of order 2 we get

  Frobenius = √(A_11² + A_22² + A_12² + A_21²)

and thus for a symmetric matrix equal to the square root of the invariant I_3.

For a square matrix of order 3 we get

  Frobenius = √((A_11² + A_21² + A_31²) + (A_22² + A_12² + A_32²) + (A_33² + A_13² + A_23²))

and thus for a symmetric matrix equal to the square root of the invariant I_4.

FULL RANK

See rank of a matrix.

FUNCTIONAL
MATRIX

The functional matrix [G] consists of partial derivatives: the partial derivatives of the elements of a vector {A} of order m with respect to the elements of a vector {B} of order n. Thus the functional matrix is of the order m × n

  [G] = ∂{A}/∂{B}  with  G_ij = ∂A_i/∂B_j

The name gradient matrix is also used. A square functional matrix is named a Jacobi matrix, and the determinant of this matrix the Jacobian.

GAUSS
factorization /
triangularization

See factorization of a matrix.

GENERALIZED
EIGENVALUE
PROBLEM

See eigenvalue problem.

GEOMETRIC
vector

A vector of order two or three in a Euclidean plane or space. By a geometric vector we mean an oriented piece of a line (an "arrow"). See vectors.

GRADIENT
matrix

See functional matrix.

HERMITIAN
matrix

A square matrix [A] is termed Hermitian if it is not changed by the conjugate transpose transformation, i.e.

  [A]^H = [A]

Every eigenvalue of a Hermitian matrix is real, and the eigenvectors are mutually orthogonal, as for symmetric real matrices.

HESSIAN
matrix

A Hessian matrix [H] is a square, symmetric matrix containing the second order derivatives of a scalar F with respect to the vector {A}

  [H] = ∂²F/(∂{A}∂{A})  with  H_ij = ∂²F/(∂A_i ∂A_j)

HURWITZ
determinants

The Hurwitz determinants up to order eight are defined by

         | a_1  a_3  a_5  a_7   0    0    0    0  |
         | a_0  a_2  a_4  a_6  a_8   0    0    0  |
         |  0   a_1  a_3  a_5  a_7   0    0    0  |
  H_i := |  0   a_0  a_2  a_4  a_6  a_8   0    0  |
         |  0    0   a_1  a_3  a_5  a_7   0    0  |
         |  0    0   a_0  a_2  a_4  a_6  a_8   0  |
         |  0    0    0   a_1  a_3  a_5  a_7   0  |
         |  0    0    0   a_0  a_2  a_4  a_6  a_8 |

to be read in the sense that H_i is the determinant of order i defined in the upper left corner (principal submatrix). More specifically,

  H_1 = a_1
  H_2 = a_1 a_2 – a_0 a_3
  H_3 = H_2 a_3 – (a_1 a_4 – a_0 a_5) a_1
  ...

If the highest order is n, then a_m = 0 for m > n, and therefore the highest Hurwitz determinant is given by

  H_n = H_(n–1) a_n

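As a numerical illustration, a minimal NumPy sketch that builds the Hurwitz matrix from the coefficients a_0, ..., a_n and takes the leading principal minors H_i; the function name, the coefficient ordering of the argument and the example polynomial are only assumptions for the sketch.

```python
import numpy as np

def hurwitz_determinants(a):
    """Hurwitz determinants H_1, ..., H_n for coefficients a = [a_0, a_1, ..., a_n].
    The Hurwitz matrix has entry a_(2j - i) in row i, column j (1-based),
    with a_m = 0 outside 0 <= m <= n."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            m = 2 * j - i
            if 0 <= m <= n:
                H[i - 1, j - 1] = a[m]
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

# Example: a_0 s^3 + a_1 s^2 + a_2 s + a_3 with a = [1, 6, 11, 6] (roots -1, -2, -3).
dets = hurwitz_determinants([1.0, 6.0, 11.0, 6.0])
print(dets)                       # H_1 = 6, H_2 = 6*11 - 1*6 = 60, H_3 = H_2 * a_3 = 360
print(all(d > 0 for d in dets))   # all positive for this stable polynomial
```
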
IDENTITY
matrix

An identity matrix [I] is a square matrix where all diagonal elements have the value one and all off-diagonal elements have the value zero

  [I] := [A]  with  A_ii = 1 ,  A_ij = 0 for i ≠ j

The name unit matrix is also used for the identity matrix.

INDEFINITE
matrix

A square, real matrix [A] is called indefinite if positive as well as negative values of {X}^T[A]{X} exist, i.e.

  {X}^T[A]{X} > 0 or < 0

depending on the actual vector (column matrix) {X}.

INTEGRATION
of a matrix

The integral of a matrix is the integral of each element

  [C] = ∫[A] dx  with  C_ij = ∫A_ij dx

INVARIANTS
of similar matrices

For matrices which transform by similarity transformations we can determine a number of invariants, i.e. scalars which do not change by the transformation. The number of independent invariants is equal to the order of the matrix, and as any combination of invariants is also an invariant, many different forms are possible. Important invariants are the eigenvalues, the trace, the determinant and the Frobenius norm. The principal invariants are the coefficients of the characteristic polynomium.

INVARIANTS
of symmetric, similar
matrices of order 2

For the square, symmetric matrix [A] of order 2

  [A] = | A_11  A_12 |
        | A_12  A_22 |

the invariants are the trace I_1

  I_1 = A_11 + A_22

and the determinant I_2

  I_2 = A_11 A_22 – A_12²

Taking as an alternative invariant I_3

  I_3 = (I_1)² – 2 I_2 = A_11² + A_22² + 2 A_12²

we get the squared length of the vector {A} contracted from [A] by

  {A}^T = {A_11 , A_22 , √2 A_12}

Setting up the polynomium to find the eigenvalues of [A] we find

  λ² – I_1 λ + I_2 = 0

and again see the importance of the invariants I_1 and I_2, termed the principal invariants.

INVARIANTS
of symmetric, similar
matrices of order 3

For the square, symmetric matrix [A] of order 3

  [A] = | A_11  A_12  A_13 |
        | A_12  A_22  A_23 |
        | A_13  A_23  A_33 |

the invariants are the trace I_1

  I_1 = A_11 + A_22 + A_33

the norm I_2

  I_2 = (A_11 A_22 – A_12²) + (A_22 A_33 – A_23²) + (A_11 A_33 – A_13²)

and the determinant I_3

  I_3 = |[A]|

These three invariants are the principal invariants and they give the characteristic polynomium by

  λ³ – I_1 λ² + I_2 λ – I_3 = 0

The squared length of the vector {A} contracted from [A] by

  {A}^T = {A_11 , A_22 , A_33 , √2 A_12 , √2 A_13 , √2 A_23}

is

  I_4 = A_11² + A_22² + A_33² + 2A_12² + 2A_13² + 2A_23²

related to the principal invariants by

  I_4 = (I_1)² – 2 I_2

and therefore another invariant, equal to the squared Frobenius norm.

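As a numerical check, a minimal NumPy sketch computing I_1, ..., I_4 for a symmetric order-3 matrix and comparing with the characteristic polynomial coefficients from numpy.poly and with the Frobenius norm; the matrix is an arbitrary example.

```python
import numpy as np

A = np.array([[2.0, 0.5, 0.2],
              [0.5, 3.0, 0.1],
              [0.2, 0.1, 1.5]])

I1 = A[0, 0] + A[1, 1] + A[2, 2]
I2 = (A[0, 0]*A[1, 1] - A[0, 1]**2) + (A[1, 1]*A[2, 2] - A[1, 2]**2) + (A[0, 0]*A[2, 2] - A[0, 2]**2)
I3 = np.linalg.det(A)
I4 = I1**2 - 2.0 * I2

# numpy.poly returns the characteristic polynomial lambda^3 - I1 lambda^2 + I2 lambda - I3.
coeffs = np.poly(A)
print(np.allclose(coeffs, [1.0, -I1, I2, -I3]))        # the principal invariants
print(np.isclose(I4, np.linalg.norm(A, 'fro')**2))     # I4 equals the squared Frobenius norm
```
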
INVERSE
of a matrix

The inverse of a square, regular matrix is the square matrix where the product of the two matrices is the identity matrix. The notation [ ]^–1 is used for the inverse

  [A]^–1[A] = [A][A]^–1 = [I]

INVERSE OF A
PARTITIONED
matrix

From the matrix product in partitioned form

  | [A]  [B] | | [E]  [F] |   | [I]  [0] |
  | [C]  [D] | | [G]  [H] | = | [0]  [I] |

follow the four matrix equations

  [A][E] + [B][G] = [I]  ;  [A][F] + [B][H] = [0]
  [C][E] + [D][G] = [0]  ;  [C][F] + [D][H] = [I]

Solving these we obtain, in a first form,

  [E] = ([A] – [B][D]^–1[C])^–1
  [F] = – [E][B][D]^–1
  [G] = – [D]^–1[C][E]
  [H] = [D]^–1 – [D]^–1[C][F]

or, in an alternative form,

  [H] = ([D] – [C][A]^–1[B])^–1
  [G] = – [H][C][A]^–1
  [F] = – [A]^–1[B][H]
  [E] = [A]^–1 – [A]^–1[B][G]

The special case of an upper triangular matrix, i.e. [C] = [0], gives

  [E] = [A]^–1 ,  [F] = – [A]^–1[B][D]^–1 ,  [G] = [0] ,  [H] = [D]^–1

The special case of a symmetric matrix, i.e. [C] = [B]^T, gives

  [E] = ([A] – [B][D]^–1[B]^T)^–1
  [F] = – [E][B][D]^–1 = [G]^T
  [G] = – [D]^–1[B]^T[E]
  [H] = [D]^–1 – [D]^–1[B]^T[F]

or, in the alternative form,

  [H] = ([D] – [B]^T[A]^–1[B])^–1
  [G] = – [H][B]^T[A]^–1 = [F]^T
  [F] = – [A]^–1[B][H]
  [E] = [A]^–1 – [A]^–1[B][G]

The matrices to be inverted are assumed to be regular.

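As a numerical check, a minimal NumPy sketch of the first form of the partitioned inverse, compared with a direct inverse; the block sizes, the random matrix and the diagonal shift used to keep the blocks regular are only assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 2
M = rng.normal(size=(n1 + n2, n1 + n2)) + 5.0 * np.eye(n1 + n2)   # shift keeps the blocks regular
A, B = M[:n1, :n1], M[:n1, n1:]
C, D = M[n1:, :n1], M[n1:, n1:]

Dinv = np.linalg.inv(D)
E = np.linalg.inv(A - B @ Dinv @ C)      # [E] = ([A] - [B][D]^-1[C])^-1
F = -E @ B @ Dinv                        # [F] = -[E][B][D]^-1
G = -Dinv @ C @ E                        # [G] = -[D]^-1[C][E]
H = Dinv - Dinv @ C @ F                  # [H] = [D]^-1 - [D]^-1[C][F]

Minv = np.block([[E, F], [G, H]])
print(np.allclose(Minv, np.linalg.inv(M)))   # True
```
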
INVERSE OF
A PRODUCT
of matrices

The inverse of a product of square, regular matrices is the product of the inverses of the individual multipliers, but in reverse sequence

  ([A][B])^–1 = [B]^–1[A]^–1

It follows directly from

  ([B]^–1[A]^–1)([A][B]) = [I]

INVERSE OF
ORDER TWO

The inverse of a matrix of order two is given by

  | A_11  A_12 |^–1              |  A_22  –A_12 |
  | A_21  A_22 |     = (1/|[A]|) | –A_21   A_11 |

with the determinant given by

  |[A]| = A_11 A_22 – A_21 A_12

INVERSE OF
ORDER THREE

The inverse of a matrix of order three is given by

  | A_11  A_12  A_13 |^–1              | (A_22A_33 – A_32A_23)  (A_32A_13 – A_12A_33)  (A_12A_23 – A_22A_13) |
  | A_21  A_22  A_23 |     = (1/|[A]|) | (A_31A_23 – A_21A_33)  (A_11A_33 – A_31A_13)  (A_21A_13 – A_11A_23) |
  | A_31  A_32  A_33 |                 | (A_21A_32 – A_31A_22)  (A_31A_12 – A_11A_32)  (A_11A_22 – A_21A_12) |

with the determinant given by

  |[A]| = A_11A_22A_33 + A_12A_23A_31 + A_13A_21A_32 – A_31A_22A_13 – A_32A_23A_11 – A_33A_21A_12

INVERSE OF
TRANSPOSED
matrix

The inverse and the transpose transformations can be interchanged

  ([A]^T)^–1 = ([A]^–1)^T = [A]^–T

from which follows the definition of the symbol [ ]^–T.

JACOBI
matrix

The Jacobi matrix [J] is a square functional matrix. We define it here as the matrix containing the derivatives of the elements of a vector {A} with respect to the elements of a vector {B}, both of order n

  [J] = ∂{A}/∂{B}  with  J_ij = ∂A_i/∂B_j

JACOBIAN
determinant

The Jacobian J is the determinant of the Jacobi matrix, i.e.

  J = |[J]|

and thus a scalar.

JORDAN BLOCKS

A Jordan block is a square upper-triangular matrix of order equal to the multiplicity of an eigenvalue with a single corresponding eigenvector. All diagonal elements are the eigenvalue, all elements of the first upper codiagonal are 1, and the remaining elements are zero. Thus the Jordan block [J_λ] of order 3 corresponding to the eigenvalue λ is

  [J_λ] = | λ  1  0 |
          | 0  λ  1 |
          | 0  0  λ |

Multiple eigenvalues with linearly independent eigenvectors belong to different Jordan blocks.

Jordan blocks of order 1 are most common, as this is the result for eigenvalue problems described by symmetric matrices.

JORDAN FORM

The Jordan form of a square matrix [A] is the similar matrix [J] consisting of Jordan blocks along the diagonal (block diagonal), and with remaining elements equal to zero.

Only when we have multiple eigenvalues with a single eigenvector will the Jordan form be different from a pure diagonal form. The Jordan form represents the closest-to-diagonal outcome of a similarity transformation.

LAPLACIAN
EXPANSION
of determinants

See determinants by minors/cofactors.

LEFT
eigenvector

The left eigenvector {Ψ}_i^T (row matrix) corresponding to the eigenvalue λ_i is defined by

  {Ψ}_i^T ([A] – λ_i[B]) = {0}^T

see eigenvalue problem.

LENGTH
of a vector

The length |{A}| of a vector is the square root of the scalar product of the vector with itself

  |{A}| = √({A}^T{A})

A geometric vector has an invariant length, but this does not hold for all algebraic vector definitions.

LINEAR
DEPENDENCE /
LINEAR
INDEPENDENCE

Consider a matrix [A] of order m × n, constituting the n vectors {A}_i for i = 1, 2, ..., n. If there exists a non-zero vector {B} of order n such that

  [A]{B} = [{A}_1 {A}_2 ... {A}_n]{B} = {0}

then the vectors {A}_i are said to be linearly dependent. The vector {B} contains a set of linear combination factors.

If on the other hand

  [A]{B} = {0} only for {B} = {0}

then the vectors {A}_i are said to be linearly independent.

MEMBERS
of a matrix

See elements of a matrix.

MINOR
of a matrix element

The minor of a matrix element is a determinant, i.e. a scalar. The actual square matrix corresponding to this determinant is obtained by omitting the row and the column corresponding to the actual element. Thus, for a matrix of order 3, the minor corresponding to the element A_12 becomes

  Minor(A_12) = | A_21  A_23 | = A_21 A_33 – A_31 A_23
                | A_31  A_33 |

MODAL
matrix

The modal matrix corresponding to an eigenvalue problem is a square matrix constituting all the linearly independent eigenvectors

  [Φ] = [{Φ}_1 {Φ}_2 ... {Φ}_n]

and the generalized eigenvalue problem can then be stated as

  [A][Φ] – [B][Φ][Γ] = [0]

Note that the diagonal matrix [Γ] of eigenvalues must be post-multiplied.

MULTIPLICATION
of two matrices

The product of two matrices is a matrix where the resulting element ij is the scalar product of the i-th row of the first matrix with the j-th column of the second matrix

  [C] = [A][B]  with  C_ij = Σ_{k=1}^{K} A_ik B_kj

The number of columns in the first matrix must be equal to the number of rows in the second matrix (here K).

MULTIPLICATION
BY SCALAR

A matrix is multiplied by a scalar by multiplying each element by the scalar

  [C] = b[A]  with  C_ij = b A_ij

MULTIPLICITY
OF EIGENVALUES

In eigenvalue problems the same eigenvalue may be a multiple solution, mostly (but not always) corresponding to linearly independent eigenvectors. As an example, a bimodal solution is a solution where two eigenvectors correspond to the same eigenvalue. Multiplicity of eigenvalues is also named algebraic multiplicity.

For non-symmetric eigenvalue problems multiple eigenvalues may correspond to the same eigenvector. We then talk about, e.g., a double eigenvalue/eigenvector solution (in contrast to a bimodal solution, where only the eigenvalue is the same). This multiplicity is described by the geometric multiplicity of the eigenvalue. For a specific eigenvalue we have

  1 ≤ geometric multiplicity ≤ algebraic multiplicity

Note that the geometric multiplicity of an eigenvalue counts the number of linearly independent eigenvectors for this eigenvalue, and not the number of times that the eigenvector is a solution.

NEGATIVE DEFINITE
matrix

A square, real matrix [A] is called negative or negative definite if for any non-zero vector (column matrix) {X} we have

  {X}^T[A]{X} < 0

The matrix is called negative semi-definite if

  {X}^T[A]{X} ≤ 0

NORMALIZATION
of a vector

Eigenvectors can be multiplied by an arbitrary constant (even a complex constant). Thus we have the possibility of a convenient scaling, and often we choose the weighted norm. Here we scale the vector {A}_i to the normalized vector {Φ}_i

  {Φ}_i = {A}_i / √({A}_i^T [B] {A}_i)

by which we obtain

  {Φ}_i^T [B] {Φ}_i = 1

Alternative normalizations are by other norms, such as the 2-norm

  {Φ}_i = {A}_i / √({A}_i^T {A}_i)

or by the ∞-norm

  {Φ}_i = {A}_i / (Max_j |A_j|)

NULL
matrix

A null matrix (symbolized [0]) is a matrix where all elements have the value zero

  [0] := [A]  with  A_ij = 0 for all ij

A null matrix is also called a zero matrix. The null vector is a special case.

ONE
matrix

A one matrix (symbolized [1]) is a matrix where all elements have the value one

  [1] := [A]  with  A_ij = 1 for all ij

The one vector is a special case. Note the contrast to the identity (unit) matrix [I], which is a diagonal matrix.

ORDER
of a matrix

The order of a matrix is the (number of rows) × (number of columns). Usually the letters m × n are used; a row matrix then has the order 1 × n, while a column matrix has the order m × 1. For square matrices a single number gives the order. The order of a matrix is also called the dimensions or the size of the matrix.

ORTHOGONALITY
conditions

For an eigenvalue problem ([A] – λ_i[B]){Φ}_i = {0} with symmetric matrices [A] and [B], the biorthogonality conditions simplify to

  {Φ}_j^T [B] {Φ}_i = 0 ,  {Φ}_j^T [A] {Φ}_i = 0

for non-equal eigenvalues, i.e. λ_i ≠ λ_j.

For standard form eigenvalue problems with [A] symmetric this further simplifies to

  {Φ}_j^T {Φ}_i = 0 ,  {Φ}_j^T [A] {Φ}_i = 0  for λ_i ≠ λ_j

Using normalization of the eigenvectors we can obtain

  {Φ}_i^T [B] {Φ}_i = 1  or  {Φ}_i^T {Φ}_i = 1

and thus

  {Φ}_i^T [A] {Φ}_i = λ_i

Orthogonal, normalized eigenvectors are termed orthonormal.

ORTHOGONAL
transformations

An orthogonal transformation of a square matrix [A] to a square matrix [B] of the same order is by the orthogonal transformation matrix [T]

  [T]^–1 = [T]^T

and thus the transformation is both a congruence transformation and a similarity transformation

  [B] = [T]^T[A][T] = [T]^–1[A][T]

Matrices [A] and [B] are said to be orthogonally similar, and have the same rank, the same eigenvalues, the same trace and the same determinant (the same invariants).

If matrix [A] is symmetric, matrix [B] is also symmetric, which does not hold generally for similar matrices.

ORTHONORMAL

An orthonormal set of vectors {X}_i fulfills the conditions

  {X}_i^T [A] {X}_j = 0 for i ≠ j  and  = 1 for i = j

PARTITIONING
of matrices

Partitioning of matrices is a very important tool to get closer insight and overview. In the example

  [A] = | [A]_11  [A]_12 |
        | [A]_21  [A]_22 |

we see that the submatrices are given indices exactly like the matrix elements themselves.

Multiplication on submatrix level is identical to multiplication on element level. For an example see inverse of a partitioned matrix.

POSITIVE DEFINITE
matrix

A square, real matrix [A] is called positive or positive definite if for any non-zero vector (column matrix) {X} we have

  {X}^T[A]{X} > 0

The matrix is called positive semi-definite if

  {X}^T[A]{X} ≥ 0

POSITIVE DEFINITE
matrix conditions

The conditions for a square matrix [A] to be positive definite can be stated in many alternative forms. From the Routh-Hurwitz-Lienard-Chipart theorem we can, directly in terms of Hurwitz determinants, obtain the necessary and sufficient conditions for eigenvalues with positive real part.

For a matrix of order 2 we get that

  [A] = | A_11  A_12 |
        | A_21  A_22 |

has positive real part of all eigenvalues if and only if

  (A_11 + A_22) > 0  and  A_11A_22 – A_12A_21 > 0

and the conditions for a symmetric matrix (A_21 = A_12) to be positive definite are then

  A_11 > 0 ,  A_22 > 0  and  A_11A_22 – A_12² > 0

For a matrix of order 3 we get that

  [A] = | A_11  A_12  A_13 |
        | A_21  A_22  A_23 |
        | A_31  A_32  A_33 |

has positive real part of all eigenvalues if and only if

  I_1 = (A_11 + A_22 + A_33) > 0
  I_2 = ((A_11A_22 – A_21A_12) + (A_22A_33 – A_32A_23) + (A_11A_33 – A_31A_13)) > 0
  I_3 = |[A]| > 0  and  I_1I_2 – I_3 > 0

and the conditions for a symmetric matrix to be positive definite will then be

  A_11 > 0 ,  A_22 > 0 ,  A_33 > 0
  A_11A_22 – A_12² > 0 ,  A_22A_33 – A_23² > 0 ,  A_11A_33 – A_13² > 0 ,  |[A]| > 0

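As a numerical check, a minimal NumPy sketch of the symmetric order-3 conditions above, compared with the eigenvalue definition of positive definiteness; the helper name and the test matrix are only illustrative.

```python
import numpy as np

def positive_definite_order3(A):
    """The conditions listed above for a symmetric 3 x 3 matrix [A]."""
    diag_ok = A[0, 0] > 0 and A[1, 1] > 0 and A[2, 2] > 0
    minors_ok = (A[0, 0]*A[1, 1] - A[0, 1]**2 > 0 and
                 A[1, 1]*A[2, 2] - A[1, 2]**2 > 0 and
                 A[0, 0]*A[2, 2] - A[0, 2]**2 > 0)
    return diag_ok and minors_ok and np.linalg.det(A) > 0

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

print(positive_definite_order3(A))          # True
print(np.all(np.linalg.eigvalsh(A) > 0))    # definition: all eigenvalues positive
```
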
POSITIVE DEFINITE
SUM
of matrices

Assume that the two square, real matrices [A] and [B] of the same order are positive definite; then their sum is also positive definite. Using the symbol ≻ 0 for positive definite, we have

  [A] ≻ 0 , [B] ≻ 0  ⇒  ([A] + [B]) ≻ 0

It follows directly from the definition

  {X}^T([A] + [B]){X} = {X}^T[A]{X} + {X}^T[B]{X} > 0

because both terms are positive for {X} ≠ {0}.

From this it also follows directly that

  (α[A] + (1 – α)[B]) ≻ 0  for 0 ≤ α ≤ 1

which implies that [A] ≻ 0 is a convex condition.

Identical relations hold for negative definite matrices.

POWER
of a matrix

The power of a square matrix [A] is symbolized by

  [A]^0 = [I]  ;  [A]^p = [A][A]...[A]  (p times)
  [A]^–p = [A]^–1[A]^–1...[A]^–1  (p times)
  [A]^p [A]^r = [A]^(p+r)  ;  ([A]^p)^r = [A]^(pr)

PRINCIPAL
INVARIANTS

The principal invariants are the coefficients of the characteristic polynomium for similar matrices.

PRINCIPAL
SUBMATRIX

The principal submatrices of the square matrix [A] of order n are the n square matrices of order k (1 ≤ k ≤ n) found in the upper left corner of [A].

PRODUCT
of two matrices

See multiplication of two matrices.

PRODUCTS
of two vectors

Three different products of two vectors are defined: the scalar product or dot product, resulting in a scalar; the vector product or cross product, resulting in a vector and especially used for vectors of order three; and finally the dyadic product, resulting in a matrix.

PROJECTION
matrix

A projection matrix different from the identity matrix [I] is a square, singular matrix that is unchanged when multiplied by itself

  [P][P] = [P] ,  [P]^–1 non-existent

PSEUDOINVERSE
of a matrix

The pseudoinverse [A+] of a rectangular matrix [A] of order m × n always exists. When [A] is a regular matrix the pseudoinverse is the same as the inverse. Given the singular value decomposition of [A] by

  [A] = [T_1][B][T_2]^T

then with the diagonal matrix [C] of order n × m defined from the diagonal matrix [B] of order m × n by

  [C] from C_ii = 1/B_ii for B_ii ≠ 0  (other C_ij = 0)

the pseudoinverse [A+] is given by the product

  [A+] = [T_2][C][T_1]^T

Case 1: [A] is an n × m matrix where n > m. The solution to [A]{X} = {B} with the objective of minimizing the error ({e}^T{e} , {e} = [A]{X} – {B}) is given by

  {X} = ([A]^T[A])^–1 [A]^T{B}

Case 2: [A] is an n × m matrix where n < m. The solution to [A]{X} = {B} with the objective of minimizing the length of the solution ({X}^T{X}) is given by

  {X} = [A]^T ([A][A]^T)^–1 {B}

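As a numerical illustration, a minimal NumPy sketch that builds the pseudoinverse from the singular value decomposition and compares it with numpy.linalg.pinv and with the least-squares formula of Case 1; the random matrix (full column rank here) is only an example.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))          # more rows than columns, generically full column rank
b = rng.normal(size=5)

# Pseudoinverse from the singular value decomposition [A] = [T1][B][T2]^T.
T1, s, T2t = np.linalg.svd(A, full_matrices=True)
C = np.zeros((A.shape[1], A.shape[0]))          # order n x m
C[:len(s), :len(s)] = np.diag(1.0 / s)          # C_ii = 1 / B_ii for B_ii != 0
A_plus = T2t.T @ C @ T1.T                       # [A+] = [T2][C][T1]^T

print(np.allclose(A_plus, np.linalg.pinv(A)))              # same as the library pseudoinverse
x = A_plus @ b
print(np.allclose(x, np.linalg.inv(A.T @ A) @ A.T @ b))    # Case 1 least-squares solution
```
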
QUADRATIC
FORM

By a symmetric matrix [A] of order n we define the associated quadratic form

  {X}^T[A]{X}

that gives a homogeneous, second order polynomial in the n parameters constituting the vector {X}. The quadratic form is used in many applications, and thus knowledge about its transformations, definiteness etc. is of vital importance.

RANK
of a matrix

The rank of a matrix is equal to the number of linearly independent rows (or columns) of the matrix. The rank is not changed by the transpose transformation.

From a matrix [A] of order (m × n) we can, by omitting a number of rows and/or a number of columns, get square matrices of any order from 1 to the minimum of m, n. Normally there will be several different matrices of each order.

The rank r is defined by the largest order of these square matrices for which the determinant is non-zero, i.e. the order of the "largest" regular matrix we can extract from [A].

Only a zero matrix has the rank 0. The rank of any other matrix will be

  1 ≤ r ≤ min(m, n)

If r = min(m, n) we say that the matrix has full rank.

REAL
EIGENVALUES

With [A] and [B] being two real and symmetric matrices, then for the eigenvalue problem

  ([A] – λ_i[B]){Φ}_i = {0}

- if λ_i is complex, then {Φ}_i is also complex ([A] and [B] regular)

- if λ_i , {Φ}_i is a complex solution pair, then the complex conjugated pair λ̄_i , {Φ̄}_i is also a solution.

The condition derived under biorthogonality conditions for these two pairs is

  (λ_i – λ̄_i) ({Φ̄}_i^T [B] {Φ}_i) = 0

which expressed in real and imaginary parts is

  2 Im(λ_i) (Re({Φ}_i^T) [B] Re({Φ}_i) + Im({Φ}_i^T) [B] Im({Φ}_i)) = 0

It now follows that if [B] is a positive definite matrix, then Im(λ_i) = 0 and we have real eigenvalues.

REGULAR
matrix

A non-singular matrix, see singular matrix.

RIGHT
eigenvector

The right eigenvector {Φ}_i (column matrix) corresponding to the eigenvalue λ_i is defined by

  ([A] – λ_i[B]){Φ}_i = {0}

see eigenvalue problem.

ROTATIONAL
transformation
matrices

For two dimensional problems we shall list some important orthogonal transformation matrices. The elements of these matrices involve trigonometric functions of the angle θ defined in the figure. For short notation we also define

  c_1 = cos θ ,  s_1 = sin θ
  c_2 = cos 2θ , s_2 = sin 2θ
  c_4 = cos 4θ , s_4 = sin 4θ

(Figure: the two Cartesian coordinate systems with the definition of the angle θ.)

We then have for rotation of a geometric vector {V} of order 2

  {V}_y = [Γ]{V}_x  with  [Γ] = |  c_1  s_1 | ,  [Γ]^–1 = [Γ]^T
                                | –s_1  c_1 |

For a symmetric matrix [A] of order 2 × 2, contracted with the √2-factor to the vector {A}^T = {A_11 , A_22 , √2 A_12}, we have

  {A}_y = [T]{A}_x

with [T]^–1 = [T]^T and

  [T] = (1/2) | 1 + c_2   1 – c_2    √2 s_2 |
              | 1 – c_2   1 + c_2   –√2 s_2 |
              | –√2 s_2   √2 s_2     2 c_2  |

For a symmetric matrix [B] of order 3 × 3, contracted with the √2-factor to the vector {B}^T = {B_11 , B_22 , B_33 , √2 B_12 , √2 B_13 , √2 B_23}, we have

  {B}_y = [R]{B}_x

with [R]^–1 = [R]^T and [R] = (1/8) ·

  | 3+4c_2+c_4    3–4c_2+c_4    2–2c_4        √2–√2c_4       4s_2+2s_4    4s_2–2s_4 |
  | 3–4c_2+c_4    3+4c_2+c_4    2–2c_4        √2–√2c_4      –4s_2+2s_4   –4s_2–2s_4 |
  | 2–2c_4        2–2c_4        4+4c_4       –2√2+2√2c_4    –4s_4         4s_4      |
  | √2–√2c_4      √2–√2c_4     –2√2+2√2c_4    6+2c_4        –2√2 s_4      2√2 s_4   |
  | –4s_2–2s_4    4s_2–2s_4     4s_4          2√2 s_4        4c_2+4c_4    4c_2–4c_4 |
  | –4s_2+2s_4    4s_2+2s_4    –4s_4         –2√2 s_4        4c_2–4c_4    4c_2+4c_4 |

Note that the listed orthogonal transformation matrices [Γ], [T] and [R] only refer to two dimensional problems, where the rotation is specified by a single parameter (the angle θ).

ROW
matrix

A row matrix is a matrix with only one row, i.e. order 1 × n. The notation { }^T is used for a row matrix ({ } for a column matrix and T for transposed). The name row vector or just vector is also used.

SCALAR PRODUCT
of two vectors
(standard Euclidean norm)

The scalar product of two vectors {A} and {B} of the same order n results in a scalar C

  C = {A}^T{B} = Σ_{i=1}^{n} A_i B_i

The scalar product is also called the dot product.

SCALAR PRODUCT
of two complex vectors
(standard norm)

The scalar product of two complex vectors {A} and {B} of the same order n involves the conjugate transpose transformation

  C = {A}^H{B} = Σ_{i=1}^{n} (Re(A_i) – i Im(A_i)) (Re(B_i) + i Im(B_i))

With this definition the length of a complex vector {A} is obtained by

  |{A}|² = {A}^H{A} = Σ_{i=1}^{n} ((Re(A_i))² + (Im(A_i))²)

SIMILARITY
transformations

A similarity transformation of a square matrix [A] to a square matrix [B] of the same order is by the regular transformation matrix [T] of the same order

  [B] = [T]^–1[A][T]

Matrices [A] and [B] are said to be similar matrices; they have the same rank and the same eigenvalues, i.e. the same invariants, but different eigenvectors, related by [T]. A similarity transformation is also an equivalence transformation.

SINGULAR
matrix

A singular matrix is a square matrix for which the corresponding determinant has the value zero, i.e.

  [A] is singular if |[A]| = 0 , i.e. [A]^–1 does not exist

If not singular, the matrix is called regular or non-singular.

SINGULAR VALUE
DECOMPOSITION

Any matrix [A] of order m × n can be factorized into the product of an orthogonal matrix [T_1] of order m, a rectangular, diagonal matrix [B] of order m × n and an orthogonal matrix [T_2]^T of order n

  [A] = [T_1][B][T_2]^T

The r singular values (positive values) on the diagonal of [B] are the square roots of the non-zero eigenvalues of both [A][A]^T and [A]^T[A]; the columns of [T_1] are the eigenvectors of [A][A]^T and the columns of [T_2] are the eigenvectors of [A]^T[A].

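As a numerical illustration, a minimal NumPy sketch of the decomposition and of the stated relation between the singular values and the eigenvalues of [A]^T[A]; the random matrix is only an example.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3))                        # a matrix of order m x n

T1, s, T2t = np.linalg.svd(A, full_matrices=True)  # [A] = [T1][B][T2]^T, s holds the singular values
B = np.zeros(A.shape)
B[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, T1 @ B @ T2t))                # the factorization itself
print(np.allclose(np.sort(s**2),                   # squared singular values equal the
      np.sort(np.linalg.eigvalsh(A.T @ A))))       # eigenvalues of [A]^T[A]
```
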
SIZE
of a matrix

See order of a matrix.

SKEW
matrix

A skew matrix is a specific skew-symmetric matrix of order 3, defined to have a more workable notation for the vector product of two vectors of order 3. From the vector {A} the corresponding skew matrix is defined by

  [Ã] = |  0    –A_3   A_2 |
        |  A_3    0   –A_1 |
        | –A_2   A_1    0  |

by which {A} × {B} = [Ã]{B}.

The tilde superscript is normally used to indicate this specific matrix. From {B} × {A} = – {A} × {B} follows

  [B̃]{A} = – [Ã]{B}

SKEW SYMMETRIC
matrix

A square matrix is termed skew-symmetric if the transposed transformation only changes the sign of the matrix

  [A]^T = – [A] , i.e. A_ji = – A_ij for all ij  (A_ii = 0)

The skew-symmetric part of a square matrix [B] is obtained by the difference (1/2)([B] – [B]^T).

SPECTRAL
DECOMPOSITION
of a symmetric matrix

For a symmetric matrix a spectral decomposition is possible. The eigenvalues λ_i of the matrix [A] are factors in this decomposition

  [A] = Σ_{i=1}^{n} λ_i {Φ}_i {Φ}_i^T

where {Φ}_i is the eigenvector corresponding to λ_i (orthonormal eigenvectors).

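As a numerical check, a minimal NumPy sketch of the spectral decomposition, using the orthonormal eigenvectors from numpy.linalg.eigh; the matrix is an arbitrary example.

```python
import numpy as np

A = np.array([[2.0, 0.4, 0.0],
              [0.4, 1.0, 0.3],
              [0.0, 0.3, 3.0]])

lam, Phi = np.linalg.eigh(A)       # eigenvalues and orthonormal eigenvectors (columns)

# [A] = sum_i lambda_i {Phi}_i {Phi}_i^T
A_rebuilt = sum(lam[i] * np.outer(Phi[:, i], Phi[:, i]) for i in range(3))
print(np.allclose(A, A_rebuilt))   # True
```
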
SQUARE
matrix

A square matrix is a matrix where the number of rows equals the number of columns; thus the order of the matrix is n × n or simply n.

STANDARD FORM
for eigenvalue problem

The standard form for an eigenvalue problem is

  [A]{Φ}_i = λ_i{Φ}_i  or  {Ψ}_i^T[A] = λ_i{Ψ}_i^T

see eigenvalue problem.

SUBTRACTION
of matrices

Matrices are subtracted by subtracting the corresponding elements

  [C] = [A] – [B]  with  C_ij = A_ij – B_ij

The matrices must have the same order.

SYMMETRIC
EIGENVALUE
PROBLEM

With [A] and [B] being two symmetric matrices of order n, the left eigenvectors will be equal to the right eigenvectors. From the description of the eigenvalue problem this means

  {Ψ}_i = {Φ}_i

and thus the biorthogonality conditions simplify to the orthogonality conditions. The symmetric eigenvalue problem has only real eigenvalues and real eigenvectors.

SYMMETRIC
matrix

A square matrix is termed symmetric if the transposed transformation does not change the matrix

  [A]^T = [A] , i.e. A_ji = A_ij for all ij

The symmetric part of a square matrix [B] is obtained by the sum (1/2)([B] + [B]^T).

TRACE
of a square matrix

The trace of a square matrix [A] of order n is the sum of the diagonal elements

  trace([A]) = Σ_{i=1}^{n} A_ii

TRANSFORMATION
matrices

The different transformations like equivalence, congruence, similarity and orthogonal are characterized by the involved square, regular transformation matrices. The equivalence transformation

  [B] = [T_1][A][T_2]

is a congruence transformation if [T_1] = [T_2]^T and it is a similarity transformation if [T_1] = [T_2]^–1. The orthogonal transformation, which at the same time is a congruence and a similarity transformation, thus assumes [T_1] = [T_2]^T = [T_2]^–1.

TRANSPOSE
of a matrix

The transpose of a matrix is the matrix with interchanged rows and columns. The superscript T is used as notation for this transformation

  [B] = [A]^T  with  B_ij = A_ji for all ij

The transpose of a row matrix is a column matrix, and vice versa. The transpose of a transposed matrix is the matrix itself

  ([A]^T)^T = [A]

TRANSPOSE
OF A PRODUCT
of matrices

The transpose of a product of matrices is the product of the transposes of the individual multipliers, but in reverse sequence

  ([A][B])^T = [B]^T[A]^T

It follows directly from

  C_ij = Σ_{k=1}^{K} A_ik B_kj  and  C_ji = Σ_{k=1}^{K} A_jk B_ki = Σ_{k=1}^{K} B_ki A_jk

TRIANGULAR
matrix

A triangular matrix is a square matrix with only zeros above the diagonal (lower triangular matrix)

  [L]  with  L_ij = 0 for j > i

or below the diagonal (upper triangular matrix)

  [U]  with  U_ij = 0 for j < i

TRIANGULARIZATION
of a matrix

See factorization of a matrix.

UNIT
matrix

See identity matrix.

VECTORS

As a common name for row matrices and column matrices, the name vector is used.

Some authors distinguish between geometric vectors (oriented pieces of a line) of order two or three and algebraic vectors. Algebraic vectors are column matrices and row matrices of any order.

VECTOR PRODUCT
of two vectors

The vector product of two vectors {A} and {B}, both of the order 3, is a vector {C} defined by

  {C} = {A} × {B}  with  C_1 = A_2B_3 – A_3B_2
                         C_2 = A_3B_1 – A_1B_3
                         C_3 = A_1B_2 – A_2B_1

The vector product is also called the cross product. See skew matrix for an easier notation.

ZERO
matrix

See null matrix.