Rank (in linear algebra)
In linear algebra, the column rank (row rank respectively) of a matrix A with real
entries is defined to be the maximal number of columns (rows respectively) of A
which are linearly independent.
The column rank and the row rank are indeed equal; this common number is simply
called the rank of A. It is commonly denoted by either rk(A) or rank A.
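For illustration, a minimal Python (NumPy) sketch of this fact: for a concrete matrix, the number of linearly independent columns computed numerically agrees with the number of linearly independent rows, so rank(A) = rank(A^T). The matrix below is chosen only as an example.

    import numpy as np

    # A 3-by-5 matrix whose third row is the sum of the first two,
    # so it has exactly two linearly independent rows.
    A = np.array([[1., 2., 0., 1., 3.],
                  [0., 1., 1., 2., 0.],
                  [1., 3., 1., 3., 3.]])

    print(np.linalg.matrix_rank(A))    # 2 -> column rank of A
    print(np.linalg.matrix_rank(A.T))  # 2 -> row rank of A, equal to the column rank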
Alternative definitions
The maximal number of linearly independent columns of the m-by-n matrix A with
entries in the field F is equal to the dimension of the column space of A (the column
space being the subspace of F^m generated by the columns of A).
Alternatively and equivalently, we can define the rank of A as the dimension of the
row space of A.
If one considers the matrix A as a linear map
f : F^n → F^m
with the rule
f(x) = Ax
then the rank of A can also be defined as the dimension of the image of f (see linear
map for a discussion of image and kernel). This definition has the advantage that it
can be applied to any linear map without the need for a specific matrix. The rank can
also be defined as n minus the dimension of the kernel of f; the rank-nullity theorem
states that this is the same as the dimension of the image of f.
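To make this concrete, the following NumPy sketch (the matrix and tolerance below are illustrative choices) computes the rank as the number of non-negligible singular values and reads a basis of the kernel of f off the same decomposition, so that rank plus nullity equals n:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.]])       # rank 1: the second row is twice the first
    m, n = A.shape

    U, s, Vt = np.linalg.svd(A)
    tol = s.max() * max(m, n) * np.finfo(A.dtype).eps
    rank = int(np.sum(s > tol))

    # Rows of Vt from index `rank` onward span the kernel of f(x) = Ax.
    kernel_basis = Vt[rank:].T         # n-by-(n - rank)

    print(rank)                        # 1: dimension of the image of f
    print(n - rank)                    # 2: dimension of the kernel (the nullity)
    print(np.allclose(A @ kernel_basis, 0))  # True: kernel vectors map to zero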
Properties
We assume in what follows that A is an m-by-n matrix with real entries that describes a
linear map f as above.

- Only the zero matrix has rank 0.
- The rank of A is at most min(m, n).
- f is injective if and only if A has rank n (in this case, we say that A has full
  column rank).
- f is surjective if and only if A has rank m (in this case, we say that A has full
  row rank).
- If A is a square matrix (i.e., m = n), then A is invertible if and only if A has
  rank n (we say that A has full rank).
- If B is any n-by-k matrix, then the rank of AB is at most the minimum of the rank
  of A and the rank of B. As an example of the "<" case, consider the product

      [1 0] [0 0]   [0 0]
      [0 0] [1 0] = [0 0]

  Both factors have rank 1, but the product has rank 0 (this case is also checked
  numerically in the sketch after this list).
- If B is an n-by-k matrix with rank n, then AB has the same rank as A.
- If C is an l-by-m matrix with rank m, then CA has the same rank as A.
- The rank of A is equal to r if and only if there exist an invertible m-by-m matrix X
  and an invertible n-by-n matrix Y such that

      XAY = [ Ir 0 ]
            [ 0  0 ]

  where Ir denotes the r-by-r identity matrix.
- The rank of a matrix plus the nullity of the matrix equals the number of columns of
  the matrix (this is the "rank theorem" or the "rank-nullity theorem").
Computation
The easiest way to compute the rank of a matrix A is to use Gaussian elimination. The
row echelon form of A produced by the Gauss algorithm has the same rank as A, and its
rank can be read off as the number of non-zero rows.
Consider for example the 4-by-4 matrix

    [ 2  4  1  3 ]
    [-1 -2  1  0 ]
    [ 0  0  2  2 ]
    [ 3  6  2  5 ]

We see that the second column is twice the first column, and that the fourth column
equals the sum of the first and the third. The first and the third columns are linearly
independent, so the rank of A is two. This can be confirmed with the Gauss algorithm,
which produces the following row echelon form of A:

    [ 1  2  0  1 ]
    [ 0  0  1  1 ]
    [ 0  0  0  0 ]
    [ 0  0  0  0 ]

which has two non-zero rows.
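This procedure translates directly into code. The sketch below (the helper name rank_by_elimination and the tolerance are illustrative choices) row-reduces a copy of the matrix with partial pivoting and counts the non-zero rows:

    import numpy as np

    def rank_by_elimination(A, tol=1e-12):
        """Rank via Gaussian elimination: reduce to row echelon form
        and count rows with a pivot larger than `tol` in magnitude."""
        R = np.array(A, dtype=float)
        m, n = R.shape
        row = 0
        for col in range(n):
            if row == m:
                break
            # Partial pivoting: take the largest remaining entry in this column.
            pivot = row + np.argmax(np.abs(R[row:, col]))
            if abs(R[pivot, col]) <= tol:
                continue                       # no pivot in this column
            R[[row, pivot]] = R[[pivot, row]]  # swap rows
            R[row + 1:] -= np.outer(R[row + 1:, col] / R[row, col], R[row])
            row += 1                           # one more non-zero row
        return row

    A = np.array([[ 2,  4, 1, 3],
                  [-1, -2, 1, 0],
                  [ 0,  0, 2, 2],
                  [ 3,  6, 2, 5]])
    print(rank_by_elimination(A))   # 2, matching the hand computation above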
When applied to floating-point computations on computers, basic Gaussian
elimination (LU decomposition) can be unreliable, and a rank-revealing
decomposition should be used instead. An effective alternative is the singular value
decomposition (SVD), but there are other less expensive choices, such as QR
decomposition with pivoting, which are still more numerically robust than Gaussian
elimination. Numerical determination of rank requires a criterion for deciding when a
value, such as a singular value from the SVD, should be treated as zero, a practical
choice which depends on both the matrix and the application.
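In NumPy, for example, numpy.linalg.matrix_rank applies such a criterion: it computes the singular values and counts those above a tolerance, which can also be supplied explicitly. The nearly rank-deficient matrix below is an illustrative construction:

    import numpy as np

    # Build a 3-by-5 matrix whose third row is the sum of the first two,
    # perturbed by noise of size about 1e-10.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 5))
    A[2] = A[0] + A[1] + 1e-10 * rng.standard_normal(5)

    s = np.linalg.svd(A, compute_uv=False)
    print(s)                                   # the smallest singular value is tiny (~1e-10)

    print(np.linalg.matrix_rank(A))            # 3 with the default tolerance...
    print(np.linalg.matrix_rank(A, tol=1e-8))  # ...but 2 if values near 1e-10 count as zero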