Daniel Zhou
[email protected]
Basic matrix products
(For the sake of this introductory class, we will consider only the case for entries that are real
numbers. Cases for complex entries may also be discussed if we find the time for it.)
If A is an m x n matrix (m rows and n columns) and B is an n x p matrix, then AB is an m x p
matrix. Conversely, the product is undefined when the number of columns of matrix A is not equal
to the number of rows of matrix B. (For a quick check, remember: the number of entries in each
row of a matrix equals its number of columns, and the number of entries in each column equals its
number of rows.)
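As a small sketch of this shape rule (the matrices here are hypothetical examples, not from the notes), NumPy enforces exactly the column/row matching described above:

```python
import numpy as np

# Hypothetical example matrices: A is 2 x 3, B is 3 x 4.
A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

C = A @ B       # defined: columns of A (3) == rows of B (3)
print(C.shape)  # an m x p, i.e. 2 x 4, matrix

# Reversing the operands asks for a (3 x 4) times (2 x 3) product,
# which is undefined because 4 != 2, so NumPy raises an error.
try:
    B @ A
except ValueError as err:
    print("undefined product:", err)
```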
Matrix partitioning, in which a large matrix is viewed as an array of sub-matrices (blocks), is
commonly used to simplify calculations in real-life computing, and taking the product of a
partitioned matrix follows naturally from the ordinary product. However, keep in mind that any
partitioning of a defined product of large matrices must itself have defined sub-products for the
sub-matrices. Taking the determinant of the larger matrix in terms of its sub-matrices also works,
as long as the sub-matrices are square. (Elementary row operations often do not carry over,
because the partitioned blocks are of different dimensions.)
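A minimal NumPy sketch of block multiplication (the 4 x 4 matrices and the 2 x 2 partition are hypothetical choices): each sub-product is defined, and the blocks combine exactly as scalar entries would.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Partition each 4 x 4 matrix into four 2 x 2 blocks.
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Block multiplication: every sub-product here is 2 x 2 times 2 x 2,
# so each one is defined, and the block formula mirrors the scalar one.
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
C_blocked = np.vstack([top, bottom])

assert np.allclose(C_blocked, A @ B)
```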
Start by considering 1 x 1 matrix multiplication (scalar multiplication). Noting this, adding any
number of entries to the row of the second matrix does not change the fact that the product is
defined:
The same happens when we add any number of entries to the column of the first matrix. So, by
considering this type of multiplication, we can see the general product as a series of scalar
multiplications. This is very much the case for scalar multiplication of vectors, to which we can
relate the above examples:
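The build-up described above can be sketched numerically (the entries are hypothetical): each entry-by-entry product is a 1 x 1 multiplication, and a row-times-column product is simply their sum.

```python
import numpy as np

row = np.array([1.0, 2.0, 3.0])  # entries of a single row
col = np.array([4.0, 5.0, 6.0])  # entries of a single column

# A series of 1 x 1 (scalar) multiplications...
partials = [r * c for r, c in zip(row, col)]

# ...whose sum equals the dot product of the two vectors.
assert sum(partials) == np.dot(row, col) == 32.0
```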
Scalar multiplication of vectors, as seen above, is usually extended to matrices by means of
column vector representation. (There are other reasons for doing so, such as representing linear
functionals, which map vectors onto scalars, but that is beyond the scope of this course.)
A scalar multiplication of vectors (commonly called a dot product or inner product; the reason
for this name will be made clear in the next section on tensor products) yields a 1 x 1 matrix,
which we commonly just write as a scalar. Hence, we have a product of a 1 x n matrix and an
n x 1 matrix:
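In NumPy this is visible directly (the vectors are hypothetical examples): a 1 x n matrix times an n x 1 matrix yields a 1 x 1 matrix, whose single entry we read off as a scalar.

```python
import numpy as np

u = np.array([[1.0, 2.0, 3.0]])       # a 1 x 3 matrix (row vector)
v = np.array([[4.0], [5.0], [6.0]])   # a 3 x 1 matrix (column vector)

P = u @ v        # the inner product, formally a 1 x 1 matrix
print(P.shape)   # (1, 1)

# We usually drop the brackets and just write the scalar.
scalar = float(P[0, 0])
assert scalar == 32.0
```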
The outer product can be thought of both as a series of scaled column vectors and as a series of
scaled row vectors. Most basic linear algebra problems use the former view.
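Both views of the outer product can be checked with a small hypothetical example: an n x 1 column times a 1 x m row, read column-by-column or row-by-row.

```python
import numpy as np

col = np.array([[1.0], [2.0]])      # 2 x 1 column vector
row = np.array([[3.0, 4.0, 5.0]])   # 1 x 3 row vector

outer = col @ row                   # 2 x 3 outer product

# View 1: each column of the result is `col` scaled by an entry of `row`.
cols_view = np.hstack([col * r for r in row[0]])
# View 2: each row of the result is `row` scaled by an entry of `col`.
rows_view = np.vstack([row * c for c in col[:, 0]])

assert np.array_equal(outer, cols_view)
assert np.array_equal(outer, rows_view)
```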
We now revisit the dot product and, recalling that we can add as many rows as we like to the
left-hand matrix, each producing a result that can be scaled, demonstrate this in the form of a
linear combination of (column) vectors.
Note that the familiar row-by-column matrix multiplication yields a series of dot products that
correspond to the entries of the new vector.
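The two viewpoints just described, entries as row-by-vector dot products versus a linear combination of columns, can be checked side by side (hypothetical numbers):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3 x 2
x = np.array([7.0, 8.0])     # length-2 vector

# Row view: each entry of Ax is the dot product of a row of A with x.
row_view = np.array([np.dot(A[i], x) for i in range(A.shape[0])])

# Column view: Ax is a linear combination of the columns of A,
# weighted by the entries of x.
col_view = x[0] * A[:, 0] + x[1] * A[:, 1]

assert np.array_equal(row_view, A @ x)
assert np.array_equal(col_view, A @ x)
```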
On the other hand, we may reverse the building-up process and partition the columns of the
matrix into single column-vector entries:
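A sketch of this column partitioning (matrices are hypothetical): the j-th column of AB is A times the j-th column of B, so the product can be assembled one column at a time.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# Partition B into its columns and multiply each one separately.
by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

assert np.allclose(by_columns, A @ B)
```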
Matrix partitioning can also demonstrate elementary row operations by means of matrix
multiplication. (Feel free to email me for a comprehensive proof that uses the exact same
principles, or work it out yourself.)
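One instance of this idea (the matrix and multiplier are hypothetical): left-multiplying by an elementary matrix, the identity with one extra off-diagonal entry, performs a row-addition operation.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])

# Elementary matrix for "add -2 times row 0 to row 1":
# start from the identity and place the multiplier below the diagonal.
E = np.eye(2)
E[1, 0] = -2.0

EA = E @ A  # same as applying the row operation to A directly

by_hand = A.copy()
by_hand[1] += -2.0 * by_hand[0]
assert np.array_equal(EA, by_hand)
```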
LU (lower-upper) and UL (upper-lower) factorizations follow from elementary row operations:
multiplying a matrix by elementary row-operation matrices and their inverses partitions it into a
product of a lower triangular and an upper triangular matrix. Both are generally obtained with
row-addition operations applied to the rows strictly below (or above) the designated pivot row.
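For a 2 x 2 case (a hypothetical matrix, chosen so no row interchanges are needed), one row-addition operation gives U, and the inverse of its elementary matrix is the lower triangular factor L:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])

# One row-addition operation eliminates the entry below the pivot:
E = np.eye(2)
E[1, 0] = -2.0           # add -2 * row 0 to row 1
U = E @ A                # upper triangular

# The inverse of E undoes the operation and is lower triangular:
L = np.linalg.inv(E)     # identity with +2 at position (1, 0)

assert np.allclose(L @ U, A)                    # A = LU
assert np.allclose(np.tril(L), L)               # L is lower triangular
assert np.allclose(np.triu(U), U)               # U is upper triangular
```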
We can represent it as such:
(The two factorizations differ little from each other, apart from row interchanges and the choice
of row operations.)