– Matrices in Maple –
October 24, 1994

1 Linear Algebra Package
The useful linear algebra operations are contained in a special package that is loaded
with the command
> with(linalg);
The list of newly defined and redefined procedures gives an idea of what the package
contains. Use the help command, as in ?dotprod, to learn more about what a
procedure does and how to use it.
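If you prefer not to see the long list of names, the package can also be loaded by
ending the command with a colon, which suppresses the display of the result:
> with(linalg):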
2 Vectors and Arrays
To create an array, use
> a := array( [ [3,-1,1], [4,-5,4], [4,-7,6] ] );
     [ 3  -1  1 ]
a := [ 4  -5  4 ]
     [ 4  -7  6 ]
Vectors may be defined by listing their elements:
> y := array([3,-5,6]);
y := [ 3, -5, 6 ]
You can also use vector in place of array. With the usual scientific conventions,
these are actually column vectors, despite being displayed horizontally. That can be
checked by multiplying the matrix by the vector as follows.
To multiply a matrix and a vector, use either
> evalm(a &* y);
[ 20, 61, 83 ]
or use multiply(a,y);.
To access an individual vector or array element, use square brackets for the
subscript:
> a[1,2],y[3];
-1, 6
The inverse is computed from
> ainv := evalm(a^(-1));
        [ 1/3    1/6    -1/6 ]
ainv := [ 4/3   -7/3     4/3 ]
        [ 4/3  -17/6    11/6 ]
or using inverse(a);
Note that evalm interprets arithmetic operations in matrix language. There is
one subtlety, however. Sometimes we want to multiply a matrix by a scalar and
sometimes we want to multiply a matrix by a matrix or a matrix by a vector. So we
use &* to specify matrix multiplication and * to specify multiplication by a scalar.
To multiply matrices, use
> evalm(a &* ainv);
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]
To multiply every element by a scalar or subtract a scalar from the diagonal
elements, use
> evalm(2*a-1);
[ 5   -2    2 ]
[ 8  -11    8 ]
[ 8  -14   11 ]
The determinant is found from
> det(a);
-6
and the eigenvalues are found using eigenvals(a).
> eigenvals(a);
3, 2, -1
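As a quick consistency check, the eigenvalues must add up to the trace of the matrix
and multiply to its determinant: 3 + 2 + (-1) = 4 and 3 * 2 * (-1) = -6. The trace is
available from the package as well:
> trace(a);
4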
Dot products can be computed as well, but not with the &* operation:
> w := array([2,1,-1]);
w := [ 2, 1, -1 ]
> evalm(w &* y);
Error, (in linalg[multiply]) vector dimensions incompatible
> dotprod(w,y);
-5
The problem with &* is that it treats both vectors as column matrices, and the
product of one column with another is not defined.
Another way to get the dot product is to take the transpose first, as in
> evalm(transpose(w) &* y);
-5
The outer product, on the other hand, is found by multiplying in the opposite order:
> evalm(y &* transpose(w));
[   6    3   -3 ]
[ -10   -5    5 ]
[  12    6   -6 ]
To get a vector crossproduct, use
> crossprod(y,w);
[ -1, 15, 13 ]
which works only for three-dimensional vectors.
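As a check on the result, the cross product must be perpendicular to both of its
factors, so its dot products with y and w should vanish:
> dotprod(crossprod(y,w),y), dotprod(crossprod(y,w),w);
0, 0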
Now let’s solve the linear system ax = y with the matrix a and the column
vector y as defined above. We do this easily using
> linsolve(a,y);
[ -5/6, 71/3, 175/6 ]
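As a check, multiplying the solution back by a should reproduce y. Using the ditto
symbol " for the most recent result, as elsewhere in this session:
> evalm(a &* ");
[ 3, -5, 6 ]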
The evalm operation also handles scalar multiplication and addition. As an
example, we may check the Cayley-Hamilton theorem that a matrix satisfies its own
characteristic equation as follows.
First we construct the characteristic polynomial:
> charpoly(a,lambda);
(lambda - 3) (lambda^2 - lambda - 2)
which is the same as det(evalm(lambda-a));. Remember, the roots of the characteristic polynomial are the eigenvalues of the matrix.
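Indeed, the quadratic factor splits further,
(lambda - 3) (lambda^2 - lambda - 2) = (lambda - 3) (lambda - 2) (lambda + 1)
so the roots 3, 2, and -1 are exactly the eigenvalues computed earlier.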
Then we replace the scalar variable lambda with the array a using
> subs(lambda=a,");
(a - 3) (a^2 - a - 2)
and finally ask that the result be evaluated as a matrix expression, with the entries
of the matrix a filled in and the scalars 3 and 2 treated as multiples of the identity:
> evalm(");
[ 0  0  0 ]
[ 0  0  0 ]
[ 0  0  0 ]
To get the eigenvectors of a matrix a, use
> eigenvects(a);
[-1, 1, {[ 0, 1, 1 ]}], [2, 1, {[ -1, 0, 1 ]}], [3, 1, {[ 1, 1, 1 ]}]
The result needs a little explaining. There are three items in the list. Each item
contains two scalars and a set of vectors: the scalars are an eigenvalue and its
multiplicity, and the set holds the corresponding eigenvectors. In this case there are
three distinct eigenvalues, each with multiplicity 1. The first item, with eigenvalue
-1, corresponds to the eigenvector [0,1,1], and so on.
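If one of these pieces is needed in a later computation, the result can be picked
apart by indexing. Here is a minimal sketch; it assumes the usual list and set
selection, and the details of the output may vary between Maple versions:
> e := [eigenvects(a)]:
> e[1][1];                 # the first eigenvalue, -1
> e[1][3][1];              # a corresponding eigenvector, [ 0, 1, 1 ]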
For a final example, let’s explore the power method of generating the leading
eigenvalue and eigenvector. With this method we start with an arbitrary trial vector
x0 and act on it with the matrix a. We keep multiplying by a until the vector is
driven into the subspace corresponding to the eigenvalue of largest magnitude. From
the list above, we see that the third eigenvalue, 3, is the largest. Since it has
multiplicity 1, the subspace is simply a constant times the vector [1,1,1], so it
should be easy to recognize.
Let’s start with a unit vector and multiply away:
> x0 := vector([1,0,0]);
x0 := [ 1, 0, 0 ]
> evalm(a &* x0);
[ 3, 4, 4 ]
> evalm(a &* ");
[ 9, 8, 8 ]
> evalm(a &* ");
[ 27, 28, 28 ]
> evalm(a &* ");
[ 81, 80, 80 ]
> evalm(a &* ");
[ 243, 244, 244 ]
> evalm(a &* ");
[ 729, 728, 728 ]
We see that the direction of the vector is gradually approaching that of [1,1,1].
Furthermore, each successive multiplication by a is nearly the same as multiplying by
3, the corresponding eigenvalue.
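The same iteration can also be written as a short loop that rescales the vector at
each step, so that the scale factor itself converges to the leading eigenvalue. The
following is only a sketch; the choice of rescaling by the last component and of ten
iterations is arbitrary:
> x := vector([1,0,0]):
> for i from 1 to 10 do
>   x := evalm(a &* x):        # one power-method step
>   mu := x[3]:                # running estimate of the leading eigenvalue
>   x := evalm((1/mu) * x):    # rescale so the last component is 1
> od:
> mu, evalm(x);
After enough iterations, mu is close to 3 and x is close to the eigenvector [1,1,1].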