1.6 Linear Transformation

An operator, denoted L, is called a linear function or linear operation if the following relationship holds:

    L[αa + βb] = αL(a) + βL(b)

where α, β are real numbers and a, b are vectors in the vector space. For real-valued continuous functions f(t) and g(t) and scalars α, β, the linear operation can be written as

    L[αf(t) + βg(t)] = αL[f(t)] + βL[g(t)]

Operations such as differentiation, integration, and the Laplace and Fourier transforms are linear. The derivative operator L = d/dt is linear since

    d/dt [αf(t) + βg(t)] = α df(t)/dt + β dg(t)/dt

Matrix multiplication of a linear combination of vectors is a linear operation since

    A(αx + βy) = αAx + βAy

where L = A and α and β are scalars. In general, an operation that transforms a vector in Rⁿ (a vector with n real components) into a vector in Rᵐ is linear if and only if it coincides with multiplication by some m×n matrix. The function (transformation) that converts a vector in Rⁿ into a vector in Rᵐ,

    f: Rⁿ → Rᵐ

is said to have its domain in Rⁿ and its range in Rᵐ. When the transformation is linear, there exists a unique m×n matrix A_T such that

    L(x) = A_T x

A subset S of a vector space V is a linear subspace if every linear combination of elements of S is also in S. A linear subspace is usually called simply a subspace.

The rotation of a vector x through an angle θ to become the vector x′ is a linear transformation

    x′ = A_T x

where the transformation matrix A_T is called the rotation matrix R. Figure 1.6 shows the rotation of a two-dimensional vector with length r. We will determine the rotation matrix R for this transformation.
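The linearity of matrix multiplication can be checked numerically. The following is a minimal sketch using NumPy; the matrix A, the vectors, and the scalars are arbitrary illustrative values, not taken from the text:

```python
# Check that matrix multiplication is a linear operation:
# A(alpha*x + beta*y) == alpha*(A x) + beta*(A y)
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0]])   # a 2x3 matrix: maps R^3 into R^2
x = np.array([1.0, -1.0, 2.0])
y = np.array([0.5, 4.0, -3.0])
alpha, beta = 2.0, -1.5

lhs = A @ (alpha * x + beta * y)        # transform the linear combination
rhs = alpha * (A @ x) + beta * (A @ y)  # combine the transformed vectors

print(np.allclose(lhs, rhs))  # True
```

The same check passes for any choice of A, x, y, α, and β, up to floating-point round-off, since the identity holds exactly in arithmetic.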
[Figure 1.6: Rotation of a two-dimensional vector of length r. The vector (x, y) makes an angle α with the x₁-axis and is rotated through the angle θ to become (x′, y′).]

From the geometry, the coordinates of x′ in terms of the angles θ and α are

    x′ = r cos(α + θ) = [r cos α] cos θ − [r sin α] sin θ
    y′ = r sin(α + θ) = [r sin α] cos θ + [r cos α] sin θ

The terms in square brackets, [r cos α] and [r sin α], are replaced by x and y to give

    x′ = x cos θ − y sin θ
    y′ = y cos θ + x sin θ = x sin θ + y cos θ

or

    | x′ |   | cos θ   −sin θ | | x |
    | y′ | = | sin θ    cos θ | | y |

The transformation matrix for the plane rotation is then

    R = | cos θ   −sin θ |
        | sin θ    cos θ |

When the rotation angle is zero, the rotation matrix is simply the identity matrix I.

For three-dimensional rotations, the axis of rotation must be specified. The subscripts of a rotation matrix will indicate the axis of rotation and the angle.

1. Rotation by an angle θ about the x-axis:

    Rx,θ = | 1     0        0    |
           | 0   cos θ   −sin θ  |
           | 0   sin θ    cos θ  |

2. Rotation by an angle θ about the y-axis:

    Ry,θ = |  cos θ   0   sin θ |
           |   0      1    0    |
           | −sin θ   0   cos θ |

3. Rotation by an angle θ about the z-axis:

    Rz,θ = | cos θ   −sin θ   0 |
           | sin θ    cos θ   0 |
           |  0        0      1 |

The order in which 3D rotations are performed is important, since 3D rotation matrices do not in general commute. Rotation matrices are orthogonal matrices, so their columns (and rows) must be orthonormal.
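The orthogonality and non-commutativity of the rotation matrices can be verified numerically. This is a sketch using NumPy; the function names Rx, Ry, Rz and the test angle are my own choices for illustration:

```python
# Elementary 3D rotation matrices about the x-, y-, and z-axes,
# with checks of orthogonality and non-commutativity.
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

t = np.pi / 6
R = Rz(t)
# Orthogonal: R R^T = I, so the columns (and rows) are orthonormal.
print(np.allclose(R @ R.T, np.eye(3)))            # True
# Rotations about different axes do not commute in general.
print(np.allclose(Rx(t) @ Ry(t), Ry(t) @ Rx(t)))  # False
```

Because R is orthogonal, its inverse is simply its transpose, which is one reason rotation matrices are convenient in practice.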