LECTURE 17 AND 18: BASIC ALGEBRA AND GEOMETRY OF EUCLIDEAN SPACE

Recall that n-dimensional Euclidean space R^n is the set
\[ R^n = R \times \cdots \times R = \{ (x_1, \ldots, x_n) : x_j \in R \text{ for } 1 \le j \le n \}. \]
Geometrically, one can envision R^1 = R as the real line, R^2 as a plane and R^3 as 3-dimensional space, where the position of a point is described via its three (x, y, z) co-ordinates. Of course, in higher dimensions these objects become more difficult to visualise, but most of the properties of these spaces which we will discuss remain essentially the same.

The set R^n has a special algebraic structure. First note that one can define a natural addition operation on this set, given by
\[ (x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n) \]
for all (x_1, ..., x_n), (y_1, ..., y_n) ∈ R^n. This addition operation satisfies all the properties one would expect. In particular, for all x, y, z ∈ R^n we have:

• (Commutative) x + y = y + x;
• (Associative) x + (y + z) = (x + y) + z;
• (Additive identity) If 0 := (0, ..., 0), then x + 0 = 0 + x = x;
• (Additive inverse) If x = (x_1, ..., x_n), then defining −x := (−x_1, ..., −x_n) one has x + (−x) = −x + x = 0.

Unless n = 1, there is no natural multiplicative operation, but one can define scalar multiplication as follows:
\[ \lambda (x_1, \ldots, x_n) = (\lambda x_1, \ldots, \lambda x_n) \]
for all λ ∈ R and (x_1, ..., x_n) ∈ R^n. When one has addition and scalar multiplication operations of this kind, it is customary to call the points of the set vectors,¹ and as such we'll often refer to some x ∈ R^n as a vector.

There is also a natural geometry on Euclidean space: that is, one can define a way to measure distances between points and angles between lines in R^n.

Definition. For x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ R^n define

• The norm |x| of x by
\[ |x| = \Big( \sum_{j=1}^n x_j^2 \Big)^{1/2}. \]
We think of the norm as measuring the distance of x to the origin. In general, |x − y| is the distance between x and y.
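These operations are straightforward to realise computationally. The following is a minimal Python sketch; the helper names `vec_add`, `scalar_mul` and `norm` are our own, not from the notes:

```python
import math

def vec_add(x, y):
    # componentwise addition: (x_1 + y_1, ..., x_n + y_n)
    return tuple(xj + yj for xj, yj in zip(x, y))

def scalar_mul(lam, x):
    # scalar multiplication: (lam * x_1, ..., lam * x_n)
    return tuple(lam * xj for xj in x)

def norm(x):
    # |x| = (x_1^2 + ... + x_n^2)^(1/2), the distance of x from the origin
    return math.sqrt(sum(xj * xj for xj in x))

x, y = (3.0, 4.0), (1.0, -2.0)
print(vec_add(x, y))       # (4.0, 2.0)
print(scalar_mul(2.0, x))  # (6.0, 8.0)
print(norm(x))             # 5.0
print(norm(vec_add(x, scalar_mul(-1.0, y))))  # the distance |x - y|
```

Tuples are used here to emphasise that a vector in R^n is just an ordered n-tuple of reals.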
• The inner product ⟨x, y⟩ of x and y by
\[ \langle x, y \rangle = \sum_{j=1}^n x_j y_j. \]

Relating the norm and inner product is the following fundamental inequality.

¹ There is a general abstract notion of a vector space (and, more generally, a module) as a set endowed with addition and scalar multiplication operations, but we will not give the abstract definition here.

Theorem (Cauchy–Schwarz inequality). For all x, y ∈ R^n,
\[ |\langle x, y \rangle| \le |x| |y|. \]

Theorem (Properties of the norm). If x, y ∈ R^n and λ ∈ R, then
1) |x| = 0 if and only if x = 0;
2) (triangle inequality) |x + y| ≤ |x| + |y|;
3) |λx| = |λ||x|.

We will be interested in mappings which preserve this algebraic structure of R^n; these are called linear transformations (or linear maps), and their study forms the basis of linear algebra.

Definition. A function T : R^n → R^m is a linear transformation² if it satisfies the following properties:
i) T(λx) = λT(x) for all λ ∈ R and x ∈ R^n;
ii) T(x + y) = T(x) + T(y) for all x, y ∈ R^n.

Example. When we're working with only one variable, linear maps are very simple. Indeed, a function T : R → R is a linear map if and only if there exists some a ∈ R such that T(x) = ax for all x ∈ R. It is easy to see that any mapping of this form is linear; for the converse, note that by part i) of the definition T(x) = T(x · 1) = xT(1) for all x ∈ R, so taking a = T(1) completes the proof.

To study maps between spaces of arbitrary dimensions, it is useful to express vectors in a special form.

Definition. The standard basis vectors e_1, ..., e_n ∈ R^n are defined by e_j := (0, ..., 1, ..., 0), with the 1 in the jth position.

Clearly, any vector x = (x_1, ..., x_n) ∈ R^n can be written as x = x_1 e_1 + · · · + x_n e_n. If T : R^n → R^m is a linear transformation, then by properties i) and ii) we have
\[ T(x) = x_1 T(e_1) + \cdots + x_n T(e_n), \]
and so T is completely determined by the values T(e_1), ..., T(e_n).
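As a quick sanity check of the Cauchy–Schwarz inequality, one can compute both sides for a concrete pair of vectors. A small Python sketch (the helper names `inner` and `norm` are our own):

```python
import math

def inner(x, y):
    # <x, y> = sum over j of x_j * y_j
    return sum(xj * yj for xj, yj in zip(x, y))

def norm(x):
    # |x| = <x, x>^(1/2)
    return math.sqrt(inner(x, x))

x = (1.0, 2.0, 2.0)   # |x| = 3
y = (4.0, 0.0, 3.0)   # |y| = 5
print(abs(inner(x, y)))   # 10.0
print(norm(x) * norm(y))  # 15.0
assert abs(inner(x, y)) <= norm(x) * norm(y)  # Cauchy-Schwarz holds
```

Of course, a single numerical instance illustrates but does not prove the theorem.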
On the other hand, since each T(e_j) ∈ R^m, it can be expressed in terms of the standard basis vectors in R^m. In particular, for each 1 ≤ j ≤ n, writing T(e_j) = (a_{1j}, ..., a_{mj}) we have that
\[ T(e_j) = a_{1j} e_1 + \cdots + a_{mj} e_m. \]
Substituting this into our formula for T(x), we see that
\[ T(x) = \sum_{j=1}^n x_j T(e_j) = \sum_{j=1}^n x_j \sum_{i=1}^m a_{ij} e_i = \sum_{i=1}^m \Big( \sum_{j=1}^n a_{ij} x_j \Big) e_i. \]

² In the abstract setting, linear transformations can be defined between any pair of vector spaces – again, we won't pursue this here.

Thus, the ith component of T(x) is given by \sum_{j=1}^n a_{ij} x_j. The numbers (a_{ij}) which determine our linear mapping can be arranged in an array
\[ A := \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \]
to form what is known as an m × n matrix. Each a_{ij} is then the (i, j)-component of the matrix A. Clearly, from our discussion, there is a precise correspondence between linear maps and matrices. Moreover, we can define an operation of the matrix A on vectors x ∈ R^n as follows:
\[ \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \sum_{j=1}^n a_{1j} x_j \\ \vdots \\ \sum_{j=1}^n a_{mj} x_j \end{pmatrix}, \]
so that Ax = Tx. The point is that in matrix notation one obtains the ith component of Ax simply by taking the product of the jth entry of the ith row of A and the jth entry of x and then summing these products in j. This gives a simple way to represent the action of the linear map.

Given linear maps T : R^n → R^m and S : R^m → R^p, it is easy to see their composition S ∘ T : R^n → R^p must also be linear. From the above discussion, we know that there exist an m × n matrix M[T] with components a_{ij} ∈ R and a p × m matrix M[S] with components b_{ki}, for 1 ≤ i ≤ m, 1 ≤ j ≤ n and 1 ≤ k ≤ p, such that
\[ Tx = M[T]x = \sum_{i=1}^m \Big( \sum_{j=1}^n a_{ij} x_j \Big) e_i \quad \text{for all } x \in R^n, \]
\[ Sy = M[S]y = \sum_{k=1}^p \Big( \sum_{i=1}^m b_{ki} y_i \Big) e_k \quad \text{for all } y \in R^m. \]
Consequently,
\[ S \circ T x = S(Tx) = \sum_{k=1}^p \sum_{i=1}^m b_{ki} \Big( \sum_{j=1}^n a_{ij} x_j \Big) e_k = \sum_{k=1}^p \Big( \sum_{j=1}^n \Big( \sum_{i=1}^m b_{ki} a_{ij} \Big) x_j \Big) e_k \quad \text{for all } x \in R^n. \]
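The identification of composition with a matrix product can be checked numerically. Below is a short Python sketch with our own helpers `matvec` and `matmul` (matrices stored as lists of rows), verifying that applying the product matrix agrees with composing the two maps:

```python
def matvec(A, x):
    # ith component of Ax: sum over j of a_ij * x_j (row i of A against x)
    return tuple(sum(aij * xj for aij, xj in zip(row, x)) for row in A)

def matmul(B, A):
    # (k, j)-entry of BA: sum over i of b_ki * a_ij
    p, m, n = len(B), len(A), len(A[0])
    return [tuple(sum(B[k][i] * A[i][j] for i in range(m)) for j in range(n))
            for k in range(p)]

A = [(1, 4), (-1, 2)]   # matrix of a sample map T : R^2 -> R^2
B = [(2, 3), (0, 1)]    # matrix of a sample map S : R^2 -> R^2

x = (1, 1)
composed = matvec(B, matvec(A, x))   # S(T(x))
product  = matvec(matmul(B, A), x)   # (BA)x
print(composed)  # (13, 1)
print(product)   # (13, 1)
```

The two vectors agree, as the derivation above predicts: acting by BA is the same as acting by A and then by B.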
Thus, the (k, j)-component m_{kj} of the matrix M[S ∘ T] which corresponds to S ∘ T is given by
\[ m_{kj} := \sum_{i=1}^m b_{ki} a_{ij}. \]
This motivates a natural notion of multiplication of matrices; in particular,
\[ \begin{pmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & & \vdots \\ b_{p1} & \cdots & b_{pm} \end{pmatrix} \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^m b_{1i} a_{i1} & \cdots & \sum_{i=1}^m b_{1i} a_{in} \\ \vdots & & \vdots \\ \sum_{i=1}^m b_{pi} a_{i1} & \cdots & \sum_{i=1}^m b_{pi} a_{in} \end{pmatrix}, \]
so that M[S ∘ T] = M[S]M[T].

Example. Consider the matrices
\[ B := \begin{pmatrix} 2 & 3 \\ 0 & 1 \end{pmatrix}, \qquad A := \begin{pmatrix} 1 & 4 \\ -1 & 2 \end{pmatrix}. \]
Then
\[ BA = \begin{pmatrix} 2 & 3 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 4 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 2 \cdot 1 + 3 \cdot (-1) & 2 \cdot 4 + 3 \cdot 2 \\ 0 \cdot 1 + 1 \cdot (-1) & 0 \cdot 4 + 1 \cdot 2 \end{pmatrix} = \begin{pmatrix} -1 & 14 \\ -1 & 2 \end{pmatrix}. \]
Thus, the matrix product definition provides an efficient method of determining the composition of two linear transformations.

Jonathan Hickman, Department of Mathematics, University of Chicago, 5734 S. University Avenue, Eckhart Hall Room 414, Chicago, Illinois, 60637.
E-mail address: [email protected]