Vector Space

A vector space (or linear space) over a field F is a nonempty set V of elements u, v, w, ... (called vectors) together with two algebraic operations. These operations are called vector addition, V × V → V, and multiplication of vectors by scalars, F × V → V. In the list below, let v_1, v_2, v_3 be arbitrary vectors in V, and let s be an arbitrary scalar in F.

- Associativity of addition: v_1 + (v_2 + v_3) = (v_1 + v_2) + v_3.
- Commutativity of addition: v_1 + v_2 = v_2 + v_1.
- Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
- Inverse elements of addition: for all v ∈ V, there exists an element -v ∈ V, called the additive inverse of v, such that v + (-v) = 0.
- Distributivity of scalar multiplication with respect to vector addition: s(v_1 + v_2) = s v_1 + s v_2.
- Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.

Vector addition: Vector addition associates with every ordered pair (u, v) of vectors a vector u + v, called the sum of u and v, in such a way that the following properties hold. Vector addition is commutative and associative; that is, for all vectors we have

u + v = v + u,
u + (v + w) = (u + v) + w.

Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that 0 + v = v for all v ∈ V.

Additive inverse: for every v ∈ V, there exists an element -v ∈ V such that v + (-v) = 0.

Multiplication by scalars associates with every vector u and scalar α a vector αu, called the product of α and u, in such a way that for all vectors u, v and scalars α, β we have

α(βu) = (αβ)u,
1u = u,

and the distributive laws

α(u + v) = αu + αv,
(α + β)u = αu + βu.

Usually, a vector space over R is called a real vector space and a vector space over C is called a complex vector space. The elements v ∈ V of a vector space are called vectors.

Example 1. Consider the set F^n of all n-tuples with elements in F. This is a vector space. Addition and scalar multiplication are defined componentwise: for u = (u_1, u_2, ..., u_n), v = (v_1, v_2, ..., v_n) ∈ F^n and a ∈ F, we define

u + v = (u_1 + v_1, u_2 + v_2, ..., u_n + v_n),
au = (a u_1, a u_2, ..., a u_n).

It is easy to check that all properties of a vector space are satisfied. In particular, the additive identity is 0 = (0, 0, ..., 0), and the additive inverse of u is -u = (-u_1, -u_2, ..., -u_n).

Example 2. Let F^∞ be the set F^∞ = {(u_1, u_2, ...) : u_j ∈ F for j = 1, 2, ...}. Addition and scalar multiplication are defined as expected:

(u_1, u_2, ...) + (v_1, v_2, ...) = (u_1 + v_1, u_2 + v_2, ...),
a(u_1, u_2, ...) = (a u_1, a u_2, ...).

One can verify that with these operations F^∞ becomes a vector space.

Example 3. Let P(F) be the set of all polynomials p : F → F with coefficients in F. More precisely, p(z) is a polynomial if there exist a_0, a_1, ..., a_n ∈ F such that

p(z) = a_n z^n + a_{n-1} z^{n-1} + ... + a_1 z + a_0.    (1)

Addition and scalar multiplication are defined as

(p + q)(z) = p(z) + q(z),    (ap)(z) = a p(z),

where p, q ∈ P(F) and a ∈ F. For example, if p(z) = 5z + 1 and q(z) = 2z^2 + z + 1, then (p + q)(z) = 2z^2 + 6z + 2 and (2p)(z) = 10z + 2. Again, it can easily be verified that P(F) forms a vector space over F. The additive identity in this case is the zero polynomial, for which all coefficients are equal to zero. The additive inverse of p(z) in (1) is

-p(z) = -a_n z^n - a_{n-1} z^{n-1} - ... - a_1 z - a_0.

Example 4. Let S be any set and consider the set F^S of all functions f : S → F. Define addition of functions by (f + g)(x) = f(x) + g(x), and scalar multiplication of a function f by a ∈ F as (af)(x) = a f(x). Then F^S becomes a vector space.
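The componentwise operations in these examples are easy to experiment with numerically. Below is a minimal Python sketch of the operations of Example 3, representing a polynomial by its coefficient array (constant term first); the helper names poly_add and poly_scale are illustrative, not from the text.

```python
import numpy as np

# A minimal sketch of Example 3: polynomials in P(F) as coefficient arrays
# (lowest degree first), with the addition and scalar multiplication
# defined above. poly_add and poly_scale are illustrative helper names.

def poly_add(p, q):
    """(p + q)(z) = p(z) + q(z): pad to a common length, add coefficients."""
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p)))
    q = np.pad(q, (0, n - len(q)))
    return p + q

def poly_scale(a, p):
    """(a p)(z) = a * p(z): scale every coefficient."""
    return a * np.asarray(p)

p = np.array([1, 5])        # p(z) = 5z + 1
q = np.array([1, 1, 2])     # q(z) = 2z^2 + z + 1
print(poly_add(p, q))       # [2 6 2]  ->  (p + q)(z) = 2z^2 + 6z + 2
print(poly_scale(2, p))     # [2 10]   ->  (2p)(z) = 10z + 2
```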
Vector subspaces

A subset W of a vector space V is a vector subspace of V if the following axioms are satisfied:

1. The zero vector of V is in W.
2. W is closed under vector addition; that is, for each u and v in W, the sum u + v is in W.
3. W is closed under scalar multiplication; that is, for each u in W and each scalar c, the product cu is in W.

Improper subspace: a special subspace of V is the improper subspace W = V.
Proper subspace: every other subspace of V is called a proper subspace.

Lemma. Let U ⊂ V be a subset of a vector space V over F. Then U is a subspace of V if and only if

1. Additive identity: 0 ∈ U;
2. Closure under addition: u, v ∈ U implies u + v ∈ U;
3. Closure under scalar multiplication: a ∈ F, u ∈ U implies au ∈ U.

Proof. Condition 1 implies that the additive identity exists in U, condition 2 implies that vector addition is well defined on U, and condition 3 ensures that scalar multiplication is well defined on U. All other conditions for a vector space are inherited from V, since addition and scalar multiplication are the same whether elements of U are viewed as elements of U or of V.

Examples:

The set consisting of only the zero vector in a vector space V is a subspace of V, called the zero subspace and written as {0}.

The vector space R^2 is not a subspace of R^3, because R^2 is not even a subset of R^3 (the vectors in R^3 all have three entries, whereas the vectors in R^2 have only two). However, the set

W = { (s, t, 0) : s and t are real }

(the x_1 x_2-plane as a subspace of R^3) is the subset of R^3 that "looks" and "acts" like R^2, although it is logically distinct from R^2. Show that W is a subspace of R^3.

Solution: The zero vector is in W, and W is closed under vector addition and scalar multiplication because these operations on vectors in W always produce vectors whose third entries are zero (and so belong to W). Thus W is a subspace of R^3.

Determine if the given set is a subspace of the given vector space.

(a) Let W be the set of all functions f such that f(6) = 10. Is this a subspace of the vector space of real-valued functions?

In this case suppose that we have two elements of W, f = f(x) and g = g(x); this means that f(6) = 10 and g(6) = 10. For W to be a subspace, we would need the sum and any scalar multiple to also be in W; in other words, evaluating the sum or the scalar multiple at 6 would have to give a result of 10. However, this does not happen. The sum is

(f + g)(6) = f(6) + g(6) = 10 + 10 = 20 ≠ 10,

and so the sum is not in W. Likewise, if c is any scalar that isn't 1 we have

(cf)(6) = c f(6) = c(10) ≠ 10,

and so the scalar multiple is not in W either. Therefore W is not closed under addition or scalar multiplication, and so it is not a subspace.

(b) Let W be the set of 3 × 2 matrices of the form

[  0     a_12
   a_21  a_22
   a_31  a_32 ].

Is this a subspace of M_32?

Solution: Let u and v be any two matrices in W and let c be any scalar. The (1,1) entries of u + v and of cu are again 0, so both u + v and cu are also in W. Hence W is closed under addition and scalar multiplication and is a subspace of M_32.

Linear combination

A linear combination of vectors v_1, v_2, ..., v_n of a vector space V is an expression of the form

α_1 v_1 + α_2 v_2 + ... + α_n v_n.

Example: The vector v = (-7, -6) is a linear combination of the vectors v_1 = (-2, 3) and v_2 = (1, 4), since v = 2v_1 - 3v_2. The zero vector is also a linear combination of v_1 and v_2, since 0 = 0v_1 + 0v_2. In fact, it is easy to see that the zero vector in R^n is always a linear combination of any collection of vectors v_1, v_2, ..., v_r from R^n.
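The weights in a linear combination can be checked, or recovered, numerically. A small Python sketch of the last example, verifying v = 2v_1 - 3v_2:

```python
import numpy as np

# A small check of the last example: v = (-7, -6) is the linear combination
# 2*v1 - 3*v2 of v1 = (-2, 3) and v2 = (1, 4).

v1 = np.array([-2, 3])
v2 = np.array([1, 4])
v = np.array([-7, -6])

print(np.array_equal(2 * v1 - 3 * v2, v))   # True

# More generally, the weights can be recovered by solving [v1 v2] c = v.
c = np.linalg.solve(np.column_stack([v1, v2]), v)
print(c)                                     # [ 2. -3.]
```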
A subspace spanned by a set

Span: The set of all linear combinations of a collection of vectors v_1, v_2, ..., v_r from R^n is called the span of {v_1, v_2, ..., v_r}. This set, denoted span{v_1, v_2, ..., v_r}, is always a subspace of R^n, since it is clearly closed under addition and scalar multiplication (because it contains all linear combinations of v_1, v_2, ..., v_r). If V = span{v_1, v_2, ..., v_r}, then V is said to be spanned by v_1, v_2, ..., v_r.

Example: The span of the set {(2, 5, 3), (1, 1, 1)} is the subspace of R^3 consisting of all linear combinations of the vectors v_1 = (2, 5, 3) and v_2 = (1, 1, 1). This defines a plane in R^3. Since a normal vector to this plane is n = v_1 × v_2 = (2, 1, -3), the equation of this plane has the form 2x + y - 3z = d for some constant d. Since the plane must contain the origin (it is a subspace), d must be 0.

Example: The subspace of R^2 spanned by the vectors i = (1, 0) and j = (0, 1) is all of R^2, because every vector in R^2 can be written as a linear combination of i and j:

span{i, j} = R^2.

Let v_1, v_2, ..., v_{r-1}, v_r be vectors in R^n. If v_r is a linear combination of v_1, v_2, ..., v_{r-1}, then

span{v_1, v_2, ..., v_{r-1}, v_r} = span{v_1, v_2, ..., v_{r-1}}.

That is, if any one of the vectors in a given collection is a linear combination of the others, then it can be discarded without affecting the span. Therefore, to arrive at the most "efficient" spanning set, seek out and eliminate any vectors that depend on (that is, can be written as a linear combination of) the others.

Example: Let v_1 = (2, 5, 3), v_2 = (1, 1, 1), and v_3 = (3, 15, 7). Since v_3 = 4v_1 - 5v_2,

span{v_1, v_2, v_3} = span{v_1, v_2}.

That is, because v_3 is a linear combination of v_1 and v_2, it can be eliminated from the collection without affecting the span. Geometrically, the vector (3, 15, 7) lies in the plane spanned by v_1 and v_2, so adding multiples of v_3 to linear combinations of v_1 and v_2 would yield no vectors off this plane. Note that v_1 is also a linear combination of v_2 and v_3 (since v_1 = (5/4)v_2 + (1/4)v_3), and v_2 is a linear combination of v_1 and v_3 (since v_2 = (4/5)v_1 - (1/5)v_3). Therefore, any one of these vectors can be discarded without affecting the span:

span{v_1, v_2, v_3} = span{v_1, v_2} = span{v_2, v_3} = span{v_1, v_3}.

Example: Let v_1 = (2, 5, 3), v_2 = (1, 1, 1), and v_3 = (4, -2, 0). Because there exist no constants k_1 and k_2 such that v_3 = k_1 v_1 + k_2 v_2, v_3 is not a linear combination of v_1 and v_2. Therefore, v_3 does not lie in the plane spanned by v_1 and v_2 (see Figure 1, which shows v_3 off the plane span{v_1, v_2}). Consequently, the span of v_1, v_2, and v_3 contains vectors not in the span of v_1 and v_2 alone. In fact,

span{v_1, v_2} = (the plane 2x + y - 3z = 0).
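Span membership can be tested numerically by comparing ranks: v lies in span{v_1, v_2} exactly when appending v as a column to the matrix [v_1 v_2] does not increase its rank. A Python sketch covering the two examples above (the helper name in_span is illustrative):

```python
import numpy as np

# Sketch: testing span membership by comparing matrix ranks. v is in
# span{v1, v2} exactly when stacking v onto [v1 v2] keeps the rank the same.

v1 = np.array([2, 5, 3])
v2 = np.array([1, 1, 1])

def in_span(v, vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

print(in_span(np.array([3, 15, 7]), [v1, v2]))   # True:  (3,15,7) = 4*v1 - 5*v2
print(in_span(np.array([4, -2, 0]), [v1, v2]))   # False: (4,-2,0) lies off the plane
```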
NULL SPACES

The null space of an m × n matrix A, written as Nul A, is the set of all solutions to the homogeneous equation Ax = 0:

Nul A = { x : x is in R^n and Ax = 0 }.

THEOREM: The null space of an m × n matrix A is a subspace of R^n. Equivalently, the set of all solutions to a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of R^n.

Proof: Nul A is a subset of R^n, since A has n columns. We must show that Nul A satisfies the three properties of a subspace.

1. Show that 0 is in Nul A: A0 = 0.
2. If u and v are in Nul A, show that u + v is in Nul A. Since u and v are in Nul A, Au = 0 and Av = 0. Therefore

A(u + v) = Au + Av = 0 + 0 = 0,

so u + v ∈ Nul A.
3. If u is in Nul A and c is a scalar, show that cu is in Nul A:

A(cu) = cA(u) = c0 = 0,

so cu ∈ Nul A.

Since properties 1, 2 and 3 hold, Nul A is a subspace of R^n.

Example: Let

A = [  1  -3  -2
      -5   9   1 ]

and let u = (5, 3, -2). Determine whether u belongs to the null space of A.

Solution: To test whether u satisfies Au = 0, simply compute

Au = (1·5 - 3·3 - 2·(-2), -5·5 + 9·3 + 1·(-2)) = (5 - 9 + 4, -25 + 27 - 2) = (0, 0).

Thus u is in Nul A.

An explicit description of Nul A:

There is no obvious relation between vectors in Nul A and the entries in A. We say that Nul A is defined implicitly, because it is defined by a condition that must be checked; no explicit list or description of the elements of Nul A is given. However, solving the equation Ax = 0 amounts to producing an explicit description of Nul A.

EXAMPLE: Find an explicit description of Nul A, where

A = [ 3   6   6  3  9
      6  12  13  0  3 ].

Solution: Row reduce the augmented matrix corresponding to Ax = 0:

[ 3   6   6  3  9  0         [ 1  2  0  13   33  0
  6  12  13  0  3  0 ] ~...~   0  0  1  -6  -15  0 ].

The general solution is x_1 = -2x_2 - 13x_4 - 33x_5 and x_3 = 6x_4 + 15x_5, with x_2, x_4, x_5 free. In vector form,

x = (x_1, x_2, x_3, x_4, x_5)
  = x_2 (-2, 1, 0, 0, 0) + x_4 (-13, 0, 6, 1, 0) + x_5 (-33, 0, 15, 0, 1)
  = x_2 u + x_4 v + x_5 w.

Then Nul A = span{u, v, w}: all linear combinations of the three vectors u, v, w are the solutions to Ax = 0.

Observations: {u, v, w} is a spanning set of Nul A, and x_2 u + x_4 v + x_5 w can be 0 only if the weights x_2, x_4 and x_5 are all zero. If Nul A ≠ {0}, then the number of vectors in the spanning set for Nul A equals the number of free variables in Ax = 0.

Remark: The spanning set produced by this method consists of linearly independent vectors, and the number of vectors in the spanning set of Nul A is equal to the number of free variables in the system Ax = 0.
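For an exact version of this computation, sympy's nullspace() solves Ax = 0 symbolically and returns one spanning vector per free variable; on the matrix above it reproduces the vectors u, v, w found by hand. A minimal sketch:

```python
from sympy import Matrix

# Sketch of the explicit-description example: nullspace() returns one
# spanning vector per free variable of Ax = 0, matching u, v, w above.

A = Matrix([[3, 6, 6, 3, 9],
            [6, 12, 13, 0, 3]])
for vec in A.nullspace():
    print(vec.T)
# Expected (up to ordering):
# (-2, 1, 0, 0, 0), (-13, 0, 6, 1, 0), (-33, 0, 15, 0, 1)
```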
Column Space

The column space of an m × n matrix A, written Col A, is the set of all linear combinations of the columns of A. If A = [a_1 ... a_n], then

Col A = span{a_1, ..., a_n} = { b : b = Ax for some x in R^n }.

Theorem: The column space of an m × n matrix A is a subspace of R^m.

Proof: We know that "if v_1, ..., v_p are in a vector space V, then span{v_1, ..., v_p} is a subspace of V." Note that a typical vector in Col A can be written as Ax for some x, because the notation Ax stands for a linear combination of the columns of A. That is,

Col A = { b : b = Ax for some x in R^n }.

Example: Find a matrix A such that W = Col A, where

W = { (5a - 2b, -2a + 4b, 3b) : a, b ∈ R }.

Solution:

(5a - 2b, -2a + 4b, 3b) = a(5, -2, 0) + b(-2, 4, 3),

so

W = { a(5, -2, 0) + b(-2, 4, 3) : a, b ∈ R } = span{ (5, -2, 0), (-2, 4, 3) }.

Therefore

W = Col [  5  -2
          -2   4
           0   3 ],

by the theorem stated as: "The column space of an m × n matrix A is all of R^m if and only if the equation Ax = b has a solution for each b in R^m."

Note: The column space of an m × n matrix A is all of R^m if and only if
- the columns of A span R^m;
- every vector in R^m can be written as a linear combination of the columns of A;
- Ax = b is consistent for each b ∈ R^m.

The contrast between Nul A and Col A:

Example: Let

A = [  2   4  -2  1
      -2  -5   7  3
       3   7  -8  6 ],   u = (3, -2, -1, 0),   v = (3, -1, 3).

a) If Col A is a subspace of R^k, what is k?
b) If Nul A is a subspace of R^k, what is k?
c) Find a nonzero vector in Col A.
d) Find a nonzero vector in Nul A.
e) Determine whether u is in Nul A. Could u be in Col A?
f) Determine whether v is in Col A. Could v be in Nul A?

Solutions:

a) The columns of A each have three entries, so Col A is a subspace of R^k, where k = 3.
b) A vector x such that Ax is defined must have four entries, so Nul A is a subspace of R^k, where k = 4.
c) Any column of A will do; for instance, the first column (2, -2, 3) is a nonzero vector in Col A.
d) To find a nonzero vector in Nul A, we have to do some work. We row reduce the augmented matrix [A 0] to obtain

[A 0] ~ [ 1  0   9  0  0
          0  1  -5  0  0
          0  0   0  1  0 ].

Thus, if x satisfies Ax = 0, then x_1 = -9x_3, x_2 = 5x_3, x_4 = 0, and x_3 is free. Assigning a nonzero value to x_3, say x_3 = 1, we obtain a vector in Nul A, namely x = (-9, 5, 1, 0).

e) An explicit description of Nul A is not needed here; simply compute the product

Au = (6 - 8 + 2 + 0, -6 + 10 - 7 + 0, 9 - 14 + 8 + 0) = (0, -3, 3) ≠ (0, 0, 0).

Obviously, u is not a solution of Ax = 0, so u is not in Nul A. Also, with four entries, u could not possibly be in Col A, since Col A is a subspace of R^3.

f) Reduce [A v]:

[A v] = [  2   4  -2  1   3       [ 2   4  -2   1  3
          -2  -5   7  3  -1   ~     0  -1   5   4  2
           3   7  -8  6   3 ]       0   0   0  17  1 ].

At this point it is clear that the equation Ax = v is consistent, so v is in Col A. With only three entries, v could not possibly be in Nul A, since Nul A is a subspace of R^4.

Contrast between Nul A and Col A for an m × n matrix A:

Nul A
1. Nul A is a subspace of R^n.
2. It takes time to find vectors in Nul A; row operations on [A 0] are required.
3. There is no obvious relation between Nul A and the entries in A.
4. A typical vector v in Nul A has the property that Av = 0.
5. Nul A = {0} if and only if the equation Ax = 0 has only the trivial solution.
6. Nul A = {0} if and only if the linear transformation x → Ax is one-to-one.

Col A
1. Col A is a subspace of R^m.
2. It is easy to find vectors in Col A; the columns of A are displayed, and others are formed from them.
3. There is an obvious relation between Col A and the entries in A, since each column of A is in Col A.
4. A typical vector v in Col A has the property that the equation Ax = v is consistent.
5. Col A = R^m if and only if the equation Ax = b has a solution for every b in R^m.
6. Col A = R^m if and only if the linear transformation x → Ax maps R^n onto R^m.

Kernel and Range of a Linear Transformation

Linear transformation: A linear transformation T from a vector space V into a vector space W is a rule that assigns to each vector x in V a unique vector T(x) in W, such that
i.  T(u + v) = T(u) + T(v) for all u, v in V;
ii. T(cu) = cT(u) for all u in V and all scalars c.

Kernel: The kernel (or null space) of T is the set of all vectors u in V such that T(u) = 0.

Range: The range of T is the set of all vectors in W of the form T(u), where u is in V.

Note: If A is the standard matrix of a linear transformation T, then T(x) = Ax for all x ∈ V, which implies that Ax = 0 if and only if T(x) = 0. Hence

Nul A = Ker T,

and similarly

Col A = Range of T.

Theorem: The kernel of a linear transformation T from a vector space V to a vector space W is a subspace of V.

Proof: Suppose that u and v are vectors in the kernel of T. Then T(u) = T(v) = 0. We have

T(u + v) = T(u) + T(v) = 0 + 0 = 0

and

T(cu) = cT(u) = c0 = 0.

Hence u + v and cu are in the kernel of T, and we can conclude that the kernel of T is a subspace of V.

Theorem: The range of a linear transformation T from V to W is a subspace of W.

Proof: Let w_1 and w_2 be vectors in the range of T. Then there are vectors v_1 and v_2 in V with T(v_1) = w_1 and T(v_2) = w_2. We must show closure under addition and scalar multiplication. We have

T(v_1 + v_2) = T(v_1) + T(v_2) = w_1 + w_2

and

T(cv_1) = cT(v_1) = cw_1.

Hence w_1 + w_2 and cw_1 are in the range of T, so the range of T is a subspace of W.
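Since Ker T = Nul A and Range T = Col A for the matrix transformation T(x) = Ax, parts (e) and (f) of the contrast example can be re-checked numerically. A Python sketch with the same A, u, v:

```python
import numpy as np

# For T(x) = Ax, kernel membership means Au = 0 and range membership means
# Ax = v is consistent, tested here with a least-squares residual.

A = np.array([[ 2,  4, -2, 1],
              [-2, -5,  7, 3],
              [ 3,  7, -8, 6]], dtype=float)
u = np.array([3, -2, -1, 0], dtype=float)
v = np.array([3, -1, 3], dtype=float)

print(A @ u)                      # [ 0. -3.  3.] != 0: u is not in Ker T = Nul A
x, *_ = np.linalg.lstsq(A, v, rcond=None)
print(np.allclose(A @ x, v))      # True: Ax = v is consistent, so v is in Range T = Col A
```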
Linearly Independent Sets: Basis

Basis: Let H be a subspace of a vector space V. An indexed set of vectors B = {b_1, b_2, ..., b_p} in V is a basis for H if
- B is a linearly independent set, and
- the subspace spanned by B coincides with H:

H = span{b_1, b_2, ..., b_p}.

Standard basis: The standard basis (also called the natural basis or canonical basis) for a Euclidean space consists of one unit vector pointing in the direction of each axis of the Cartesian coordinate system. For example, the standard basis for the Euclidean plane consists of the vectors

e_x = (1, 0),  e_y = (0, 1),

and the standard basis for three-dimensional space consists of the vectors

e_x = (1, 0, 0),  e_y = (0, 1, 0),  e_z = (0, 0, 1).

Here the vector e_x points in the x direction, the vector e_y points in the y direction, and the vector e_z points in the z direction. There are several common notations for these vectors, including {e_x, e_y, e_z}, {e_1, e_2, e_3}, {i, j, k}, and {x, y, z}. In addition, these vectors are sometimes written with a hat to emphasize their status as unit vectors. These vectors are a basis in the sense that any other vector can be expressed uniquely as a linear combination of them. For example, every vector v in three-dimensional space can be written uniquely as

v = v_x e_x + v_y e_y + v_z e_z,

the scalars v_x, v_y, v_z being the scalar components of the vector v. In n-dimensional Euclidean space, there are n different standard basis vectors e_1, e_2, ..., e_n, where e_i denotes the vector with a 1 in the i-th coordinate and 0 elsewhere. Every vector a in three dimensions is a linear combination of the standard basis vectors i, j, and k.

Example (standard basis for R^n): Let e_1, e_2, ..., e_n be the columns of the n × n identity matrix I_n; that is,

e_1 = (1, 0, ..., 0),  e_2 = (0, 1, ..., 0),  ...,  e_n = (0, 0, ..., 1).

The set {e_1, e_2, ..., e_n} is called the standard basis for R^n. (Figure: the standard basis vectors e_1, e_2, e_3 along the x_1, x_2, x_3 axes of R^3.)

Example: Let v_1 = (1, 2, 0), v_2 = (0, 1, 3), v_3 = (1, 0, 1). Is {v_1, v_2, v_3} a basis for R^3?

Solution: Let A = [v_1 v_2 v_3]. Using row reduction,

[ 1  0  1       [ 1  0   1       [ 1  0   1
  2  1  0   ~     0  1  -2   ~     0  1  -2
  0  3  1 ]       0  3   1 ]       0  0   7 ],

and since there are 3 pivots, the columns of A are linearly independent and they span R^3. Therefore {v_1, v_2, v_3} is a basis for R^3.

Bases for Nul A

EXAMPLE: Find a basis for Nul A, where

A = [ 3   6   6  3  9
      6  12  13  0  3 ].

Solution: As computed earlier, the general solution of Ax = 0 is

x = x_2 (-2, 1, 0, 0, 0) + x_4 (-13, 0, 6, 1, 0) + x_5 (-33, 0, 15, 0, 1) = x_2 u + x_4 v + x_5 w.

Therefore {u, v, w} is a spanning set for Nul A. We have also shown that this set is linearly independent, so {u, v, w} is a basis for Nul A. The technique used here always produces a linearly independent set.

The Spanning Set Theorem

A basis can be constructed from a spanning set of vectors by discarding vectors which are linear combinations of preceding vectors in the indexed set.

EXAMPLE: Let v_1 = (0, 2, -1), v_2 = (2, 2, 0), v_3 = (6, 16, -5), and H = span{v_1, v_2, v_3}. Note that v_3 = 5v_1 + 3v_2. Show that span{v_1, v_2, v_3} = span{v_1, v_2}, and then find a basis for the subspace H.

Solution: Every vector in span{v_1, v_2} belongs to H, because

c_1 v_1 + c_2 v_2 = c_1 v_1 + c_2 v_2 + 0v_3.

Now let x be any vector in H, say x = c_1 v_1 + c_2 v_2 + c_3 v_3. Since v_3 = 5v_1 + 3v_2, we may substitute

x = c_1 v_1 + c_2 v_2 + c_3 (5v_1 + 3v_2) = (c_1 + 5c_3) v_1 + (c_2 + 3c_3) v_2.

Thus x is in span{v_1, v_2}, so every vector in H already belongs to span{v_1, v_2}. We conclude that H and span{v_1, v_2} are actually the same set of vectors. It follows that {v_1, v_2} is a basis of H, since {v_1, v_2} is obviously linearly independent.
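The pivot-counting test used above is easy to automate: three vectors form a basis for R^3 exactly when the matrix having them as columns has rank 3 (equivalently, a nonzero determinant). A Python sketch using the vectors from the example:

```python
import numpy as np

# Sketch of the basis test: {v1, v2, v3} is a basis for R^3 exactly when
# the matrix with these columns has 3 pivots (full rank, nonzero det).

v1 = np.array([1, 2, 0])
v2 = np.array([0, 1, 3])
v3 = np.array([1, 0, 1])
A = np.column_stack([v1, v2, v3])

print(np.linalg.matrix_rank(A))    # 3 -> columns are independent and span R^3
print(np.linalg.det(A) != 0)       # True -> equivalently, det(A) is nonzero
```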
THEOREM (The Spanning Set Theorem): Let S = {v_1, ..., v_p} be a set in V and let H = span{v_1, ..., v_p}.

a. If one of the vectors in S, say v_k, is a linear combination of the remaining vectors in S, then the set formed from S by removing v_k still spans H.
b. If H ≠ {0}, some subset of S is a basis for H.

Proof:

a) By rearranging the list of vectors in S, if necessary, we may suppose that v_p is a linear combination of v_1, ..., v_{p-1}, say

v_p = a_1 v_1 + ... + a_{p-1} v_{p-1}.    (1)

Given any x in H, we may write

x = c_1 v_1 + ... + c_{p-1} v_{p-1} + c_p v_p    (2)

for suitable scalars c_1, ..., c_p. Substituting the expression for v_p from (1) into (2), it is easy to see that x is a linear combination of v_1, ..., v_{p-1}. Thus {v_1, ..., v_{p-1}} spans H, because x was an arbitrary element of H.

b) If the original spanning set S is linearly independent, then it is already a basis for H. Otherwise, one of the vectors in S depends on the others and can be deleted, by part (a). So long as there are two or more vectors in the spanning set, we can repeat this process until the spanning set is linearly independent and hence is a basis for H. If the spanning set is eventually reduced to one vector, that vector will be nonzero (and hence linearly independent) because H ≠ {0}.

Bases for Col A

EXAMPLE: Find a basis for Col B, where

B = [b_1 b_2 ... b_5] = [ 1  4  0   2  0
                          0  0  1  -1  0
                          0  0  0   0  1
                          0  0  0   0  0 ].

Solution: Each non-pivot column of B is a linear combination of the pivot columns. In fact,

b_2 = 4b_1 + 0b_3 + 0b_5,
b_4 = 2b_1 - 1b_3 + 0b_5.

Since b_2 and b_4 are linear combinations of the pivot columns, the Spanning Set Theorem says we may discard b_2 and b_4, and {b_1, b_3, b_5} will still span Col B. Let

S = {b_1, b_3, b_5} = { (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0) }.

Since b_1 ≠ 0 and no vector in S is a linear combination of the vectors that precede it, S is linearly independent. Thus S is a basis for Col B.

Note: Elementary row operations on a matrix do not affect the linear dependence relations among the columns of the matrix.

COORDINATE SYSTEMS

Coordinates of vectors: If β = {b_1, ..., b_n} is a basis for V and x is in V, then the coordinates of x relative to the basis β (or the β-coordinates of x) are the weights c_1, ..., c_n such that

x = c_1 b_1 + c_2 b_2 + ... + c_n b_n.

If c_1, ..., c_n are the β-coordinates of x, then the vector in R^n

[x]_β = (c_1, c_2, ..., c_n)

is the coordinate vector of x relative to β, or the β-coordinate vector of x.

Note: The mapping x → [x]_β is called the coordinate mapping (determined by β).

The Dimension of a Vector Space

THEOREM: If a vector space V has a basis β = {b_1, ..., b_n}, then any set in V containing more than n vectors must be linearly dependent.

Proof: Suppose {u_1, ..., u_p} is a set of vectors in V, where p > n. Then the coordinate vectors {[u_1]_β, ..., [u_p]_β} form a linearly dependent set in R^n, because there are more vectors (p) than entries (n) in each vector. So there exist scalars c_1, ..., c_p, not all zero, such that

c_1 [u_1]_β + ... + c_p [u_p]_β = (0, ..., 0).

Since the coordinate mapping is a linear transformation,

[c_1 u_1 + ... + c_p u_p]_β = (0, ..., 0).

The zero vector on the right contains the n weights needed to build the vector c_1 u_1 + ... + c_p u_p from the basis vectors in β; that is,

c_1 u_1 + ... + c_p u_p = 0b_1 + ... + 0b_n = 0.

Since the c_i are not all zero, {u_1, ..., u_p} is linearly dependent.

DEFINITIONS

Finite dimensional: A vector space V is said to be finite dimensional if there is a positive integer n such that V contains a linearly independent set of n vectors, whereas any set of n + 1 or more vectors of V is linearly dependent. n is called the dimension of V, written n = dim V. By definition, V = {0} is finite dimensional and dim V = 0.
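Computing a coordinate vector [x]_β amounts to solving the linear system [b_1 ... b_n] c = x for the weights c. A Python sketch with illustrative basis vectors (not taken from the text):

```python
import numpy as np

# Sketch of the coordinate mapping: for a basis B = {b1, b2} of R^2, the
# B-coordinates of x are the weights c solving [b1 b2] c = x.
# The vectors here are illustrative values only.

b1 = np.array([1, 0])
b2 = np.array([1, 2])
x = np.array([4, 6])

c = np.linalg.solve(np.column_stack([b1, b2]), x)
print(c)   # [1. 3.]  ->  x = 1*b1 + 3*b2, so [x]_B = (1, 3)
```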
If V is spanned by a finite set, then V is said to be finite-dimensional, and the dimension of V, written as dim V, is the number of vectors in a basis for V. The dimension of the zero vector space {0} is defined to be 0.

Infinite dimensional: If V is not finite dimensional, it is said to be infinite dimensional. Infinite dimensional vector spaces are of greater interest than finite dimensional ones. For instance, C[a, b] and l^2 are infinite dimensional, whereas R^n and C^n are n-dimensional.

Examples:
- The standard basis for R^n contains n vectors, so dim R^n = n.
- The standard polynomial basis {1, t, t^2, ..., t^n} shows that dim P_n = n + 1.
- The space P of all polynomials is infinite-dimensional.
- Let H = span{v_1, v_2}, where v_1 = (3, 6, 2) and v_2 = (-1, 0, 1). A basis for H is {v_1, v_2}, and dim H = 2.

Dimensions of subspaces of R^3:
- The 0-dimensional subspace contains only the zero vector (0, 0, 0).
- 1-dimensional subspaces: any subspace spanned by a single nonzero vector. Such subspaces are lines through the origin.
- 2-dimensional subspaces: any subspace spanned by two linearly independent vectors. Such subspaces are planes through the origin.
- 3-dimensional subspaces: span{u, v, w}, where u, v, w are linearly independent vectors in R^3. This subspace is R^3 itself.

Subspaces of Finite Dimensional Spaces

THEOREM: Let H be a subspace of a finite-dimensional vector space V. Any linearly independent set in H can be expanded, if necessary, to a basis for H. Also, H is finite-dimensional and

dim H ≤ dim V.

Proof: If H = {0}, then certainly dim H = 0 ≤ dim V. Otherwise, let S = {u_1, ..., u_k} be any linearly independent set in H. If S spans H, then S is a basis for H. Otherwise, there is some u_{k+1} in H that is not in span S; but then {u_1, ..., u_k, u_{k+1}} will be linearly independent, because no vector in the set can be a linear combination of vectors that precede it. So long as the new set does not span H, we can continue this process of expanding S to a larger linearly independent set in H. But the number of vectors in a linearly independent expansion of S can never exceed the dimension of V. So eventually the expansion of S will span H and hence will be a basis for H, and dim H ≤ dim V.

The Basis Theorem

Let V be a p-dimensional vector space, p ≥ 1. Then:
- Any linearly independent set of exactly p vectors in V is automatically a basis for V.
- Any set of exactly p vectors that spans V is automatically a basis for V.

Proof: By the preceding theorem, a linearly independent set S of p elements can be extended to a basis for V. But that basis must contain exactly p elements, since dim V = p, so S must already be a basis for V. Now suppose that S has p elements and spans V. Since V is nonzero, the Spanning Set Theorem implies that a subset S' of S is a basis of V. Since dim V = p, S' must contain p vectors. Hence S = S'.

Dimensions of Col A and Nul A

The dimension of Nul A is the number of free variables in the equation Ax = 0, and the dimension of Col A is the number of pivot columns in A:

dim Col A = number of pivot columns of A,
dim Nul A = number of free variables of Ax = 0.

EXAMPLE: Suppose

A = [ 1  2  3  4
      2  4  7  8 ].

Find dim Col A and dim Nul A.

Solution: Row reduce the augmented matrix [A 0] to echelon form:

[ 1  2  3  4  0       [ 1  2  3  4  0
  2  4  7  8  0 ]  ~    0  0  1  0  0 ].

There are two free variables (x_2 and x_4); hence the dimension of Nul A is 2. Also, dim Col A = 2, because A has two pivot columns.
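Both dimensions in this example can be read off numerically: dim Col A is the rank, and dim Nul A is the number of columns minus the rank. A Python sketch:

```python
import numpy as np

# Sketch of the last example: dim Col A is the rank (number of pivot
# columns) and dim Nul A is n - rank (number of free variables).

A = np.array([[1, 2, 3, 4],
              [2, 4, 7, 8]])
rank = np.linalg.matrix_rank(A)
print(rank)                 # 2 = dim Col A
print(A.shape[1] - rank)    # 2 = dim Nul A
```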
The Row Space

If A is an m × n matrix, each row of A has n entries and thus can be identified with a vector in R^n. The set of all linear combinations of the row vectors is called the row space of A and is denoted by Row A. Each row has n entries, so Row A is a subspace of R^n. Since the rows of A are identified with the columns of A^T, we may write

Row A = Col A^T.

The Rank Theorem

Definitions: The rank of a matrix A is the dimension of the column space of A. The nullity of A is the dimension of the null space of A. Since Row A is the same as Col A^T, the dimension of the row space of A is the rank of A^T.

The Rank Theorem: The dimensions of the column space and the row space of an m × n matrix A are equal. This common dimension, the rank of A, also equals the number of pivot positions in A and satisfies the equation

rank A + dim Nul A = n.

Proof: By the theorem "the pivot columns of a matrix A form a basis for Col A," rank A is the number of pivot columns in A. Equivalently, rank A is the number of pivot positions in an echelon form B of A. Furthermore, since B has a nonzero row for each pivot, and since these rows form a basis for the row space of A, the rank of A is also the dimension of the row space. From the definition of the dimension of Nul A ("the dimension of Nul A is the number of free variables in the equation Ax = 0"), expressed another way, the dimension of Nul A is the number of columns of A that are not pivot columns. Thus

{number of pivot columns} + {number of nonpivot columns} = {number of columns}.

This proves the theorem.

Example:

a. If A is a 7 × 9 matrix with a two-dimensional null space, what is the rank of A?
b. Could a 6 × 9 matrix (call it B) have a two-dimensional null space?

Solutions:

a) By the Rank Theorem, rank A + dim Nul A = n, so rank A + 2 = 9 and the rank of A must be 7.
b) No. By the Rank Theorem, the rank of B would have to be 7. But the columns of B are vectors in R^6, and so the dimension of Col B cannot exceed 6; that is, the rank of B cannot exceed 6.

The Invertible Matrix Theorem

Let A be a square n × n matrix. Then the following statements are equivalent to the statement that A is an invertible matrix.

a. A is an invertible matrix.
b. A is row equivalent to the n × n identity matrix.
c. A has n pivot positions.
d. The columns of A form a linearly independent set.
e. The columns of A span R^n.
f. The columns of A form a basis of R^n.
g. Col A = R^n.
h. dim Col A = n.
i. rank A = n.
j. Nul A = {0}.
k. dim Nul A = 0.

Change of Basis

In linear algebra, change of basis refers to the conversion of vectors and linear transformations between matrix representations which have different bases. When a basis β is chosen for an n-dimensional vector space V, the associated coordinate mapping onto R^n provides a coordinate system for V. Each x in V is identified uniquely by its β-coordinate vector [x]_β.

Example: Consider two bases β = {b_1, b_2} and C = {c_1, c_2} for a vector space V such that

b_1 = 4c_1 + c_2  and  b_2 = -6c_1 + c_2.    (1)

Suppose

x = 3b_1 + b_2;    (2)

that is, suppose [x]_β = (3, 1). Find [x]_C.

Solution: Apply the coordinate mapping determined by C to x in (2). Since the coordinate mapping is a linear transformation,

[x]_C = [3b_1 + b_2]_C = 3[b_1]_C + [b_2]_C.

We can write this vector equation as a matrix equation, using the vectors in the linear combination as the columns of a matrix:

[x]_C = [ [b_1]_C  [b_2]_C ] [ 3
                               1 ].    (3)

This formula gives [x]_C once we know the columns of the matrix. From (1),

[b_1]_C = (4, 1)  and  [b_2]_C = (-6, 1).

Thus (3) provides the solution:

[x]_C = [ 4  -6   [ 3       [ 6
          1   1 ]   1 ]  =    4 ].
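The worked example translates directly into a matrix-vector product. A Python sketch, where P holds [b_1]_C and [b_2]_C as its columns:

```python
import numpy as np

# Sketch of the worked example: with [b1]_C = (4, 1) and [b2]_C = (-6, 1)
# as columns, [x]_C = P @ [x]_B for [x]_B = (3, 1).

P = np.array([[4, -6],
              [1,  1]])        # columns are [b1]_C and [b2]_C
x_B = np.array([3, 1])
print(P @ x_B)                  # [6 4], matching [x]_C found above
```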
Theorem: Let β = {b_1, ..., b_n} and C = {c_1, ..., c_n} be bases of a vector space V. Then there is a unique n × n matrix P_{C←β} such that

[x]_C = P_{C←β} [x]_β.    (1)

The columns of P_{C←β} are the C-coordinate vectors of the vectors in the basis β; that is,

P_{C←β} = [ [b_1]_C  [b_2]_C  ...  [b_n]_C ].    (2)

The matrix P_{C←β} is called the change-of-coordinates matrix from β to C: multiplication by P_{C←β} converts β-coordinates into C-coordinates. (Diagram: x in V is sent by the coordinate mappings [ ]_β and [ ]_C to [x]_β and [x]_C in R^n, and multiplication by P_{C←β} carries [x]_β to [x]_C; two coordinate systems for V.)

The columns of P_{C←β} are linearly independent because they are the coordinate vectors of the linearly independent set β, so P_{C←β} is invertible. Left-multiplying both sides of (1) by (P_{C←β})^{-1} yields

(P_{C←β})^{-1} [x]_C = [x]_β.

Thus (P_{C←β})^{-1} is the matrix that converts C-coordinates into β-coordinates; that is,

(P_{C←β})^{-1} = P_{β←C}.
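The inverse relation at the end can be checked numerically on the same example; a Python sketch, writing P_CB for P_{C←β}:

```python
import numpy as np

# Sketch of the last identity: the matrix converting C-coordinates back to
# B-coordinates is the inverse of the change-of-coordinates matrix.

P_CB = np.array([[4.0, -6.0],
                 [1.0,  1.0]])
P_BC = np.linalg.inv(P_CB)      # (P_{C<-B})^{-1} = P_{B<-C}

x_C = np.array([6.0, 4.0])
print(P_BC @ x_C)               # [3. 1.] -> recovers [x]_B from the example
```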