Vector Spaces: Basis and Dimensions
Paper: Linear Algebra
Lesson: Vector Spaces: Basis and Dimensions
Course Developer: Vivek N Sharma
College / Department: Assistant Professor, Department of
Mathematics, S.G.T.B. Khalsa College, University of Delhi
Institute of Lifelong Learning, University of Delhi
1. Learning Outcomes
2. Introduction
3. Definition of a Vector Space
4. Linear Combination and Span
5. Linear Independence
6. Basis of a Vector Space
7. Finite Dimensional Vector Space
8. Rank and System of Linear Equations
9. Direct Sum of Vector Spaces
10. Quotient of Vector Spaces
11. Summary
12. Glossary
13. Exercises
14. Solutions/Hints for Exercises
1. Learning Outcomes
After studying this unit, you will be able to
• explain the concept of a vector space over a field.
• understand the concept of linear independence of vectors over a field.
• elaborate the idea of a finite dimensional vector space.
• state the meaning of a basis of a vector space.
• define the dimension of a vector space.
• define the concept of rank of a matrix.
• analyse a system of linear equations.
• explain the concept of direct sum of vector spaces.
• describe the idea of a quotient of two vector spaces.

2. Introduction:
In this unit, we shall be studying one of the most important algebraic structures in mathematics: vector spaces. Vector spaces are introduced in algebra, and their applications abound. This unit gives a first introduction to these structures. The unit can alternatively be titled 'Introduction to Linear Algebra'. To make sense of this alternative title, it is imperative that the meaning of the terms 'linear' and 'algebra' be clarified in the context of mathematics. The term 'linear' in the context of algebra refers to entities which can be added in a manner 'similar' to the addition of matrices, and which can be multiplied by numbers (scalars). Obviously, only like quantities can be added to each other. (For instance, one cannot add a 2 × 3 matrix to a 4 × 4 matrix.) All such like entities, when brought together, constitute a vector space. So, let us commence with the idea of a vector space.
3. Definition of a Vector Space:
Before formally defining a vector space, we shall relook at ℝⁿ for n = 2, 3. We begin with n = 2. The Cartesian space ℝ², which we think of as the usual x-y plane, equals the set of all ordered pairs of real numbers. That is, we have
ℝ² = {(x, y) : x, y ∈ ℝ}.
Similarly, we have
ℝ³ = {(x, y, z) : x, y, z ∈ ℝ}.
Now, the question is: how, if possible, can we perform addition on the elements of ℝ² or ℝ³? The answer, as expected, emerges from coordinate geometry. We define addition on ℝ² and ℝ³ in a coordinate-wise manner:
(x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂)
and
(x₁, y₁, z₁) + (x₂, y₂, z₂) = (x₁ + x₂, y₁ + y₂, z₁ + z₂).
This definition is easily adapted to ℝⁿ:
(x₁, …, xₙ) + (y₁, …, yₙ) = (x₁ + y₁, …, xₙ + yₙ)
and, hence, to ℱⁿ where ℱ = ℝ or ℂ. We may note that when ℱ = ℂ, we have
ℱⁿ = ℂⁿ = {(z₁, …, zₙ) : zⱼ ∈ ℂ ∀ j = 1(1)n}.
The notation j = 1(1)n stands for "the index j starts from one, raised by one, up to n", that is, j = 1, 2, 3, …, n. Writing out elements of ℱⁿ explicitly in a coordinate-wise manner may not be illuminating every time. So, to denote the elements of ℱⁿ, we compress the n coordinates to a single symbol. We simply write:
x = (x₁, x₂, …, xₙ).
Hence, addition on ℱⁿ corresponds to
x + y = (x₁, …, xₙ) + (y₁, …, yₙ) = (x₁ + y₁, …, xₙ + yₙ).
The other algebraic operation, namely scalar multiplication, is defined as follows:
α·x = α(x₁, …, xₙ) = (αx₁, …, αxₙ).
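These coordinate-wise operations are easy to experiment with. A minimal sketch in plain Python (the helper names vec_add and scal_mul are my own, with lists standing in for n-tuples over ℱ = ℝ):

```python
# Sketch: coordinate-wise addition and scalar multiplication on R^n,
# with plain Python lists standing in for n-tuples. Names are illustrative.

def vec_add(x, y):
    """(x1,...,xn) + (y1,...,yn) = (x1+y1,...,xn+yn); lengths must agree."""
    if len(x) != len(y):
        raise ValueError("only like quantities can be added")
    return [xi + yi for xi, yi in zip(x, y)]

def scal_mul(alpha, x):
    """alpha.(x1,...,xn) = (alpha*x1,...,alpha*xn)."""
    return [alpha * xi for xi in x]

u = [1, 2, 3]
v = [4, 5, 6]
print(vec_add(u, v))    # [5, 7, 9]
print(scal_mul(2, u))   # [2, 4, 6]
```

The length check mirrors the remark in the Introduction that only like quantities can be added.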
The general concept of a vector space follows this model of Euclidean space ℝ𝑛 . So, we may
now give the formal definition of a vector space.
Definition of Vector Space: We first remark that by an addition on 𝑉, we mean a function
on 𝑉 × 𝑉 that assigns an element 𝑢 + 𝑣 in 𝑉 to every pair (𝑢, 𝑣) in 𝑉 × 𝑉. Further, by a scalar
multiplication on 𝑉, we mean a function on ℱ × 𝑉 that assigns an element 𝛼𝑣 in 𝑉 to each
pair (𝛼, 𝑣) in ℱ × 𝑉, where ℱ is a field. The formal definition of a vector space is as under:
A vector space is a set V along with two operations, an addition on V and a scalar multiplication on V, such that the following eight properties hold for all u, v, w in V and for all α, β in ℱ:
1. u + v = v + u (Commutativity);
2. (u + v) + w = u + (v + w) (Associativity);
3. there exists an element 0 in V such that v + 0 = v (Additive Identity);
4. for every v in V, there exists a w in V such that v + w = 0 (Additive Inverse); we may denote w by −v, that is, w = −v;
5. 1·v = v (Multiplicative Identity);
6. (αβ)v = α(βv) (Associativity);
7. α(u + v) = αu + αv (Distributivity);
8. (α + β)v = αv + βv (Distributivity).
Hence, in scalar multiplication, the scalars come from the field ℱ; thus, the scalar multiplication depends upon the field ℱ. Therefore, with reference to a vector space V, we say that "V is a vector space over a field ℱ" (instead of simply saying that V is a vector space). There are various notations available to express this: Vℱ, V(ℱ), and so on. For our purpose, we shall work with ℱ = ℝ or ℂ.
When ℱ = ℝ, the vector space V over ℝ is called a real vector space and, when ℱ = ℂ, the vector space V over ℂ is called a complex vector space.
More often than not, the choice of ℱ is obvious from the context. It goes without explicit
mention. Let us now turn to some of the famous examples of vector spaces.
3.1 Examples of Vector Spaces:
(1) V₁ = ℱⁿ is a vector space over ℱ. To see this, we note that
ℱⁿ = {(x₁, x₂, …, xₙ) : xᵢ ∈ ℱ ∀ i = 1(1)n},
with addition defined as
x + y = (x₁, …, xₙ) + (y₁, …, yₙ) = (x₁ + y₁, …, xₙ + yₙ)
and scalar multiplication as
αx = α(x₁, …, xₙ) = (αx₁, …, αxₙ).
(2) V₂ = ℙₘ(ℱ) = {p : ℱ → ℱ : p(z) = a₀ + a₁z + ⋯ + aₘzᵐ and aᵢ ∈ ℱ ∀ i = 0(1)m}
is a vector space over ℱ under the usual addition and scalar multiplication. This is the set of all polynomials of degree at most m over ℱ. Here, addition is defined as
p(z) + q(z) = (a₀ + b₀) + (a₁ + b₁)z + ⋯ + (aₘ + bₘ)zᵐ,
and scalar multiplication is defined as
α·p(z) = α(a₀ + a₁z + ⋯ + aₘzᵐ) = αa₀ + (αa₁)z + ⋯ + (αaₘ)zᵐ.
(3) The set of all m × n matrices having entries from ℝ is a vector space over ℝ under the usual addition and scalar multiplication. In particular, the set of all square matrices having entries from ℝ is a vector space over ℝ.
(4) The set of all real-valued continuous functions on [a, b] is a vector space over ℝ under pointwise addition and scalar multiplication.
(5) The set {0} is a vector space over any field ℱ.
As we can see, vector spaces are quite large objects. It is, therefore, imperative that we
study what are known as vector-subspaces of a given vector space.
3.2 Vector Subspace:
Definition: Let V be a vector space over ℱ. A subset U of V is called a vector subspace of V if U itself is a vector space over ℱ under the same addition and scalar multiplication as on V.
Let us look at some vector subspaces.
3.2.1 Examples of Vector subspaces:
(1) {(x, 0) : x ∈ ℝ} is a real vector subspace of ℝ² over ℝ. This is the x-axis in ℝ².
(2) {(x₁, x₂, 0) : x₁, x₂ ∈ ℝ} is a vector subspace of ℝ³ over ℝ. This is the x-y plane in ℝ³.
(3) The set of diagonal matrices is a vector subspace of the vector space of all square matrices over ℝ (or over ℂ).
(4) Similarly, the sets of all upper-triangular, symmetric, and skew-symmetric matrices are subspaces of the vector space of all square matrices over ℝ.
I.Q.1
So, let us now see how to test whether a typical subset of a vector space is a subspace.
3.2.2 Criterion for a subset to be a vector subspace:
To check whether a subset U of a vector space V is a subspace of V, it is enough to confirm the following three conditions:
(1) existence of the additive identity in U: 0 ∈ U;
(2) U is closed under addition: u + v ∈ U ∀ u, v ∈ U; and
(3) U is closed under scalar multiplication: αu ∈ U ∀ α ∈ ℱ and u ∈ U.
These three conditions can be combined together.
Theorem 1: A subset U ⊆ V is a vector subspace of V over ℱ if, and only if, αu + v ∈ U ∀ α ∈ ℱ and u, v ∈ U.
Proof: If U is a vector subspace of V over ℱ, then there is nothing to prove, because U, itself being a vector space over ℱ, is closed under addition and scalar multiplication over ℱ.
Conversely, let αu + v ∈ U ∀ α ∈ ℱ and u, v ∈ U. We prove that U satisfies the three conditions stated above. We see that
(1) is satisfied because 0 = (−1)v + v ∈ U;
(2) is satisfied because u + v = 1·u + v ∈ U; and
(3) is satisfied because αu = αu + 0 ∈ U ∀ α ∈ ℱ and u ∈ U.
(Since 0 ∈ U, by (1), we take v = 0 in the expression αu + v.)
The proof is now complete.
Hence, only a line that passes through the origin (0,0) is a vector subspace of ℝ2 , by virtue
of property (1). Similarly, only a plane that passes through the origin (0,0,0) is a vector
subspace of ℝ3 .
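Theorem 1's criterion can be spot-checked numerically. The sketch below (the helper names are my own; a finite sample can refute but never prove the "for all") contrasts a line through the origin with a shifted line in ℝ²:

```python
# Sketch: spot-checking Theorem 1's criterion (alpha*u + v in U) on two lines
# in R^2. A finite sample cannot prove "for all", but it exposes failures.

def in_line_through_origin(p):     # U1 = {(x, 2x)} -- passes through (0, 0)
    return p[1] == 2 * p[0]

def in_shifted_line(p):            # U2 = {(x, 2x + 1)} -- misses the origin
    return p[1] == 2 * p[0] + 1

def looks_like_subspace(contains, points, scalars):
    """Check alpha*u + v in U for sampled u, v in U and alpha in F."""
    return all(
        contains([a * u[0] + v[0], a * u[1] + v[1]])
        for u in points for v in points for a in scalars
    )

pts1 = [[1, 2], [-3, -6], [0, 0]]     # sample points on U1
pts2 = [[0, 1], [1, 3], [-1, -1]]     # sample points on U2
print(looks_like_subspace(in_line_through_origin, pts1, [0, 1, -2]))  # True
print(looks_like_subspace(in_shifted_line, pts2, [0, 1, -2]))         # False
```

The shifted line fails exactly as the text predicts: it does not contain the origin, so condition (1) cannot hold.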
• Let U₁ and U₂ be vector subspaces of the vector space V over a field ℱ. Then, U₁ ∩ U₂ is also a subspace of V over ℱ.
• However, U₁ ∪ U₂ need not be a vector subspace of the vector space V.
I.Q.2
4. Linear Combination and Span:
Definition of Linear Combination of a set of vectors: Let V be a vector space over ℱ. Let v₁, …, vₘ ∈ V be arbitrary. A linear combination of the vectors v₁, …, vₘ in V is a vector of the form
α₁v₁ + ⋯ + αₘvₘ,
where αᵢ ∈ ℱ ∀ i = 1(1)m.
Example 1: Show that in ℝ³, the vector (1, 2, 3) is a linear combination of {(1,0,0), (0,1,0), (0,0,1)}.
Solution: This follows by observing that
(1, 2, 3) = 1(1,0,0) + 2(0,1,0) + 3(0,0,1).
I.Q.3
Definition of span{v₁, …, vₘ}: The set of all linear combinations of the vectors v₁, …, vₘ is called the span of the set β = {v₁, …, vₘ} and is denoted by span{v₁, …, vₘ}, span(β), or simply L(β).
Theorem 2: The set span{v₁, …, vₘ} is a vector subspace of V over ℱ for any set β = {v₁, …, vₘ} ⊆ V.
Proof: Let u, v ∈ span{v₁, …, vₘ} = L(β). Then ∃ αᵢ, βᵢ ∈ ℱ, i = 1(1)m, such that
u = α₁v₁ + ⋯ + αₘvₘ
and
v = β₁v₁ + ⋯ + βₘvₘ.
Hence, ∀ α ∈ ℱ, we have
αu + v ∈ L(β)
because
αu + v = α(α₁v₁ + ⋯ + αₘvₘ) + (β₁v₁ + ⋯ + βₘvₘ)
= γ₁v₁ + ⋯ + γₘvₘ ∈ L(β),
where γᵢ = ααᵢ + βᵢ ∈ ℱ ∀ i = 1(1)m.
In view of Theorem 1, span{v₁, …, vₘ} = L(β) is a subspace of V.
The proof is now complete.
Let β₁ = {e₁ = (1,0), e₂ = (0,1)}. Then, L(β₁) = ℝ².
Similarly, let β₂ = {e₁ = (1,0,0), e₂ = (0,1,0), e₃ = (0,0,1)}. Then, L(β₂) = ℝ³.
• Let A ⊆ B ⊆ V, where V is a vector space over a field ℱ. Then, L(A) ⊆ L(B).
• Let A ⊆ V, where V is a vector space over a field ℱ. Then, L(L(A)) = L(A).
I.Q.4
I.Q.5
5. Linear Independence:
Definition of a linearly independent subset of a vector space: A set {v₁, …, vₘ} of vectors in a vector space V is said to be linearly independent if the only scalars αᵢ ∈ ℱ, i = 1(1)m, satisfying the relation
α₁v₁ + ⋯ + αₘvₘ = 0
are
α₁ = α₂ = ⋯ = αₘ = 0.
Example 2: Show that in ℝ³, the set {e₁ = (1,0,0), e₂ = (0,1,0), e₃ = (0,0,1)} is linearly independent.
Solution: This follows by observing the following implications:
α₁e₁ + α₂e₂ + α₃e₃ = 0
⟺ α₁(1,0,0) + α₂(0,1,0) + α₃(0,0,1) = 0
⟺ (α₁, α₂, α₃) = (0,0,0)
⟺ α₁ = α₂ = α₃ = 0.
I.Q.6
Definition of a linearly dependent subset of a vector space: A set {v₁, …, vₘ} of vectors in V is said to be linearly dependent if ∃ scalars α₁, …, αₘ ∈ ℱ, not all zero, such that
α₁v₁ + ⋯ + αₘvₘ = 0.
Hence, at least one scalar has to be non-zero, for otherwise, all the scalars being zero would simply witness the linear independence of the vectors v₁, …, vₘ.
Example 3: Show that the set {v₁ = (−1,1,1), v₂ = (1,2,4), v₃ = (0,−3,−5)} is linearly dependent in ℝ³.
Solution: This follows by seeing that
1·v₁ + 1·v₂ + 1·v₃ = v₁ + v₂ + v₃
= (−1,1,1) + (1,2,4) + (0,−3,−5) = (0,0,0) = 0,
a vanishing linear combination with non-zero scalars.
Example 4: Test for linear independence the set {v₁ = (−1,0), v₂ = (0,1), v₃ = (−2,3)} in ℝ².
Solution: The set is linearly dependent because
v₃ = 2v₁ + 3v₂.
• Any subset of V which contains the zero vector is linearly dependent.
• In other words, no linearly independent set can contain the zero vector.
• A linearly independent set necessarily consists of distinct vectors.
• Any subset of a linearly independent set is linearly independent.
• Any superset of a linearly dependent set is linearly dependent.
• A set β = {v₁, …, vₘ} of m vectors in ℝᵐ is linearly independent over ℝ if, and only if, Det[v₁, v₂, …, vₘ] ≠ 0, where [v₁, v₂, …, vₘ] is the m × m matrix whose columns are the vectors vᵢ.
• Equivalently, a set β = {v₁, …, vₘ} of m vectors in ℝᵐ is linearly dependent over ℝ if, and only if, Det[v₁, v₂, …, vₘ] = 0.
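The determinant test in the last two bullets is mechanical to apply. A small sketch (the det and independent helpers are my own; since Det(Aᵗ) = Det(A), the vectors may equally well be listed as rows):

```python
# Sketch: the determinant test for m vectors in R^m. Since det(A) = det(A^t),
# we may list the vectors as the rows of the matrix.

def det(M):
    """Determinant by cofactor expansion along the first row (small m only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def independent(vectors):
    return det(vectors) != 0

# Example 3: dependent in R^3 (indeed v1 + v2 + v3 = 0, so the determinant vanishes)
print(independent([[-1, 1, 1], [1, 2, 4], [0, -3, -5]]))   # False
# The test needs exactly m vectors in R^m; for any two vectors of Example 4:
print(independent([[-1, 0], [0, 1]]))                      # True
```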
I.Q.7
I.Q.8
6. Basis of a vector space:
The concepts of span of a set and linear independence are combined together to form a
basis of a vector space.
Definition of Basis of a vector space: A set B = {e₁, …, eₘ} in a vector space V is called a basis of V if every vector v in V can be written uniquely as
v = α₁e₁ + ⋯ + αₘeₘ,
where αᵢ ∈ ℱ ∀ i = 1(1)m. By uniqueness is meant that if
v = α₁e₁ + ⋯ + αₘeₘ
and
v = β₁e₁ + ⋯ + βₘeₘ,
then
αᵢ = βᵢ ∀ i = 1(1)m.
As an illustration, we have a basis for ℝ3 over ℝ in example 5.
Example 5: Show that β = {(1,0,0), (0,1,0), (0,0,1)} is a basis for ℝ³ over ℝ.
Solution: This is because for any v = (x, y, z) ∈ ℝ³, we have
v = (x, y, z)
= x(1,0,0) + y(0,1,0) + z(0,0,1)
= xe₁ + ye₂ + ze₃,
and these x, y, z ∈ ℝ are unique. This basis is called the standard or canonical basis for ℝ³ over ℝ.
Example 6: Show that {1, i} is a basis for ℂ over ℝ.
Solution: We know that every complex number z can be uniquely written as
z = x + iy,
which can be interpreted as the linear combination
z = x(1) + y(i),
where x, y ∈ ℝ. Hence, {1, i} is a basis for ℂ over ℝ.
Theorem 3: Any basis β = {v₁, …, vₘ} of V is a linearly independent set.
Proof: We prove this by contradiction. Assuming β to be linearly dependent, we see that ∃ scalars α₁, …, αₘ, not all zero, such that
α₁v₁ + ⋯ + αₘvₘ = 0.
We also know that
0 = 0·v₁ + ⋯ + 0·vₘ.
Since β is a basis of V, the linear combination representing the zero vector is unique. Hence, we have
αᵢ = 0 ∀ i = 1(1)m,
contradicting the assumption that not all the αᵢ are zero. Therefore, our original assumption that β is linearly dependent must be wrong. Hence β is linearly independent.
The proof is now complete.
I.Q.9
• If V is a vector space having a basis of m elements, then any subset of V having m + 1 elements is linearly dependent.
• Hence, if any subset W of V has cardinality strictly greater than that of a basis of V, then W is a linearly dependent subset of V.
• Thus, a basis β of a vector space V is a largest subset of V which is linearly independent and spans V over ℱ.
• In other words, a basis of a vector space V is a maximal linearly independent subset of V which spans V over ℱ.
I.Q.10
In view of the above remarks, it is clear that any set of four distinct vectors in ℝ³ is linearly dependent over ℝ, because in ℝ³ we have the standard basis {e₁, e₂, e₃} of only three vectors. In general, any set of n + 1 vectors in ℝⁿ is linearly dependent over ℝ.
A vector space can have many bases.
Example 7: Show that β₁ = {(0,1), (1,0)} and β₂ = {(1,2), (4,3)} are bases for ℝ² over ℝ.
Solution: There are two facts to be shown: that the sets β₁ and β₂ are linearly independent, and that each spans the plane ℝ² over ℝ. We first show that both sets are linearly independent in ℝ² over ℝ. In view of the remarks following Example 4, it is enough to verify the determinant condition.
For β₁, we have
Det[v₁, v₂] = | 0 1 |
              | 1 0 | = −1 ≠ 0,
so that the set β₁ is linearly independent over ℝ.
Similarly, for β₂, we have
Det[v₁, v₂] = | 1 4 |
              | 2 3 | = 3 − 8 = −5 ≠ 0,
so that the set β₂ is linearly independent over ℝ.
Next, we show that L(β₁) = L(β₂) = ℝ². So, let (x, y) ∈ ℝ² be arbitrary. We have to prove that (x, y) ∈ L(β₁) and (x, y) ∈ L(β₂).
We observe that
(x, y) ∈ L(β₁)
because
(x, y) = x(1,0) + y(0,1),
so that (x, y) is a linear combination of the vectors (1,0) and (0,1). This proves that β₁ is a basis for ℝ² over ℝ.
Next, we see that
(x, y) ∈ L(β₂)
if, and only if, ∃ some r, s ∈ ℝ such that
(x, y) = r(1,2) + s(4,3),
that is,
r + 4s = x
2r + 3s = y.
Solving this system, we obtain
r = (4y − 3x)/5 and s = (2x − y)/5.
Clearly, r, s ∈ ℝ, and hence
(x, y) ∈ L(β₂),
that is, (x, y) is a linear combination of the vectors (1,2) and (4,3). This proves that β₂ is a basis for ℝ² over ℝ.
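The coordinate formulas r = (4y − 3x)/5 and s = (2x − y)/5 can be re-derived and checked in code. A sketch using Cramer's rule with exact fractions (the function name coords_beta2 is illustrative):

```python
# Sketch: recovering the coordinates (r, s) of (x, y) with respect to
# beta2 = {(1, 2), (4, 3)}, and checking them against the closed formulas.
from fractions import Fraction

def coords_beta2(x, y):
    """Solve r(1,2) + s(4,3) = (x,y) by Cramer's rule; det = 1*3 - 4*2 = -5."""
    d = Fraction(1 * 3 - 4 * 2)       # determinant of the coefficient matrix
    r = Fraction(x * 3 - 4 * y) / d   # first column replaced by (x, y)
    s = Fraction(1 * y - 2 * x) / d   # second column replaced by (x, y)
    return r, s

x, y = 7, 9
r, s = coords_beta2(x, y)
assert (r, s) == (Fraction(4 * y - 3 * x, 5), Fraction(2 * x - y, 5))
assert (r * 1 + s * 4, r * 2 + s * 3) == (x, y)   # r(1,2) + s(4,3) = (x, y)
print(r, s)   # 3 1
```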
I.Q.11
7. Finite Dimensional Vector Space:
Looking at ℝⁿ, we see that we have the standard basis {e₁, …, eₙ}, which is finite. The finiteness of the basis is a very important condition. This condition is absent for the vector space of all real-valued continuous functions on the interval [a, b]. This difference is so profound that we have separate disciplines: finite-dimensional analysis and infinite-dimensional analysis. What we are in the process of learning is finite-dimensional linear algebra. The foundation for this subject is the fact that finite dimensional vector spaces exist, and they exist in abundance. The phrase "finite dimensional vector space" is often abbreviated as F.D.V.S.
Definition of an F.D.V.S.: A vector space V over a field ℱ is called finite dimensional if ∃ a finite subset S of V such that L(S) = V.
The fundamental theorem in the study of finite-dimensional linear algebra is the fact that
every finite dimensional vector space has a basis. We state this fact as a theorem.
Theorem 4: Every finite dimensional vector space 𝑉 over a field ℱ has a basis.
Proof: We give a constructive proof. We know that a basis is a linearly independent subset of a vector space. Therefore, it cannot contain the zero vector. So, we start with any non-zero vector in V: let v₁ ∈ V \ {0}.
Now, we pick a vector which does not belong to span{v₁}: let v₂ ∈ V \ span{v₁}.
Next, let v₃ ∈ V \ span{v₁, v₂}.
Next, let v₄ ∈ V \ span{v₁, v₂, v₃}.
Continuing so, we get vₙ ∈ V \ span{v₁, …, vₙ₋₁}.
We shall show that the set
β = {v₁, …, vₙ}
is a basis of V over the field ℱ.
Firstly, we show that β is linearly independent over the field ℱ. We proceed by contradiction. So, let, if possible, ∃ γᵢ ∈ ℱ, i = 1(1)n, not all zero, such that
γ₁v₁ + γ₂v₂ + ⋯ + γₙvₙ = 0.
Since not all scalars γᵢ are zero, we may assume that γₙ ≠ 0. Hence, we obtain
vₙ = −(1/γₙ)(γ₁v₁ + γ₂v₂ + ⋯ + γₙ₋₁vₙ₋₁),
which means that vₙ ∈ span{v₁, v₂, …, vₙ₋₁}. But this contradicts our construction of the set β. Hence, our assumption that β is linearly dependent over the field ℱ must be wrong. Therefore, β is linearly independent over the field ℱ.
Now, what remains to be proven is that the set 𝛽 spans 𝑉.
Since V is a finite dimensional vector space, ∃ a finite set β′ = {w₁, …, wₙ} such that L(β′) = V. That is, every vector in V is a linear combination of the elements of β′. In particular, every element of β is a linear combination of the elements of β′. That is, ∃ αᵢⱼ ∈ ℱ ∀ i, j = 1(1)n such that
α₁₁w₁ + α₁₂w₂ + ⋯ + α₁ₙwₙ = v₁
α₂₁w₁ + α₂₂w₂ + ⋯ + α₂ₙwₙ = v₂
⋮
αₙ₁w₁ + αₙ₂w₂ + ⋯ + αₙₙwₙ = vₙ.
This system in matrix form can be written as
Aw = v,
where
A = [αᵢⱼ]; w = [w₁, w₂, …, wₙ]ᵗ; v = [v₁, v₂, …, vₙ]ᵗ.
In this system of linear equations, the wᵢ's can be viewed as unknowns. Since this system is solvable (because the wᵢ's exist), exactly one of the following will hold:
(a) Det(A) = 0 and v = 0; or
(b) Det(A) ≠ 0 and v ≠ 0.
Obviously, condition (a) will not hold, as v ≠ 0 because β is a linearly independent set (and no linearly independent set can contain the zero vector). Hence, condition (b) is satisfied. Therefore, the matrix inverse A⁻¹ exists and we have
w = A⁻¹v.
But this implies that every element of β′ is a linear combination of the elements of β. That is,
β′ ⊆ L(β),
so that
L(β′) ⊆ L(L(β)). (Remark in Section 4, taking A = β′ and B = L(β).)
But we know that
L(L(β)) = L(β) ∀ β ⊆ V.
Hence, we get
L(β′) ⊆ L(β).
Since L(β′) = V, we have
V ⊆ L(β),
which gives
V = L(β).
That is, 𝛽 spans the vector space 𝑉 over ℱ. Thus, 𝛽 is a basis of the vector space 𝑉 over ℱ.
This completes the proof.
I.Q.12
Now, we know that a vector space can have more than one basis. So, the question arises: is it possible for two bases of a finite dimensional vector space to have distinct cardinalities? The next important theorem answers this question in the negative.
Theorem 5: Let 𝑉 be a finite dimensional vector space over a field ℱ such that one basis
has 𝑚 elements and another has 𝑛 elements. Then, 𝑚 = 𝑛.
I.Q.13
This theorem paves the way for the definition of dimension.
Definition of Dimension of a Finite Dimensional Vector Space: The dimension of a finite dimensional vector space V over a field ℱ is the number of elements in a basis of V over the field ℱ. It is denoted by dimℱ(V). If dimℱ(V) = n, we say that V is an n-dimensional vector space over the field ℱ.
As a corollary to Theorem 5, we have another theorem.
Theorem 6: Let W be a subspace of an n-dimensional vector space V over a field ℱ. Then, dimℱ(W) ≤ n. In particular, if dimℱ(W) = n, then W = V.
Proof: We first observe that W is a subspace of a finite-dimensional vector space V over a field ℱ. Therefore, W itself is finite dimensional over ℱ. Since dimℱ(V) = n, any set of n + 1 or more vectors in V is linearly dependent. But any basis of W consists of linearly independent vectors. Therefore, no basis of W can contain more than n elements. Hence, dimℱ(W) ≤ n.
In particular, if β′ = {w₁, …, wₙ} is a basis of W, then, because it is a linearly independent set with n elements, it is also a basis of V. Thus, we observe that
W = L(β′) = V,
so that W = V.
The proof is now complete.
I.Q.14
We now look at some examples wherein we are to find a basis and dimension of some
subspaces of ℝ𝑛 .
Example 8: Find a basis and the dimension of the subspace W of ℝ³ where W = {(a, b, c) : a + b + c = 0}.
Solution: We first note that W ≠ ℝ³ because, for example, (1, 2, 3) does not belong to W. Thus, dimℱ(W) ≠ 3, that is, dimℱ(W) < 3.
Further, we observe that u₁ = (1, 0, −1) and u₂ = (0, 1, −1) are two linearly independent vectors in W. This is because
α₁u₁ + α₂u₂ = α₁(1, 0, −1) + α₂(0, 1, −1)
= (α₁, α₂, −(α₁ + α₂))
= (0, 0, 0)
⟺ α₁ = α₂ = 0.
Moreover, any (a, b, c) ∈ W has c = −a − b, so that (a, b, c) = a·u₁ + b·u₂, and {u₁, u₂} spans W. Thus, {u₁, u₂} forms a basis of W and, therefore, dimℱ(W) = 2.
Example 9: Find a basis and the dimension of the subspace W of ℝ³ where W = {(a, b, c) : a = b = c}.
Solution: The vector u = (1, 1, 1) ∈ W. Any vector w ∈ W has the form
w = (k, k, k).
Hence,
w = ku.
Thus, {u} spans W, and dimℱ(W) = 1.
I.Q.15
Given a basis for a subspace, we can generate a basis for the parent vector space. This is illustrated in the exercises. The algorithm of echelon forms is used to generate the desired basis.
8. Rank and System of Linear Equations:
In this section, ℱ = ℝ. We are well aware of the fact that a general system of linear equations over ℝ is given by
α₁₁x₁ + α₁₂x₂ + ⋯ + α₁ₙxₙ = y₁
α₂₁x₁ + α₂₂x₂ + ⋯ + α₂ₙxₙ = y₂
⋮
αₘ₁x₁ + αₘ₂x₂ + ⋯ + αₘₙxₙ = yₘ,
which in matrix form can be written as
Ax = y,
where A = [αᵢⱼ]; x = [x₁, x₂, …, xₙ]ᵗ; y = [y₁, y₂, …, yₘ]ᵗ.
The solvability of this system can be easily expressed in terms of the rank of a matrix. Let us recollect the definition of the rank of a rectangular matrix A.
Definition of Rank of a Matrix: The rank p of an m × n matrix A is the size of the largest square sub-matrix of A with non-zero determinant; that is, A has a p × p sub-matrix with non-zero determinant, and every larger square sub-matrix of A has determinant zero.
To compute the rank of a matrix, we first reduce the matrix to echelon form using what are known as elementary operations. Let us recall these notions once more.
Definition of Elementary Row Operations: The following operations on a matrix,
(1) multiplying row i by a non-zero scalar α, denoted by Eⁱ(α),
(2) adding β times row j to row i, denoted by Eⁱʲ(β) (here β is any scalar), and
(3) interchanging rows i and j, denoted by Eⁱʲ (here i ≠ j),
are called elementary row operations of types 1, 2 and 3 respectively; and the matrices Eⁱ(α), Eⁱʲ(β) and Eⁱʲ are called elementary row matrices of the same type. Elementary column operations and the corresponding elementary matrices are defined analogously.
• All elementary matrices are square matrices.
• Each elementary matrix is non-singular.
• Only an invertible matrix can be written as a product of elementary matrices.
Performing an elementary row (resp. column) operation is the same as pre-multiplying (resp. post-multiplying) by the corresponding elementary row (resp. column) matrix. For example, the matrix product E²³(−3)A is the matrix obtained from A by subtracting 3 times row 3 from row 2. Here, E²³(−3) is
          | 1 0  0 |
E²³(−3) = | 0 1 −3 |
          | 0 0  1 |.
I.Q.16
Elementary row operations reduce the matrix to the row echelon form and the reduced row echelon form. Specifically, a matrix is in row echelon form if it satisfies the following conditions.
• All nonzero rows (rows with at least one nonzero element) are above any rows of all zeroes (all zero rows, if any, belong at the bottom of the matrix).
• The leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it (some texts add the condition that the leading coefficient must be 1).
• All entries in a column below a leading entry are zeroes (implied by the first two criteria).
In addition to this, a matrix is in reduced row echelon form (also called row canonical form) if it satisfies the following conditions.
• It is in row echelon form.
• Every leading coefficient is 1 and is the only nonzero entry in its column.
A matrix is in column echelon form if its transpose is in row echelon form.
1 0 2


Example 10: Reduce the matrix 𝐴 = 2 1 3 to row as well as column-echelon forms.


4 1 8


Solution: Reduction of 𝐴 to the row-echleon form is as under.
1 0 2


𝐴 = 2 1 3


4 1 8


1 0 2 


~ 0 1 1 (Pre-multiplying by 𝐸 23 (−2) or simply by: 𝑅2 → 𝑅2 − 2𝑅3 )


0 1 0 


1 0 2 


~ 0 1 1 (Pre-multiplying by 𝐸 23 (1) or simply by: 𝑅3 → 𝑅2 + 𝑅3 ).


 0 0 1


This is the row-echleon form of 𝐴.
Now, the column-echelon form of 𝐴 is obvious; it is the transpose of the row-echleon form
of 𝐴 and is
1 0 0 


 0 1 0  .
 2 1 1


I.Q.17
In the terminology of vector spaces, we have two concepts: row-rank and column-rank. The term row-rank (resp. column-rank) refers to the dimension of the row-space (resp. column-space) of a matrix. The row-space (resp. column-space) of a matrix is the vector space spanned by the rows (resp. columns) of the matrix. We now formally define these terms.
Let A be an m × n matrix. We denote the rows of A by {A₁, A₂, …, Aₘ} and the columns of A by {A¹, A², …, Aⁿ}.
Definition of Row-Space: The vector space spanned by the rows {A₁, …, Aₘ} of A is called the row-space of A.
Definition of Row-Rank: The dimension of the row-space of a matrix A is called the row-rank of A. Hence, the row-rank of a matrix is the maximum number of linearly independent rows of A. Therefore, the row-rank of the matrix A equals the number of non-zero rows in the row echelon form of A.
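The rule "row-rank = number of non-zero rows in the row echelon form" translates directly into code. A sketch (the function name row_rank is my own; exact Fraction arithmetic avoids floating-point round-off):

```python
# Sketch: row-rank by Gaussian elimination to row echelon form.
# Exact Fraction arithmetic avoids floating-point round-off.
from fractions import Fraction

def row_rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    rank, col, rows = 0, 0, len(M)
    ncols = len(M[0])
    while rank < rows and col < ncols:
        pivot = next((r for r in range(rank, rows) if M[r][col] != 0), None)
        if pivot is None:
            col += 1                              # no pivot in this column
            continue
        M[rank], M[pivot] = M[pivot], M[rank]     # type-3 row swap
        for r in range(rank + 1, rows):           # type-2 row operations
            factor = M[r][col] / M[rank][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
        rank += 1
        col += 1
    return rank                                   # = number of non-zero rows

A = [[1, 0, 2], [2, 1, 3], [4, 1, 8]]             # the matrix of Example 10
print(row_rank(A))   # 3
```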
Definition of Column-Space: The vector space spanned by the columns {A¹, …, Aⁿ} of A is called the column-space of A.
Definition of Column-Rank: The dimension of the column-space of a matrix A is called the column-rank of A. Hence, the column-rank of a matrix is the maximum number of linearly independent columns of A. Therefore, the column-rank of the matrix A equals the number of non-zero columns in the column echelon form of A.
• Column-rank(A) = Row-rank(Aᵗ).
I.Q.18
To compute the row-space (resp. column-space), we reduce the matrix A (resp. Aᵗ) to its row echelon form. The non-zero rows of the resulting matrix form a basis for the row-space (resp. column-space) of the original matrix A.
I.Q.19
Now, a fact of fundamental importance is that the row-rank and the column-rank of a matrix are the same. This is why we simply say "rank of a matrix" instead of row-rank or column-rank.
Theorem 7: For any m × n matrix A, the row-rank of A equals the column-rank of A.
Proof: One can apply elementary row operations and elementary column operations to bring the matrix A to a matrix that is in both row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices P (of order m) and Q (of order n), which are products of elementary matrices, such that
PAQ = E = | I(k×k)        O(k×(n−k))     |
          | O((m−k)×k)    O((m−k)×(n−k)) |.
As P and Q are invertible, the maximum number of linearly independent rows in A is equal to the maximum number of linearly independent rows in E; that is, the row-rank of A is equal to the row-rank of E. Similarly, the column-rank of A is equal to the column-rank of E. Now, it is evident that the row-rank and the column-rank of E are identical (both equal k). Hence, the same holds for A.
This completes the proof.
• A square matrix of order n is invertible if, and only if, its rank is n.
• Equivalently, the determinant of a square matrix of order n is non-zero if, and only if, the rank of the matrix is n.
I.Q.20
After this digression, let us now come back to the question of solvability of the linear system Ax = y. A linear system of equations in n unknowns has a unique solution if the coefficient matrix and the augmented matrix have the same rank n, and infinitely many solutions if that common rank is less than n. The system has no solution if those two matrices have different ranks. We state this precisely in the following more general theorem.
Theorem 8 (Fundamental Theorem for Linear Systems): Let the linear system 𝐴𝑥 = 𝑏, which in an explicit form may be written as
𝛼11 𝑥1 + 𝛼12 𝑥2 + ⋯ + 𝛼1𝑛 𝑥𝑛 = 𝑏1
𝛼21 𝑥1 + 𝛼22 𝑥2 + ⋯ + 𝛼2𝑛 𝑥𝑛 = 𝑏2
⋮
𝛼𝑚1 𝑥1 + 𝛼𝑚2 𝑥2 + ⋯ + 𝛼𝑚𝑛 𝑥𝑛 = 𝑏𝑚
be given. Then,
(a) The linear system of 𝑚 equations in 𝑛 unknowns is consistent, that is, has a solution, if, and only if, the coefficient matrix 𝐴 and the augmented matrix 𝐵 = [𝐴|𝑏] have the same rank. Otherwise, the system is said to be inconsistent.
(b) The system has precisely one solution if, and only if, this common rank 𝑟 of 𝐴 and 𝐵 equals 𝑛.
(c) If this common rank 𝑟 is less than 𝑛, the system has infinitely many solutions. All of these solutions are obtained by expressing 𝑟 suitable unknowns in terms of the remaining 𝑛 − 𝑟 unknowns, to which arbitrary values can be assigned. These 𝑛 − 𝑟 unknowns are often referred to as free variables.
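Theorem 8 translates directly into a small rank-based test. The sketch below (plain Python; the helper names `rank` and `classify` are ours, not from the text) classifies a system 𝐴𝑥 = 𝑏 by comparing the ranks of 𝐴 and [𝐴|𝑏] with the number of unknowns, and is checked against the three systems of Example 11 below.

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def classify(A, b):
    """Theorem 8: compare rank(A), rank([A|b]) and the number of unknowns n."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    n = len(A[0])
    if rank(A) != rank(aug):
        return "inconsistent"
    return "unique solution" if rank(A) == n else "infinitely many solutions"

# The three systems of Example 11:
assert classify([[3, -2, 8], [-2, 2, 1], [1, 2, -3]], [9, 3, 8]) == "unique solution"
assert classify([[1, 1, 1], [3, -1, -1], [1, 5, 5]], [1, 4, -1]) == "inconsistent"
assert classify([[1, 2, -3], [3, -1, -2], [2, 3, -5]], [-2, 1, -3]) == "infinitely many solutions"
```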
Let us look at a concrete example now.
Example 11: Check for solvability of the systems
(i)
3𝑎 − 2𝑏 + 8𝑐 = 9
−2𝑎 + 2𝑏 + 𝑐 = 3
𝑎 + 2𝑏 − 3𝑐 = 8
(ii)
𝑎+ 𝑏 + 𝑐= 1
3𝑎 − 𝑏 − 𝑐 = 4
𝑎 + 5𝑏 + 5𝑐 = −1
(iii)
𝑎 + 2𝑏 − 3𝑐 = −2
3𝑎 − 𝑏 − 2𝑐 = 1
2𝑎 + 3𝑏 − 5𝑐 = −3
Solution: (i) Here, the augmented matrix is
 3 2 8 9 


𝐴 𝑏 = 2 2
1 3

 1 2 3 8 


and the row-echelon form of this matrix is
[𝐴|𝑏] ~
[ 1  0   0  |  3   ]
[ 0  1  9.5 | 13.5 ].
[ 0  0   1  |  1   ]
It is easily seen that both the coefficient matrix 𝐴 and the augmented
matrix 𝐴 𝑏 have the same rank 3. Hence, the given system is solvable
and has a unique solution. This is because the common rank equals the
number of unknowns.
(ii) Once again, the augmented matrix is
1 1 1 1 


𝐴 𝑏 = 3 1 1 4


 1 5 5 1


which in row-echelon form corresponds to
 1 1 1 1


 0 4 4 1 .
 0 0 0 1


Here, we see that
𝑟𝑎𝑛𝑘(𝐴) = 2 but 𝑟𝑎𝑛𝑘([𝐴|𝑏]) = 3,
and therefore, the system is inconsistent.
(iii) Here, the augmented matrix is
 1 2 3 2 


𝐴 𝑏 = 3 1 2 1


 2 3 5 3 


which in row-reduced form is
 1 0 1 0 


 0 1 1 1
0 0 0 0 


which gives us
𝑟𝑎𝑛𝑘(𝐴) = 2 = 𝑟𝑎𝑛𝑘([𝐴|𝑏]),
and therefore, the system is consistent. This system has infinitely many solutions because the common value of the rank is less than the number of unknowns. In fact, in view of the above Theorem 8, there is one free variable.
The free variables correspond to the zero rows of the coefficient matrix in echelon form. In
fact, we have the following definition.
Definition of nullity of a matrix: The number of zero rows in the echelon form of a matrix is called the nullity of the matrix. (Note that this convention counts zero rows; for an 𝑚 × 𝑛 matrix it equals 𝑚 − 𝑟𝑎𝑛𝑘(𝐴).)

Let 𝐴 be an 𝑚 × 𝑛 matrix. We have
𝑅𝑎𝑛𝑘(𝐴) + 𝑁𝑢𝑙𝑙𝑖𝑡𝑦(𝐴) = 𝑚 = number of rows of 𝐴.
 1 2 1


Example 12: Find the rank and nullity of the matrix 𝐴 = 1 3 1 .


3 8 4 


Solution: We first reduce 𝐴 to row-echelon form. We have
𝐴 =
[ 1 2 −1 ]
[ 1 3  1 ]
[ 3 8  4 ]
 1 2 1


~ 0 1 2 (By: 𝑅2 → 𝑅2 − 𝑅1 and 𝑅3 → 𝑅3 − 3𝑅1 )


0 2 7 


 1 2 1


~ 0 1 2 (By: 𝑅3 → 𝑅3 − 2𝑅2 ). This is the row-echelon form of 𝐴.


0 0 3 


Hence,
𝑅𝑎𝑛𝑘 𝐴 = 3 and 𝑁𝑢𝑙𝑙𝑖𝑡𝑦 𝐴 = 0.
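The rank–nullity bookkeeping above can be verified mechanically. A small sketch (plain Python; the helper names are ours, not from the text) computes the rank of the matrix of Example 12 exactly and checks this text's convention that rank plus nullity equals the number of rows, where nullity counts the zero rows of the echelon form.

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def nullity(rows):
    """Nullity in this text's sense: zero rows of the echelon form."""
    return len(rows) - rank(rows)

A = [[1, 2, -1], [1, 3, 1], [3, 8, 4]]  # the matrix of Example 12
assert rank(A) == 3 and nullity(A) == 0
assert rank(A) + nullity(A) == len(A)   # rank + nullity = number of rows
```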
When the constant vector 𝑏 is the zero vector, the system of linear equations is called a homogeneous linear system. In this case, the trivial zero solution always exists, irrespective of the rank of the coefficient matrix. More importantly, the set of all the solutions forms a vector space. All this is summarized in Theorem 9.
Theorem 9: The homogeneous linear system
𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 0
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 0
⋮
𝑎𝑚1 𝑥1 + 𝑎𝑚2 𝑥2 + ⋯ + 𝑎𝑚𝑛 𝑥𝑛 = 0
always has the trivial solution 𝑥1 = 0, … , 𝑥𝑛 = 0. Nontrivial solutions exist if and only if the
rank of the coefficient matrix is strictly less than 𝑛. Further, if 𝑟𝑎𝑛𝑘(𝐴) = 𝑟 < 𝑛, then these
solutions, together with the trivial solution 𝑥 = 0, form a vector space of dimension 𝑛 − 𝑟,
and this vector space is called the solution space of this homogeneous linear system. In
particular, if 𝑋(1) and 𝑋(2) are the solution vectors of this homogeneous linear system, then
𝑥 = 𝑘1 𝑋(1) + 𝑘2 𝑋(2) with any scalars 𝑘1 and 𝑘2 is a solution vector of this homogeneous linear
system. This does not hold for non-homogeneous systems. Also, the term solution space is
used for homogeneous systems only.
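The solution space of Theorem 9 can be computed algorithmically: bring the coefficient matrix to reduced echelon form, then produce one basis vector per free variable. The sketch below (plain Python; the helper names `rref` and `nullspace_basis` are ours) does exactly this and is checked against the coefficient matrix of Example 13 below.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row-echelon form together with the list of pivot columns."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        p = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        m[r] = [a / m[r][c] for a in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def nullspace_basis(rows):
    """One basis vector of the solution space per free variable
    (Theorem 9: the dimension is n - rank)."""
    m, pivots = rref(rows)
    n = len(rows[0])
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -m[r][free]
        basis.append(v)
    return basis

# Coefficient matrix of Example 13 below:
A = [[1, -1, 0, 1, 2],
     [-2, 2, -1, -4, -3],
     [1, -1, 1, 3, 1],
     [-1, 1, 1, 1, -3]]
B = nullspace_basis(A)
assert len(B) == 3  # n - rank = 5 - 2
for v in B:         # each basis vector solves Ax = 0
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```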
Example 13: Find the complete solution set for the homogeneous linear system
𝑎 − 𝑏 + 𝑑 + 2𝑒 = 0
−2𝑎 + 2𝑏 − 𝑐 − 4𝑑 − 3𝑒 = 0
𝑎 − 𝑏 + 𝑐 + 3𝑑 + 𝑒 = 0
−𝑎 + 𝑏 + 𝑐 + 𝑑 − 3𝑒 = 0.
Solution: Here, the coefficient matrix 𝐴 is
𝐴 =
[  1 −1  0  1  2 ]
[ −2  2 −1 −4 −3 ].
[  1 −1  1  3  1 ]
[ −1  1  1  1 −3 ]
We first reduce 𝐴 to row-echelon form. We have
𝐴 =
[  1 −1  0  1  2 ]
[ −2  2 −1 −4 −3 ]
[  1 −1  1  3  1 ]
[ −1  1  1  1 −3 ]
~
[ 1 −1  0  1  2 ]
[ 0  0 −1 −2  1 ]   (By: 𝑅2 → 𝑅2 + 2𝑅1, 𝑅3 → 𝑅3 − 𝑅1 and 𝑅4 → 𝑅4 + 𝑅1)
[ 0  0  1  2 −1 ]
[ 0  0  1  2 −1 ]
~
[ 1 −1  0  1  2 ]
[ 0  0 −1 −2  1 ]   (By: 𝑅3 → 𝑅3 + 𝑅2 and 𝑅4 → 𝑅4 + 𝑅2)
[ 0  0  0  0  0 ]
[ 0  0  0  0  0 ]
~
[ 1 −1  0  1  2 ]
[ 0  0  1  2 −1 ]   (By: 𝑅2 → (−1)𝑅2)
[ 0  0  0  0  0 ]
[ 0  0  0  0  0 ]
This is the row-echelon form of 𝐴. Thus,
𝑟𝑎𝑛𝑘 𝐴 = 2. (Because 𝑛𝑢𝑙𝑙𝑖𝑡𝑦 𝐴 = 2 and 𝑟𝑎𝑛𝑘 𝐴 + 𝑛𝑢𝑙𝑙𝑖𝑡𝑦 𝐴 = 4.)
Now, because the number of variables is 5, by the above theorem 9, it follows that the
dimension of the vector-space of the solutions is 3. That is, the solution space is spanned by
three linearly independent solutions. Hence, to determine the full solution set, it suffices to
find these three basic solutions. Now, using the row-echelon form of 𝐴, we obtain an
equivalent system as
𝑎 = 𝑏 − 𝑑 − 2𝑒
𝑐 = −2𝑑 + 𝑒.
The variables 𝑏, 𝑑 and 𝑒 are the free variables and therefore, can be assigned arbitrary
values. So, setting 𝑏 = 𝑘1 , 𝑑 = 𝑘2 and 𝑒 = 𝑘3 , we obtain
( 𝑎 )   ( 𝑘1 − 𝑘2 − 2𝑘3 )      ( 1 )      ( −1 )      ( −2 )
( 𝑏 )   (      𝑘1       )      ( 1 )      (  0 )      (  0 )
( 𝑐 ) = ( −2𝑘2 + 𝑘3     ) = 𝑘1 ( 0 ) + 𝑘2 ( −2 ) + 𝑘3 (  1 )
( 𝑑 )   (      𝑘2       )      ( 0 )      (  1 )      (  0 )
( 𝑒 )   (      𝑘3       )      ( 0 )      (  0 )      (  1 )
which gives us the three linearly independent solutions as
       ( 1 )         ( −1 )             ( −2 )
       ( 1 )         (  0 )             (  0 )
𝑋(1) = ( 0 ) , 𝑋(2) = ( −2 ) and 𝑋(3) = (  1 ).
       ( 0 )         (  1 )             (  0 )
       ( 0 )         (  0 )             (  1 )
Hence, the general solution set is
{ 𝑥 = 𝑘1 𝑋(1) + 𝑘2 𝑋(2) + 𝑘3 𝑋(3) : 𝑋(1) = (1, 1, 0, 0, 0)ᵗ, 𝑋(2) = (−1, 0, −2, 1, 0)ᵗ, 𝑋(3) = (−2, 0, 1, 0, 1)ᵗ; 𝑘𝑖 𝜀 ℝ for 𝑖 = 1, 2, 3 }.
Now, let us turn our attention to the general case of a non-homogeneous system of linear
equations. Actually, there is a close geometric connection between the solution sets of the
systems 𝐴𝑥 = 𝑏 and 𝐴𝑥 = 0. The general solution set of 𝐴𝑥 = 𝑏 is an affine translate of the
solution space of 𝐴𝑥 = 0.
Theorem 10 (Structure of the Solution Set of 𝑨𝒙 = 𝒃): The general solution of a non-homogeneous system of linear equations 𝐴𝑥 = 𝑏 is of the form 𝑥 = 𝑢 + 𝑣, where 𝑢 is any particular solution of the non-homogeneous system 𝐴𝑥 = 𝑏 and 𝑣 is the general solution of the homogeneous system 𝐴𝑥 = 0.
We illustrate this theorem via an example. The algorithm followed here is adaptable to very
general situations.
Example 14: Find the general solution set for the non-homogeneous linear system
𝑎 − 𝑏 + 𝑑 + 2𝑒 = −2
−2𝑎 + 2𝑏 − 𝑐 − 4𝑑 − 3𝑒 = 3
𝑎 − 𝑏 + 𝑐 + 3𝑑 + 𝑒 = −1
−𝑎 + 𝑏 + 𝑐 + 𝑑 − 3𝑒 = 3.
Solution: The augmented matrix [𝐴|𝑏] is
[𝐴|𝑏] =
[  1 −1  0  1  2 | −2 ]
[ −2  2 −1 −4 −3 |  3 ].
[  1 −1  1  3  1 | −1 ]
[ −1  1  1  1 −3 |  3 ]
We reduce this to row-echelon form and obtain the equivalence
[𝐴|𝑏] ~
[ 1 −1  0  1  2 | −2 ]
[ 0  0  1  2 −1 |  1 ].
[ 0  0  0  0  0 |  0 ]
[ 0  0  0  0  0 |  0 ]
Thus,
𝑟𝑎𝑛𝑘(𝐴) = 𝑟𝑎𝑛𝑘([𝐴|𝑏]) = 2.
Therefore, the given system is consistent and has infinitely many solutions. This is because the number of unknowns is 5, and 𝑟𝑎𝑛𝑘(𝐴) = 2 < 5. Now, to find the general solution set, we solve the equivalent non-homogeneous system obtained from the echelon form of the augmented matrix [𝐴|𝑏] above. We have the equivalent system as
𝑎 = −2 + 𝑏 − 𝑑 − 2𝑒
𝑐 = 1 − 2𝑑 + 𝑒.
Once again, we observe that the variables 𝑏, 𝑑 and 𝑒 are the free variables and therefore,
can be assigned arbitrary values. So, setting 𝑏 = 𝑘1 , 𝑑 = 𝑘2 and 𝑒 = 𝑘3 , we obtain
( 𝑎 )   ( −2 + 𝑘1 − 𝑘2 − 2𝑘3 )   ( −2 )      ( 1 )      ( −1 )      ( −2 )
( 𝑏 )   (        𝑘1          )   (  0 )      ( 1 )      (  0 )      (  0 )
( 𝑐 ) = ( 1 − 2𝑘2 + 𝑘3       ) = (  1 ) + 𝑘1 ( 0 ) + 𝑘2 ( −2 ) + 𝑘3 (  1 ).
( 𝑑 )   (        𝑘2          )   (  0 )      ( 0 )      (  1 )      (  0 )
( 𝑒 )   (        𝑘3          )   (  0 )      ( 0 )      (  0 )      (  1 )
Hence, the general solution set is
{ 𝑥 = 𝑢 + 𝑣 : 𝑢 = (−2, 0, 1, 0, 0)ᵗ, 𝑣 = 𝑘1 𝑋(1) + 𝑘2 𝑋(2) + 𝑘3 𝑋(3), 𝑋(1) = (1, 1, 0, 0, 0)ᵗ, 𝑋(2) = (−1, 0, −2, 1, 0)ᵗ, 𝑋(3) = (−2, 0, 1, 0, 1)ᵗ; 𝑘𝑖 𝜀 ℝ for 𝑖 = 1, 2, 3 }.
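The 𝑥 = 𝑢 + 𝑣 structure of Theorem 10 can be confirmed by substitution. The sketch below (plain Python, not part of the lesson; `matvec` is our own helper) checks that the particular solution 𝑢 of Example 14 satisfies 𝐴𝑥 = 𝑏, and that adding any homogeneous solution preserves this.

```python
A = [[1, -1, 0, 1, 2],
     [-2, 2, -1, -4, -3],
     [1, -1, 1, 3, 1],
     [-1, 1, 1, 1, -3]]
b = [-2, 3, -1, 3]
u = [-2, 0, 1, 0, 0]  # the particular solution found above
X1, X2, X3 = [1, 1, 0, 0, 0], [-1, 0, -2, 1, 0], [-2, 0, 1, 0, 1]

def matvec(M, v):
    """Matrix-vector product Mv."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

assert matvec(A, u) == b  # u solves Ax = b

# u plus any homogeneous solution still solves Ax = b (Theorem 10):
k1, k2, k3 = 3, -2, 1
x = [p + k1 * q + k2 * r + k3 * s for p, q, r, s in zip(u, X1, X2, X3)]
assert matvec(A, x) == b
```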
9. Direct Sum of Vector Spaces:
From the perspective of co-ordinate geometry, we know how the Cartesian plane ℝ2 is
constructed from its two co-ordinate axes. The method of describing any point in ℝ2 as an
ordered pair of two real numbers yields a general method of making new vector spaces out
of old ones. The method gives us what is known as the direct sum of two vector spaces.
Definition of a Direct Sum of two vector spaces: If 𝑈 and 𝑉 are vector spaces over the
same field ℱ, we say that 𝑊 = 𝑈 ⊕ 𝑉 is the direct sum of 𝑈 and 𝑉, if each element of 𝑊 can
be written uniquely as a sum 𝑢 + 𝑣 where 𝑢 𝜀 𝑈 and 𝑣 𝜀 𝑉.
Definition of a Direct Sum of 𝒎-vector spaces: If 𝑈1 , 𝑈2 , … , 𝑈𝑚 are vector spaces over the
same field ℱ, we say that 𝑉 = 𝑈1 ⊕ 𝑈2 ⊕ … ⊕ 𝑈𝑚 is the direct sum of 𝑈1 , 𝑈2 , … , 𝑈𝑚 if each
element of 𝑉 can be written uniquely as a sum 𝑢1 + 𝑢2 + ⋯ + 𝑢𝑚 , where each 𝑢𝑖 𝜀 𝑈𝑖 ∀ 𝑖 = 1(1)𝑚.
Let us look at some examples now.
9.1 Examples of Direct sum of vector spaces:
(1)
Let 𝑉 = ℝ3 over ℝ. Then, every element of ℝ3 is of the form (𝑥, 𝑦, 𝑧) where 𝑥, 𝑦, 𝑧 𝜀 ℝ
and hence, can be written in one and only one way as
𝑥, 𝑦, 𝑧 = 𝑥, 0, 𝑧 + (0, 𝑦, 0).
Hence, letting
𝑈1 = { 𝑥, 0, 𝑧 : 𝑥, 𝑧 𝜀 ℝ}
and
𝑈2 = { 0, 𝑦, 0 : 𝑦 𝜀 ℝ},
we see that ℝ3 = 𝑈1 ⨁𝑈2 .
(2)
Let 𝑈𝑖 be the subspace of ℝ𝑛 over ℝ given by
𝑈𝑖 = {(0, 0, … , 𝑥, 0, … , 0) : 𝑥 𝜀 ℝ},
where all co-ordinates are zero, except possibly in the 𝑖-th position.
Then, every element (𝑥1 , 𝑥2 , … , 𝑥𝑛 ) of ℝ𝑛 can be written in a unique way as
𝑥1 , 𝑥2 , … , 𝑥𝑛 = 𝑥1 , 0, … ,0 +, … , + 0,0, … , 𝑥𝑖 , . . ,0 +, … , +(0,0, … , 𝑥𝑛 )
= 𝑢1 + 𝑢2 +, … , +𝑢𝑛 ,
where
𝑢𝑖 𝜀 𝑈𝑖 ∀ 𝑖 = 1 1 𝑛.
Hence,
ℝ𝑛 = 𝑈1 ⨁𝑈2 ⨁ … ⨁𝑈𝑛 .
(3)
Let 𝑉 = ℙ𝑚(ℝ) = {𝑎0 + 𝑎1 𝑥 + ⋯ + 𝑎𝑚 𝑥^𝑚 : 𝑎𝑖 𝜀 ℝ ∀ 𝑖 = 0(1)𝑚}.
Let 𝑈𝑂 be the sub-space of ℙ𝑚(ℝ) of polynomials containing only odd powers of 𝑥, that is, let
𝑈𝑂 = {𝑎1 𝑥 + 𝑎3 𝑥³ + 𝑎5 𝑥⁵ + ⋯ : 𝑎𝑖 𝜀 ℝ, 𝑖 odd, 𝑖 ≤ 𝑚}.
Let 𝑈𝐸 be the sub-space of ℙ𝑚(ℝ) of polynomials containing only even powers of 𝑥, that is, let
𝑈𝐸 = {𝑎0 + 𝑎2 𝑥² + 𝑎4 𝑥⁴ + ⋯ : 𝑎𝑖 𝜀 ℝ, 𝑖 even, 𝑖 ≤ 𝑚}.
Then, every element 𝑝(𝑥) of ℙ𝑚(ℝ) can be written in one and only one way as
𝑝(𝑥) = 𝑢𝑂(𝑥) + 𝑢𝐸(𝑥),
where 𝑢𝑂 𝜀 𝑈𝑂 and 𝑢𝐸 𝜀 𝑈𝐸, because
𝑝(𝑥) = (𝑎0 + 𝑎1 𝑥 + ⋯ + 𝑎𝑚 𝑥^𝑚)
= (𝑎1 𝑥 + 𝑎3 𝑥³ + ⋯) + (𝑎0 + 𝑎2 𝑥² + ⋯)
= 𝑢𝑂(𝑥) + 𝑢𝐸(𝑥).
Hence,
ℙ𝑚 ℝ = 𝑈𝑂 ⨁ 𝑈𝐸 .
(4)
We know that every square matrix can be written uniquely as the sum of a symmetric matrix and a skew-symmetric matrix. Therefore, the vector space of all square matrices is a direct sum of the sub-spaces of symmetric and skew-symmetric matrices.
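Example (4) above can be illustrated concretely. The sketch below (plain Python with exact fractions, not part of the lesson; the helper names are ours) splits a square matrix 𝑀 into its symmetric part ½(𝑀 + 𝑀ᵗ) and skew-symmetric part ½(𝑀 − 𝑀ᵗ) and checks both properties as well as the decomposition.

```python
from fractions import Fraction

def transpose(M):
    return [list(row) for row in zip(*M)]

def split(M):
    """Unique decomposition M = S + K with S symmetric, K skew-symmetric."""
    Mt = transpose(M)
    half = Fraction(1, 2)
    S = [[half * (a + b) for a, b in zip(r1, r2)] for r1, r2 in zip(M, Mt)]
    K = [[half * (a - b) for a, b in zip(r1, r2)] for r1, r2 in zip(M, Mt)]
    return S, K

M = [[1, 2], [3, 4]]
S, K = split(M)
assert S == transpose(S)                                 # S is symmetric
assert K == [[-a for a in row] for row in transpose(K)]  # K is skew-symmetric
assert [[s + k for s, k in zip(r1, r2)] for r1, r2 in zip(S, K)] == M
```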
I.Q.21
9.2 A Non-Example:
Let us consider the three subspaces of ℝ3 over ℝ given by
𝑈1 = {(𝑥, 𝑦, 0) ∈ ℝ³ : 𝑥, 𝑦 𝜀 ℝ},
𝑈2 = {(0, 0, 𝑧) ∈ ℝ³ : 𝑧 𝜀 ℝ} and
𝑈3 = {(0, 𝑦, 𝑦) ∈ ℝ³ : 𝑦 𝜀 ℝ}.
We see that we can write 0,0,0 𝜀 ℝ3 in two different ways as
0,0,0 = 0,1,0 + 0,0,1 + (0, −1, −1)
and
0,0,0 = 0,0,0 + 0,0,0 + (0,0,0).
In each case, summands on the RHS respectively lie in 𝑈1 , 𝑈2 , 𝑈3 . Therefore,
ℝ3 ≠ 𝑈1 ⨁𝑈2 ⨁𝑈3 .
So, let us now study the criterion for an ordinary sum of vector-spaces to be a direct sum.
Theorem 11: Suppose that 𝑈1 , … , 𝑈𝑚 are subspaces of 𝑉. Then 𝑉 = 𝑈1 ⨁ … ⨁ 𝑈𝑚 if and only if
both the following conditions hold:
(a)
𝑉 = 𝑈1 + ⋯ + 𝑈𝑚 ; and
(b)
the only way to write 0 as a sum 𝑢1 + ⋯ + 𝑢𝑚 , where each 𝑢𝑗 𝜀 𝑈𝑗 , is by taking
all the 𝑢𝑗 ′𝑠 equal to 0.
Proof: We first suppose that 𝑉 = 𝑈1 ⨁ … ⨁ 𝑈𝑚 . Clearly (a) holds (because that is how sum
and direct sum are defined). To prove (b), suppose that 𝑢1 𝜀 𝑈1 , … , 𝑢𝑚 𝜀 𝑈𝑚 and
0 = 𝑢1 + ⋯ + 𝑢𝑚
Then each 𝑢𝑗 must be 0 (this follows from the uniqueness part of the definition of direct sum
because 0 = 0 + ⋯ + 0 and 0 𝜀 𝑈1 , … ,0 𝜀 𝑈𝑚 ), proving (b).
Now, let us suppose that (a) and (b) hold. Let 𝑣 𝜀 𝑉 be arbitrary. By (a), we can write
𝑣 = 𝑢1 + ⋯ + 𝑢𝑚
for some 𝑢1 𝜀 𝑈1 , … , 𝑢𝑚 𝜀 𝑈𝑚 . To show that this representation is unique, suppose that we also
have
𝑣 = 𝑣1 + ⋯ + 𝑣𝑚 ,
where 𝑣1 𝜀 𝑈1 , … , 𝑣𝑚 𝜀 𝑈𝑚 . Subtracting these two equations, we have
0 = (𝑢1 − 𝑣1 ) + ⋯ + (𝑢𝑚 − 𝑣𝑚 ).
Clearly 𝑢1 − 𝑣1 𝜀 𝑈1 , … , 𝑢𝑚 − 𝑣𝑚 𝜀 𝑈𝑚 , so the equation above and (b) imply that each 𝑢𝑗 − 𝑣𝑗 = 0.
Thus 𝑢1 = 𝑣1 , … . , 𝑢𝑚 = 𝑣𝑚 , as desired.
The proof is now complete.
When 𝑚 = 2, we have the following corollary.
Corollary: Suppose that 𝑈 and 𝑊 are subspaces of 𝑉. Then 𝑉 = 𝑈⨁𝑊 if and only if 𝑉 = 𝑈 +
𝑊 and 𝑈 ⋂ 𝑊 = {0}.
9.3 Direct Sum as a Vector Space and its dimension:
The direct sum
𝑉 = 𝑈1 ⊕ 𝑈2 ⊕ … ⊕ 𝑈𝑚
of the 𝑚 vector spaces 𝑈1, 𝑈2, … , 𝑈𝑚 over the field ℱ is itself a vector space under the usual componentwise addition and scalar multiplication. Its dimension is the sum of the dimensions of the summands; we prove this for 𝑚 = 2.
Theorem 12: The dimension of the direct sum 𝑊 = 𝑈⨁𝑉 is the sum of the dimensions of its summands. That is, 𝑑𝑖𝑚ℱ(𝑊) = 𝑑𝑖𝑚ℱ(𝑈) + 𝑑𝑖𝑚ℱ(𝑉).
Proof: Let
𝑑𝑖𝑚𝑓 𝑈 = 𝑛
and
𝑑𝑖𝑚𝑓 𝑉 = 𝑚.
We shall show that if
𝛽𝑈 = {𝑢1 , … , 𝑢𝑛 }
and
𝛽𝑉 = {𝑣1 , … , 𝑣𝑚 }
are bases of 𝑈 and 𝑉 over ℱ respectively, then the set
𝛽𝑊 = {𝑢1 , … , 𝑢𝑛 , 𝑣1 , … , 𝑣𝑚 }
is a basis for 𝑊.
Clearly, 𝐿(𝛽𝑊) = 𝑊, because 𝑊 = 𝑈⨁𝑉 implies 𝑊 = 𝑈 + 𝑉 (by Theorem 11).
To show that 𝛽𝑊 is linearly independent, we let
𝛼1 𝑢1 + 𝛼2 𝑢2 +, … + 𝛼𝑛 𝑢𝑛 + 𝛾1 𝑣1 + 𝛾2 𝑣2 +, … . , +𝛾𝑚 𝑣𝑚 = 0.
But we know that 0 can also be written as
0 = 0𝑈 + 0𝑉,
where 0𝑈 𝜀 𝑈 and 0𝑉 𝜀 𝑉 are the zero vectors of 𝑈 and 𝑉 (both equal to the zero vector of 𝑊).
Hence, by the uniqueness of the representation of 0 𝜀 𝑊, we get
𝛼1 𝑢1 + ⋯ + 𝛼𝑛 𝑢𝑛 = 0 (because 𝛼1 𝑢1 + ⋯ + 𝛼𝑛 𝑢𝑛 𝜀 𝑈),
and
𝛾1 𝑣1 + ⋯ + 𝛾𝑚 𝑣𝑚 = 0 (because 𝛾1 𝑣1 + ⋯ + 𝛾𝑚 𝑣𝑚 𝜀 𝑉).
Hence, linear independence of 𝑢𝑖 ′𝑠 and 𝑣𝑗 ′𝑠 implies,
𝛼𝑖 = 0 ∀ 𝑖 = 1 1 𝑛
Institute of Lifelong Learning, University of Delhi
pg. 32
Vector Spaces: Basis and Dimensions
and
𝛾𝑗 = 0 ∀ 𝑗 = 1(1)𝑚.
Therefore the set 𝛽𝑊 is linearly independent in 𝑊 over ℱ.
Hence, 𝛽𝑊 is a basis of 𝑊 over ℱ.
Hence, 𝑑𝑖𝑚ℱ(𝑊) = #(𝛽𝑊) = 𝑛 + 𝑚 = 𝑑𝑖𝑚ℱ(𝑈) + 𝑑𝑖𝑚ℱ(𝑉).
The proof is now complete.
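Taken together, the corollary of Theorem 11 and Theorem 12 give a practical numerical test for subspaces of ℝⁿ described by spanning vectors: the sum 𝑈 + 𝑊 is direct exactly when 𝑑𝑖𝑚(𝑈) + 𝑑𝑖𝑚(𝑊) = 𝑑𝑖𝑚(𝑈 + 𝑊), and each dimension is a matrix rank. A sketch (plain Python; the helper names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def sum_is_direct(U, W):
    """U + W is direct iff dim U + dim W = dim(U + W); each dimension is
    the rank of the matrix whose rows are the spanning vectors."""
    return rank(U) + rank(W) == rank(U + W)

xy_plane = [[1, 0, 0], [0, 1, 0]]
z_axis = [[0, 0, 1]]
x_axis = [[1, 0, 0]]
assert sum_is_direct(xy_plane, z_axis)      # R^3 = plane (+) axis, direct
assert not sum_is_direct(xy_plane, x_axis)  # the x-axis lies inside the plane
```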
10. Quotient Vector Spaces:
We shall now demonstrate a new method for creating vector spaces from other vector
spaces. We will see that if 𝑉 is a vector space and 𝑊 is a subspace of 𝑉, then there is
another vector space 𝑉/𝑊 that can be obtained from 𝑉 by viewing all the vectors in 𝑊 as
the zero vector. We begin with the following definition.
Definition of a Coset: Let 𝑉 be a vector space over ℱ. Let 𝑊 be a subspace of 𝑉, and let
𝑣 𝜀 𝑉 be arbitrary. The coset of 𝑊 containing 𝒗, denoted by 𝑣 + 𝑊, is the set of vectors
𝑣 + 𝑊 = {𝑣 + 𝑤|𝑤 𝜀 𝑊} ⊆ 𝑉.
Clearly,
𝑣 = 𝑣 + 0 𝜀 𝑣 + 𝑊,
which is why 𝑣 + 𝑊 is called the coset of 𝑊 'containing 𝑣'. Let us see an example.
10.1 Example of a Coset in ℝ²:
We consider 𝑉 = ℝ² as a vector space over ℝ with the usual vector space operations.
Let 𝑊 = 𝑠𝑝𝑎𝑛{(1,1)}. Thus, 𝑊 is the line 𝑦 = 𝑥 in ℝ2 . If 𝑣 = (1,2), then, 𝑣 + 𝑊 is the line with
direction vector (1,1) through the point (1,2). That is, 𝑣 + 𝑊 contains 𝑣 = (1,2). Similarly every
coset of 𝑊 is a line through a point with the direction vector (1,1). Further, it is easy to see that the equality of sets
(0,1) + 𝑊 = (1,2) + 𝑊 = 𝑣 + 𝑊
holds, since all these lines coincide. This can be made formal using the following result.
Lemma (a): Let 𝑉 be a vector space over ℱ and let 𝑊 be a subspace of 𝑉. Then 𝑣1 + 𝑊 =
𝑣2 + 𝑊 if, and only if, 𝑣1 − 𝑣2 𝜀 𝑊.
Proof: We begin by assuming that
𝑣1 + 𝑊 = 𝑣2 + 𝑊.
Since 𝑊 is a subspace, we have
0 𝜀 𝑊.
So, we have,
𝑣1 = 𝑣1 + 0 𝜀 𝑣1 + 𝑊 = 𝑣2 + 𝑊
Hence, by the definition of 𝑣2 + 𝑊, there exists a 𝑤 𝜀 𝑊 such that
𝑣1 = 𝑣2 + 𝑤.
Hence,
𝑣1 − 𝑣2 = 𝑤 𝜀 𝑊.
For the other direction, suppose 𝑣1 − 𝑣2 𝜀 𝑊.
Then,
𝑣1 − 𝑣2 = 𝑤
for some 𝑤 𝜀 𝑊. In other words, ∃ some 𝑤 𝜀 𝑊 such that
𝑣1 = 𝑣2 + 𝑤.
To see that 𝑣1 + 𝑊 = 𝑣2 + 𝑊, we let 𝑥 𝜀 𝑣1 + 𝑊 be arbitrary. Then, ∃ some 𝑤 ′ 𝜀 𝑊 such that
𝑥 = 𝑣1 + 𝑤′.
Therefore,
𝑥 = 𝑣1 + 𝑤′
= (𝑣2 + 𝑤) + 𝑤′
= 𝑣2 + (𝑤 + 𝑤′) 𝜀 𝑣2 + 𝑊,
as 𝑤 + 𝑤 ′ 𝜀 𝑊 since 𝑤, 𝑤 ′ 𝜀 𝑊 and 𝑊 is a subspace.
Hence,
𝑣1 + 𝑊 ⊆ 𝑣2 + 𝑊.
Similarly, we get
𝑣2 + 𝑊 ⊆ 𝑣1 + 𝑊.
Combining these two inclusions, we obtain
𝑣1 + 𝑊 = 𝑣2 + 𝑊.
This completes the proof.
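Lemma (a) can also be phrased as a membership test: two cosets are equal exactly when the difference of their representatives lies in the subspace, and membership in a span can be decided by comparing ranks. A minimal sketch (plain Python; the helper names `rank`, `in_span` and `same_coset` are ours) checks this on the ℝ² example above.

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(W, v):
    """v lies in span(W) iff appending v does not raise the rank."""
    return rank(W + [v]) == rank(W)

def same_coset(W, v1, v2):
    """Lemma (a): v1 + W = v2 + W iff v1 - v2 is in W."""
    return in_span(W, [a - b for a, b in zip(v1, v2)])

W = [[1, 1]]                            # the line y = x in R^2
assert same_coset(W, [0, 1], [1, 2])    # the two cosets are the same line
assert not same_coset(W, [0, 1], [0, 0])
```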
Our next goal is to give the set of cosets a vector space structure. To do this, we need to
define addition and scalar multiplication. However, we need to be a little careful that our
operations are well-defined.
Lemma (b): Let 𝑉 be a vector space over ℱ and let 𝑊 be a subspace of 𝑉. Suppose
𝑥 + 𝑊 = 𝑥′ + 𝑊 and 𝑦 + 𝑊 = 𝑦′ + 𝑊. Then,
𝑥 + 𝑦 + 𝑊 = 𝑥 ′ + 𝑦′ + 𝑊
and
𝛼 𝑥 + 𝑊 = 𝛼𝑥 + 𝑊 ∀ 𝛼 𝜀 ℱ.
Proof: Suppose
𝑥 + 𝑊 = 𝑥′ + 𝑊
and
𝑦 + 𝑊 = 𝑦′ + 𝑊.
By the above lemma (a), there exist 𝑤1 , 𝑤2 𝜀 𝑊 such that
𝑥 − 𝑥 ′ = 𝑤1
and
𝑦 − 𝑦 ′ = 𝑤2 .
Therefore,
𝑥 + 𝑦 − 𝑥 ′ + 𝑦 ′ = 𝑤1 + 𝑤2 .
Since 𝑊 is a subspace and 𝑤1 , 𝑤2 𝜀 𝑊, we get
𝑥 + 𝑦 − 𝑥 ′ + 𝑦 ′ 𝜀 𝑊.
Hence, the lemma (a) implies that
𝑥 + 𝑦 + 𝑊 = 𝑥 ′ + 𝑦 ′ + 𝑊.
Moreover, if 𝛼 𝜀 ℱ, then
𝛼𝑥 𝜀 𝑉,
and
𝛼𝑊 = 𝛼𝑤: 𝑤 𝜀 𝑊 ⊆ 𝑊.
Hence, we get,
𝛼(𝑥 + 𝑊) = {𝛼(𝑥 + 𝑤) : 𝑤 𝜀 𝑊}
= {𝛼𝑥 + 𝛼𝑤 : 𝑤 𝜀 𝑊} = 𝛼𝑥 + 𝑊.
The proof is now complete.
We are now in a position to make the quotient of two vector spaces.
Definition of Quotient Vector Space:
Let 𝑉 be a vector space over ℱ and let 𝑊 be a
subspace of 𝑉. We define addition and scalar multiplication on the set of all cosets of 𝑊 by
(𝑥 + 𝑊) + (𝑦 + 𝑊) = (𝑥 + 𝑦) + 𝑊
and
𝛼 𝑥 + 𝑊 = 𝛼𝑥 + 𝑊
for all 𝑥, 𝑦 ∈ 𝑉 and 𝛼 𝜀 ℱ. The set of all cosets of 𝑊 with the given operations is called the
quotient space of 𝑉 modulo 𝑊 and is denoted as 𝑉/𝑊.
The reason we had to prove the lemma first is so that these operations are well-defined.
With the above definition in hand, we can now verify that 𝑉/𝑊 is a vector space over ℱ. For
that we need to check all the eight properties of being a vector space directly.
Theorem 13: Let 𝑉 be a vector space over ℱ and let 𝑊 be a subspace of 𝑉. Then 𝑉/𝑊 is a
vector space over ℱ with the coset operations.
Proof: To show that 𝑉/𝑊 is a vector space over ℱ with the above operations, we will show
that 𝑉/𝑊 satisfies the definition of a vector space. The operations are well-defined by the
above lemma (b). So, it suffices to check the eight vector space properties. (The reason the
eight properties will be satisfied is due to the structure of addition and scalar multiplication
on 𝑉/𝑊 and since 𝑉 is a vector space and thus satisfies the eight properties.)
1.
Let 𝑣1 + 𝑊 𝜀 𝑉/𝑊 and 𝑣2 + 𝑊 𝜀 𝑉/𝑊 be arbitrary. Then
(𝑣1 + 𝑊) + (𝑣2 + 𝑊) = (𝑣1 + 𝑣2) + 𝑊, (by definition)
= (𝑣2 + 𝑣1) + 𝑊, (since 𝑉 is a vector space)
= (𝑣2 + 𝑊) + (𝑣1 + 𝑊), (by definition).
Therefore, since 𝑣1 + 𝑊, 𝑣2 + 𝑊 𝜀 𝑉/𝑊 were arbitrary, addition is commutative in 𝑉/𝑊.
2.
Let 𝑣1 + 𝑊, 𝑣2 + 𝑊 and 𝑣3 + 𝑊 𝜀 𝑉/𝑊 be arbitrary. Then
((𝑣1 + 𝑊) + (𝑣2 + 𝑊)) + (𝑣3 + 𝑊) = ((𝑣1 + 𝑣2) + 𝑊) + (𝑣3 + 𝑊), (by definition)
= ((𝑣1 + 𝑣2) + 𝑣3) + 𝑊, (by definition)
= (𝑣1 + (𝑣2 + 𝑣3)) + 𝑊, (since 𝑉 is a vector space)
= (𝑣1 + 𝑊) + ((𝑣2 + 𝑣3) + 𝑊), (by definition)
= (𝑣1 + 𝑊) + ((𝑣2 + 𝑊) + (𝑣3 + 𝑊)), (by definition).
Therefore, since 𝑣1 + 𝑊, 𝑣2 + 𝑊 and 𝑣3 + 𝑊 𝜀 𝑉/𝑊 were arbitrary, addition is associative in 𝑉/𝑊.
3.
Let 0𝑉/𝑊 = 0𝑉 + 𝑊 𝜀 𝑉/𝑊, where 0𝑉 is the zero vector in 𝑉. Since 𝑉 is a vector space, 𝑣 + 0𝑉 = 𝑣 for all vectors 𝑣 𝜀 𝑉. Thus, for all 𝑣 + 𝑊 𝜀 𝑉/𝑊, we have
(𝑣 + 𝑊) + 0𝑉/𝑊 = (𝑣 + 0𝑉) + 𝑊 = 𝑣 + 𝑊.
Therefore, 0𝑉/𝑊 is indeed a zero vector for 𝑉/𝑊.
4.
Let 𝑣 + 𝑊 𝜀 𝑉/𝑊 be arbitrary. Since 𝑉 is a vector space, 𝑣 has the additive inverse −𝑣 in 𝑉. Then −𝑣 + 𝑊 𝜀 𝑉/𝑊 and hence,
(𝑣 + 𝑊) + (−𝑣 + 𝑊) = (𝑣 + (−𝑣)) + 𝑊 = 0𝑉 + 𝑊 = 0𝑉/𝑊,
as desired. Hence, −𝑣 + 𝑊 is an additive inverse of 𝑣 + 𝑊. Since 𝑣 + 𝑊 𝜀 𝑉/𝑊 was arbitrary, every element of 𝑉/𝑊 has an additive inverse.
5.
Let 𝑣 + 𝑊 𝜀 𝑉/𝑊 be arbitrary. Then,
1.(𝑣 + 𝑊) = 1𝑣 + 𝑊, (by definition)
= 𝑣 + 𝑊, (since 𝑉 is a vector space).
Therefore, since 𝑣 + 𝑊 𝜀 𝑉/𝑊 was arbitrary, 𝑉/𝑊 satisfies the fifth property.
6.
Let 𝑣 + 𝑊 𝜀 𝑉/𝑊 and 𝛼, 𝛽 𝜀 ℱ be arbitrary. Then
𝛼.(𝛽.(𝑣 + 𝑊)) = 𝛼.(𝛽𝑣 + 𝑊), (by definition)
= (𝛼(𝛽𝑣)) + 𝑊, (by definition)
= (𝛼𝛽)𝑣 + 𝑊, (since 𝑉 is a vector space)
= (𝛼𝛽)(𝑣 + 𝑊), (by definition).
Since, 𝑣 + 𝑊 𝜀 𝑉/𝑊 and 𝛼, 𝛽 𝜀 ℱ were arbitrary, 𝑉/𝑊 has the sixth property.
7.
Let 𝛼 𝜀 ℱ, 𝑣1 + 𝑊 𝜀 𝑉/𝑊, and 𝑣2 + 𝑊 𝜀 𝑉/𝑊 be arbitrary. Then,
𝛼.((𝑣1 + 𝑊) + (𝑣2 + 𝑊)) = 𝛼.((𝑣1 + 𝑣2) + 𝑊), (by definition)
= 𝛼(𝑣1 + 𝑣2) + 𝑊, (by definition)
= (𝛼𝑣1 + 𝛼𝑣2) + 𝑊, (since 𝑉 is a vector space)
= (𝛼𝑣1 + 𝑊) + (𝛼𝑣2 + 𝑊), (by definition)
= 𝛼.(𝑣1 + 𝑊) + 𝛼.(𝑣2 + 𝑊), (by definition).
Since, 𝛼 𝜀 ℱ, 𝑣1 + 𝑊 𝜀 𝑉/𝑊, and 𝑣2 + 𝑊 𝜀 𝑉/𝑊 were arbitrary, 𝑉/𝑊 has the seventh
property.
8. Let 𝑣 𝜀 𝑉 and 𝛼, 𝛽 𝜀 ℱ be arbitrary. Then
(𝛼 + 𝛽).(𝑣 + 𝑊) = ((𝛼 + 𝛽)𝑣) + 𝑊, (by definition)
= (𝛼𝑣 + 𝛽𝑣) + 𝑊, (since 𝑉 is a vector space)
= (𝛼𝑣 + 𝑊) + (𝛽𝑣 + 𝑊), (by definition)
= 𝛼.(𝑣 + 𝑊) + 𝛽.(𝑣 + 𝑊), (by definition).
Since 𝑣 𝜀 𝑉 and 𝛼, 𝛽 𝜀 ℱ were arbitrary, 𝑉/𝑊 has the eighth
property.
Therefore, by the definition of a vector space, 𝑉/𝑊 is a vector space over ℱ with the
operations given. The proof is now complete.
Now that we know 𝑉/𝑊 is a vector space, we shall investigate the case where 𝑉 is a finite
dimensional vector space.
Theorem 14: Let 𝑉 be a finite dimensional vector space over ℱ and let 𝑊 be a subspace of
𝑉. Then, 𝑉/𝑊 is a finite dimensional vector space with
𝑑𝑖𝑚ℱ(𝑉/𝑊) = 𝑑𝑖𝑚ℱ(𝑉) − 𝑑𝑖𝑚ℱ(𝑊).
Proof: Since 𝑉 is a finite dimensional vector space, 𝑊 is also a finite dimensional vector
space. Let
𝛽 = {𝑤1 , … , 𝑤𝑘 }
be a basis for 𝑊.
Now, we can extend 𝛽 to a basis
𝛾 = {𝑤1 , … , 𝑤𝑘 , 𝑣1 , … , 𝑣𝑛 }
of 𝑉. Therefore,
𝑑𝑖𝑚ℱ (𝑉) = 𝑛 + 𝑘 and 𝑑𝑖𝑚ℱ (𝑊) = 𝑘.
We shall show that 𝐾 = {𝑣1 + 𝑊, 𝑣2 + 𝑊, … . , 𝑣𝑛 + 𝑊} is a basis for 𝑉/𝑊.
To prove this, we must show that 𝐾 is linearly independent and
𝑠𝑝𝑎𝑛 𝐾 = 𝑉/𝑊.
To see that 𝐾 is linearly independent, we take scalars 𝑎1 , … 𝑎𝑛 𝜀 ℱ such that
𝑎1.(𝑣1 + 𝑊) + ⋯ + 𝑎𝑛.(𝑣𝑛 + 𝑊) = 0𝑉/𝑊.
Thus,
(𝑎1 𝑣1 + ⋯ + 𝑎𝑛 𝑣𝑛) + 𝑊 = 0𝑉 + 𝑊.
Therefore, we have, by lemma (a),
𝑎1 𝑣1 + … . +𝑎𝑛 𝑣𝑛 𝜀 𝑊.
Since 𝛽 is a basis for 𝑊,
𝑠𝑝𝑎𝑛 𝛽 = 𝑊
so that there exist scalars 𝑏1 … , 𝑏𝑘 𝜀 ℱ such that
𝑎1 𝑣1 + ⋯ + 𝑎𝑛 𝑣𝑛 = 𝑏1 𝑤1 + ⋯ + 𝑏𝑘 𝑤𝑘 .
Thus,
𝑎1 𝑣1 + ⋯ + 𝑎𝑛 𝑣𝑛 + (−𝑏1)𝑤1 + ⋯ + (−𝑏𝑘)𝑤𝑘 = 0𝑉.
Since 𝛾 is a basis for 𝑉, the above equation implies
𝑎𝑗 = 0 ∀ 𝑗 = 1 1 𝑛.
Hence, 𝐾 is linearly independent.
Next, to show that 𝑠𝑝𝑎𝑛 𝐾 = 𝑉/𝑊, we let 𝑣 + 𝑊 𝜀 𝑉/𝑊 be arbitrary. Since 𝛾 is basis for 𝑉,
𝑠𝑝𝑎𝑛 𝛾 = 𝑉, so that there exist scalars, 𝑎1 , … . , 𝑎𝑛 , 𝑏1 , … . 𝑏𝑘 𝜀 ℱ such that
𝑣 = 𝑎1 𝑣1 + ⋯ + 𝑎𝑛 𝑣𝑛 + 𝑏1 𝑤1 + ⋯ + 𝑏𝑘 𝑤𝑘 .
However,
𝑏1 𝑤1 + ⋯ + 𝑏𝑘 𝑤𝑘 𝜀 𝑊 as 𝑊 is a subspace.
Therefore, lemma (b) implies
𝑣 + 𝑊 = 𝑎1 𝑣1 + ⋯ + 𝑎𝑛 𝑣𝑛 + 𝑏1 𝑤1 + ⋯ + 𝑏𝑘 𝑤𝑘 + 𝑊 = 𝑎1 𝑣1 + ⋯ + 𝑎𝑛 𝑣𝑛 + 𝑊.
Hence,
𝑣 + 𝑊 = 𝑎1(𝑣1 + 𝑊) + ⋯ + 𝑎𝑛(𝑣𝑛 + 𝑊) 𝜀 𝑠𝑝𝑎𝑛(𝐾)
as desired. Hence, 𝐾 is a basis for 𝑉/𝑊.
Therefore,
𝑑𝑖𝑚ℱ(𝑉/𝑊) = 𝑛 = (𝑛 + 𝑘) − 𝑘 = 𝑑𝑖𝑚ℱ(𝑉) − 𝑑𝑖𝑚ℱ(𝑊)
as desired. The proof is now complete.
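For a subspace of ℝⁿ given by spanning vectors, Theorem 14 can be checked numerically: 𝑑𝑖𝑚(𝑉/𝑊) = 𝑑𝑖𝑚(𝑉) − 𝑑𝑖𝑚(𝑊), with 𝑑𝑖𝑚(𝑊) computed as a matrix rank. A minimal sketch (plain Python; the helper name `rank` is ours, and the subspace 𝑊 below is just an illustrative choice):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

n = 4                               # V = R^4, so dim V = 4
W = [[1, 0, 1, 0], [0, 1, 0, 1]]    # spanning set of a subspace W
dim_W = rank(W)
dim_quotient = n - dim_W            # Theorem 14: dim(V/W) = dim V - dim W
assert (dim_W, dim_quotient) == (2, 2)
```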
Exercises
(1)
Let 𝑉 be the vector-space of all 2 × 2 matrices over ℝ. Show that 𝑊 is not a sub-space of 𝑉, where 𝑊 consists of all matrices with zero determinant.
(2)
Let 𝑉 be the vector space of all functions 𝑓: ℝ → ℝ under point-wise addition & scalar multiplication over ℝ. Show that 𝑊 = {𝑓 : 𝑓(3) = 0} is a vector sub-space of 𝑉 over ℝ.
(3)
Show that 𝑢1 = (1, 2, 5), 𝑢2 = (1, 3, 7) and 𝑢3 = (1, −1, −1) do not span ℝ³ over ℝ.
(4) Let 𝑊 be the sub-space of ℝ⁴ generated by the vectors
𝑢1 = (1, −2, 5, −3), 𝑢2 = (2, 3, 1, −4) and 𝑢3 = (3, 8, −3, −5).
(a) Find a basis for 𝑊.
(b) Find the dimension of 𝑊.
(c) Extend the basis of 𝑊 to a basis of the entire space ℝ⁴.
(5) Let 𝑈 = 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, 𝑢3) and 𝑊 = 𝑠𝑝𝑎𝑛(𝑣1, 𝑣2) be sub-spaces of ℝ⁴ over ℝ where
𝑢1 = (1, 2, −1, 3), 𝑢2 = (2, 4, 1, −2), 𝑢3 = (3, 6, 3, −7),
𝑣1 = (1, 2, −4, 11), and 𝑣2 = (2, 4, −5, 14). Show that 𝑈 = 𝑊.
(6)
Determine if the matrices 𝐴 and 𝐵 have the same column space, where
𝐴 =
[ 1 3 5 ]
[ 1 4 3 ]
[ 1 1 9 ]
and 𝐵 =
[  1  2  3 ]
[ −2 −3 −4 ].
[  7 12 17 ]
(7)
Show that the following system of linear equations is consistent.
𝑥 − 2𝑦 + 3𝑧 = 2
2𝑥 − 3𝑦 + 8𝑧 = 7
3𝑥 − 4𝑦 + 13𝑧 = 8.
(8) Consider the following sub-spaces of ℝ5 :
𝑈 = 𝑠𝑝𝑎𝑛{𝑢1 = 1,3, −2,2,3 , 𝑢2 = 1,4, −3,4,2 , 𝑢3 = 2,3, −1, −2,9 }; and
𝑊 = 𝑠𝑝𝑎𝑛{𝑣1 = 1,3,0,2,1 , 𝑣2 = 1,5, −6,6,3 , 𝑣3 = 2,5,3,2,1 }.
(a)
Find a basis and the dimension of 𝑈 + 𝑊.
(b)
Find a homogeneous system where solution space is 𝑈.
(c)
Find a homogeneous system whose solution space is 𝑊
(d)
Find a basis and the dimension of 𝑈 ∩ 𝑊.
Summary:
(a)
A vector space 𝑉 over a field ℱ is an abelian group under addition and is endowed
with a scalar multiplication from ℱ × 𝑉. The scalar multiplication is compatible
with the group structure of 𝑉.
(b)
A subset 𝑈 ⊆ 𝑉 is a vector subspace of 𝑉 over ℱ if, and only if, 𝛼𝑢 + 𝑣 𝜀 𝑈 ∀ 𝛼 𝜀 ℱ
and 𝑢, 𝑣 𝜀 𝑈.
(c)
The set of all linear combinations of the vectors 𝑣1 , … , 𝑣𝑚 is called the span of the
set 𝛽 = {𝑣1 , … , 𝑣𝑚 } and is denoted by 𝑠𝑝𝑎𝑛 {𝑣1 , … , 𝑣𝑚 } or 𝑠𝑝𝑎𝑛(𝛽) or simply 𝐿(𝛽). The
set 𝑠𝑝𝑎𝑛{𝑣1 , … , 𝑣𝑚 } is a vector subspace of 𝑉 over ℱ for any set 𝛽 = 𝑣1 , … , 𝑣𝑚 ⊆ 𝑉.
(d)
A set {𝑣1 , … , 𝑣𝑚 } of vectors in vector space 𝑉 is said to be linearly independent if
the only scalars 𝛼𝑖 𝜀 ℱ ∀ 𝑖 = 1 1 𝑚 satisfying the relation
𝛼1 𝑣1 + ⋯ + 𝛼𝑚 𝑣𝑚 = 0
are
𝛼1 = 𝛼2 = . . . = 𝛼𝑚 = 0.
(e)
A set 𝐵 = {𝑒1 , … , 𝑒𝑚 } in a vector space 𝑉 is called a basis of 𝑉 if every vector 𝑣 in 𝑉
can be written uniquely as
𝑣 = 𝛼1 𝑒1 + ⋯ + 𝛼𝑚 𝑒𝑚 ,
where 𝛼𝑖 𝜀 ℱ ∀ 𝑖 = 1 1 𝑚.
(f)
A vector space 𝑉 over a field ℱ is called finite dimensional if ∃ a finite subset 𝑆 of
𝑉 such that 𝐿(𝑆) = 𝑉.
(g)
Every finite dimensional vector space has a finite basis.
(h)
Any two bases of a finite dimensional vector space have same number of
elements. The common number is called the dimension of the vector space.
(i)
The rank 𝑝 of an 𝑚 × 𝑛 matrix 𝐴 is the size of the largest 𝑝 × 𝑝 sub-matrix with non-zero determinant. Equivalently, the number of non-zero rows in the echelon form of a matrix is called the rank of the matrix. The number of zero rows in the echelon form of a matrix is called the nullity of the matrix.
(j)
A linear system 𝐴𝑥 = 𝑏 of equations in 𝑛 unknowns has a unique solution if the
coefficient matrix and the augmented matrix have the same rank 𝑛, and infinitely
many solutions if that common rank is less than n. The system has no solution if
those two matrices have different ranks.
(k)
If 𝑈 and 𝑉 are vector spaces over the same field ℱ, we say that 𝑊 = 𝑈 ⊕ 𝑉 is the
direct sum of 𝑈 and 𝑉, if each element of 𝑊 can be written uniquely as a sum
𝑢 + 𝑣 where 𝑢 ∈ 𝑈 and 𝑣 ∈ 𝑉.
(l)
Let 𝑉 be a vector space over ℱ and let 𝑊 be a subspace of 𝑉. Then 𝑉/𝑊 is a
vector space over ℱ with the coset operations.
Glossary
Span, linear independence, basis, dimension, finite dimensional vector space, rank, row-space, column-space, nullity, direct sum, quotient space
References
1. Linear Algebra: An Introductory Approach by Charles W. Curtis
Publication Year: 1984
Publisher: Springer-Verlag New York, Inc.
ISBN: 978-1-4612-7019-5
2.
Linear Algebra and Its Applications by David C. Lay (4/e)
Publication Year: 2011
Publisher: Pearson Higher Education
ISBN: 0321385179
3.
Linear Algebra (Third Edition) by Serge Lang
Publication Year: 1987
Publisher: Springer-Verlag New York, Inc.
ISBN: 0-387-96412-6
4.
Matrix Theory and Linear Algebra by I. N. Herstein and D. J. Winter
Publication Year: 1988
Publisher: Macmillan Pub. Co.
ISBN: 978-0023539510
Solutions/Hints for Exercises
1 0
0 0
 𝜀 𝑊 & =
 𝜀 𝑊. But 𝐴 + 𝐵 = 𝐼 ∉ 𝑊.
0 0
0 1
(1) We have 𝐴 = 
(2) We have, ∀ 𝛼, 𝛽 𝜀 ℝ and ∀ 𝑓, 𝑔 𝜀 𝑊, (𝛼𝑓 + 𝛽𝑔)(3) = 𝛼𝑓(3) + 𝛽𝑔(3) = 0, so that
∀ 𝛼, 𝛽 𝜀 ℝ, ∀ 𝑓, 𝑔 𝜀 𝑊, 𝛼𝑓 + 𝛽𝑔 𝜀 𝑊.
(3) We simply observe that
𝐷𝑒𝑡[𝑢1, 𝑢2, 𝑢3] =
| 1  2  5 |
| 1  3  7 | = 0.
| 1 −1 −1 |
(4) First, we form a matrix 𝐴 of the row-vectors 𝑢1, 𝑢2, 𝑢3:
𝐴 =
[ 1 −2  5 −3 ]
[ 2  3  1 −4 ].
[ 3  8 −3 −5 ]
Now, we observe that by the definition of row-space of a matrix,
𝑟𝑜𝑤 − 𝑠𝑝𝑎𝑐𝑒 𝑜𝑓 𝐴 = 𝑠𝑝𝑎𝑛 𝑢1 , 𝑢2 , 𝑢3 ,
so that we have
𝑟𝑜𝑤 − 𝑠𝑝𝑎𝑐𝑒 𝑜𝑓 𝐴 = 𝑠𝑝𝑎𝑛 𝑢1 , 𝑢2 , 𝑢3 = 𝑊.
Since the row-space of 𝐴 is spanned by the non-zero rows of the echelon form of 𝐴, we row-reduce 𝐴 to get the desired echelon form.
 1 2 5 3   1 2 5 3 

 

1 4  ~  0 7 9 2  .
𝐴= 2 3

 3 8 3 5   0 0 0 0 


 
(a)
A basis of 𝑊 is {(1, −2, 5, −3), (0, 7, −9, 2)}.
(b)
𝑑𝑖𝑚ℝ 𝑊 = 2 = No. of non-zero rows of 𝐴 in the echelon form of 𝐴.
(c)
To obtain a basis for ℝ4 , we find a linearly independent set of
four vectors in ℝ4 which contain the basis of 𝑊.
To do so, we append the vectors 𝑢4 = (0,0,1,0) and 𝑢5 = (0,0,0,1) to the basis of
𝑊 to
get a matrix
𝐵 =
[ 1 −2  5 −3 ]
[ 0  7 −9  2 ]
[ 0  0  1  0 ].
[ 0  0  0  1 ]
Since 𝐵 is already in echelon form, the rows of 𝐵 form a basis for ℝ⁴.
(5) Let
𝐴 =
[ 1 2 −1  3 ]
[ 2 4  1 −2 ]
[ 3 6  3 −7 ]
and 𝐵 =
[ 1 2 −4 11 ]
[ 2 4 −5 14 ].
Then,
row-space of 𝐴 = 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, 𝑢3) = 𝑈
and
row-space of 𝐵 = 𝑠𝑝𝑎𝑛(𝑣1, 𝑣2) = 𝑊.
Row-reducing 𝐴 and 𝐵 to their respective reduced echelon forms, we get
𝐴 ~
[ 1 2 0  1/3 ]
[ 0 0 1 −8/3 ]
[ 0 0 0   0  ]
and 𝐵 ~
[ 1 2 0  1/3 ]
[ 0 0 1 −8/3 ].
Since the non-zero rows of 𝐴 and 𝐵 are identical, 𝑈 and 𝑊 have the same basis. Therefore, 𝑈 = 𝑊.
(6) 𝐴 and 𝐵 have the same column space if and only if the transposes 𝐴ᵗ and 𝐵ᵗ have the same row-space. Hence, it is enough to reduce 𝐴ᵗ and 𝐵ᵗ to row-echelon forms. Doing so, we get
𝐴ᵗ =
[ 1 1 1 ]   [ 1 0  3 ]
[ 3 4 1 ] ~ [ 0 1 −2 ]
[ 5 3 9 ]   [ 0 0  0 ]
and
𝐵ᵗ =
[ 1 −2  7 ]   [ 1 0  3 ]
[ 2 −3 12 ] ~ [ 0 1 −2 ].
[ 3 −4 17 ]   [ 0 0  0 ]
Now, we can see that 𝐴ᵗ and 𝐵ᵗ have the same row space, and therefore, 𝐴 and 𝐵 have the same column space.
(7) Reduce the augmented matrix of the system to echelon form and verify that the coefficient matrix and the augmented matrix have the same rank; the consistency of the system then follows from Theorem 8.
(8) (a) We consider the matrix whose rows are the vectors 𝑢1, 𝑢2, 𝑢3, 𝑣1, 𝑣2, 𝑣3 and row-reduce the same. That is, we row-reduce
[ 1 3 −2  2 3 ]
[ 1 4 −3  4 2 ]
[ 2 3 −1 −2 9 ]
[ 1 3  0  2 1 ]
[ 1 5 −6  6 3 ]
[ 2 5  3  2 1 ]
to get
[ 1 3 −2 2  3 ]
[ 0 1 −1 2 −1 ]
[ 0 0  2 0 −2 ].
[ 0 0  0 0  0 ]
[ 0 0  0 0  0 ]
[ 0 0  0 0  0 ]
Now, the set of non-zero rows in the echelon form spans 𝑈 + 𝑊 and is linearly independent; it is therefore a basis for 𝑈 + 𝑊. Hence, we have
𝑑𝑖𝑚ℝ(𝑈 + 𝑊) = 3.
(b) Now, we consider the matrix whose rows are 𝑢1, 𝑢2, 𝑢3 and 𝑢4 = (𝑎, 𝑏, 𝑐, 𝑑, 𝑒), and row-reduce the same. That is, we row-reduce
𝐵 =
[ 1 3 −2  2 3 ]
[ 1 4 −3  4 2 ]
[ 2 3 −1 −2 9 ]
[ 𝑎 𝑏  𝑐  𝑑 𝑒 ]
to echelon form to get
𝐵 ~
[ 1 3 −2 2  3 ]
[ 0 1 −1 2 −1 ]
[ 0 0 −𝑎+𝑏+𝑐  4𝑎−2𝑏+𝑑  −6𝑎+𝑏+𝑒 ].
[ 0 0  0 0  0 ]
The vector (𝑎, 𝑏, 𝑐, 𝑑, 𝑒) lies in 𝑈 exactly when the third row vanishes. The desired homogeneous system of linear equations is
−𝑎 + 𝑏 + 𝑐 = 0,
4𝑎 − 2𝑏 + 𝑑 = 0, and
−6𝑎 + 𝑏 + 𝑒 = 0.
(c)
Similarly we obtain,
−9𝑎 + 3𝑏 + 𝑐 = 0,
4𝑎 − 2𝑏 + 𝑑 = 0, and
2𝑎 − 𝑏 + 𝑒 = 0.
(d) The dimension of the intersection sub-space 𝑈 ∩ 𝑊 is the dimension of the solution space of the system obtained by combining the above two homogeneous systems. So, combining the two homogeneous systems of three linear equations each, we get
𝑀𝑋 = 0,
where
𝑀 =
[ −1  1 1 0 0 ]
[  4 −2 0 1 0 ]
[ −6  1 0 0 1 ]      and 𝑋 = (𝑎, 𝑏, 𝑐, 𝑑, 𝑒)ᵗ.
[ −9  3 1 0 0 ]
[  4 −2 0 1 0 ]
[  2 −1 0 0 1 ]
Row-reducing to echelon form, we obtain 𝑀′𝑋 = 0, where
𝑀′ =
[ 1 −1 −1 0  0 ]
[ 0  2  4 1  0 ]
[ 0  0  8 5  2 ].
[ 0  0  0 1 −2 ]
[ 0  0  0 0  0 ]
[ 0  0  0 0  0 ]
Therefore,
𝑅𝑎𝑛𝑘(𝑀) = 4, so the solution space of 𝑀𝑋 = 0 has dimension 5 − 4 = 1 (by Theorem 9).
Therefore, there is only one free variable, and that variable is 𝑒.
Hence,
𝑑𝑖𝑚ℝ 𝑈 ∩ 𝑊 = 1.
Setting 𝑒 = 2, we obtain,
𝑎 = 1, 𝑏 = 4, 𝑐 = −3, 𝑑 = 4, 𝑒 = 2.
Hence, { 1,4, −3,4,2 } is a basis for 𝑈 ∩ 𝑊 over ℝ.