Transformasi Linear dan Isomorfisma pada Aljabar Max-Plus
(Linear Transformation and Isomorphism in Max-plus Algebra)
As in conventional linear algebra we can define the linear dependence and
independence of vectors in the max-plus sense. The following can be found in [1],
[2], [3] and [4].
Recall that the max-plus algebra is an idempotent semi-ring. In order to define linear
dependence, independence and bases we need the definition of a semi-module. A
semi-module is essentially a linear space over a semi-ring. Semi-modules and
sub-semi-modules are analogous to modules and submodules over rings [5].
Definition 1. A set V ⊆ ℝ_max^n is a commutative idempotent semi-module over ℝ_max
if it is closed under ⊕ and scalar multiplication; that is, u ⊕ v ∈ V and α ⊗ u ∈ V for all
α ∈ ℝ_max and u, v ∈ V.
Definition 2. A finitely generated semi-module V ⊆ ℝ_max^n is the set of all linear
combinations of a finite set {u_1, u_2, …, u_r} of vectors in ℝ_max^n:
V = {⊕_{i=1}^{r} α_i ⊗ u_i | α_1, α_2, …, α_r ∈ ℝ_max}.
Definition 3. An element x can be written as a finite linear combination of elements
of F ⊆ V if x = ⊕_{f∈F} λ_f ⊗ f for some λ_f ∈ ℝ_max such that λ_f = ε for all but
finitely many f ∈ F.
Linear independence and dependence in the max-plus sense are not completely
analogous to the conventional definition. There are different interpretations of
linear independence and dependence. We will consider the definitions of linear
dependence and linear independence due to Gondran and Minoux.
Definition 4. A set of p vectors {v_1, v_2, …, v_p} ⊆ ℝ_max^n is linearly dependent if
the set {1, 2, …, p} can be partitioned into disjoint subsets I and K such that for
j ∈ I ∪ K there exist α_j ∈ ℝ_max, not all equal to ε, with
⊕_{i∈I} α_i ⊗ v_i = ⊕_{k∈K} α_k ⊗ v_k.
Definition 5. A set of p vectors {v_1, v_2, …, v_p} ⊆ ℝ_max^n is linearly independent
if for all disjoint subsets I and K of the set {1, 2, …, p} and all α_j ∈ ℝ_max we have
⊕_{i∈I} α_i ⊗ v_i ≠ ⊕_{k∈K} α_k ⊗ v_k
unless α_j = ε for all j ∈ I ∪ K.
In other words, linear independence simply means not being linearly dependent.
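As a concrete illustration of Definition 4, a pair in which one vector is a max-plus multiple of the other is linearly dependent: take I = {1}, K = {2}, α_1 = λ and α_2 = 0 (the unit of ⊗). A minimal Python check, with helper names chosen for this sketch:

```python
EPS = float("-inf")  # epsilon, the max-plus zero

def scalar_mul(alpha, v):
    """alpha ⊗ v: add alpha to every component."""
    return [alpha + x for x in v]

# v2 = 3 ⊗ v1, so {v1, v2} is Gondran-Minoux dependent:
# with I = {1}, K = {2}, alpha_1 = 3, alpha_2 = 0 the two sides agree.
v1 = [0, 1]
v2 = scalar_mul(3, v1)      # [3, 4]
left = scalar_mul(3, v1)    # the combination over I
right = scalar_mul(0, v2)   # the combination over K
print(left == right)        # True
```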
Definition 6. A subset F of a semi-module M over ℝ_max spans M (or is a spanning
family of M) if every element x ∈ M can be written as a finite linear combination
of the elements of F.
Definition 7. A family of vectors {u_i}_{i=1}^{p} is a basis of a semi-module V if it is a minimal
spanning family.
A. Linear Transformation
Definition 1.
If T: V → W is a function from a semi-module V to a semi-module W, then T is
called a linear transformation from V to W if the following two properties hold
for all vectors u and v in V and for all scalars k:
(a) T(k ⊗ u) = k ⊗ T(u)  [Homogeneity Property]
(b) T(u ⊕ v) = T(u) ⊕ T(v)  [Additivity Property]
In the special case where V = W, the linear transformation T is called a linear
operator on the semi-module V.
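Max-plus matrix multiplication gives a standard example of such a transformation: v ↦ A ⊗ v, where (A ⊗ v)_i = max_j (a_ij + v_j). The sketch below (illustrative helper names, not a library API) checks both properties on sample data.

```python
EPS = float("-inf")  # epsilon, the max-plus zero

def matvec(A, v):
    """Max-plus matrix-vector product: (A ⊗ v)_i = max_j (A[i][j] + v[j])."""
    return [max(a + x for a, x in zip(row, v)) for row in A]

def vec_oplus(u, v):
    """Componentwise ⊕ (max) of two vectors."""
    return [max(a, b) for a, b in zip(u, v)]

def scalar_mul(k, v):
    """k ⊗ v: add k to every component."""
    return [k + x for x in v]

A = [[0, 2], [1, EPS]]
u, v, k = [1, 0], [EPS, 3], 4

# Additivity: T(u ⊕ v) = T(u) ⊕ T(v)
print(matvec(A, vec_oplus(u, v)) == vec_oplus(matvec(A, u), matvec(A, v)))  # True
# Homogeneity: T(k ⊗ u) = k ⊗ T(u)
print(matvec(A, scalar_mul(k, u)) == scalar_mul(k, matvec(A, u)))  # True
```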
Theorem 1.
If T: V → W is a linear transformation, then T(ε) = ε.
Proof. Let u be any vector in V. Since ε ⊗ u = ε, it follows from the homogeneity
property in Definition 1 that
T(ε) = T(ε ⊗ u) = ε ⊗ T(u) = ε.
Theorem 2.
Let T: V → W be a linear transformation, where V is finite-dimensional. If
S = {v_1, v_2, …, v_n} is a basis for V, then the image of any vector v in V can be
expressed as
T(v) = (c_1 ⊗ T(v_1)) ⊕ (c_2 ⊗ T(v_2)) ⊕ … ⊕ (c_n ⊗ T(v_n)),
where c_1, c_2, …, c_n are the coefficients required to express v as a linear
combination of the vectors in S.
Proof. Express v = (c_1 ⊗ v_1) ⊕ (c_2 ⊗ v_2) ⊕ … ⊕ (c_n ⊗ v_n) and use the linearity of
T.
Definition 2.
If T: V → W is a linear transformation, then the set of vectors in V that T maps
into ε is called the kernel of T and is denoted by ker(T). The set of all vectors in W
that are images under T of at least one vector in V is called the range of T and is
denoted by R(T).
Theorem 3.
If T: V → W is a linear transformation, then:
(a) The kernel of T is a sub-semi-module of V.
(b) The range of T is a sub-semi-module of W.
Proof (a). We must show that ker(T) contains at least one vector and is closed
under addition and scalar multiplication. By Theorem 1 the vector ε is in ker(T),
so the kernel contains at least one vector. Let v_1 and v_2 be vectors in ker(T), and
let k be any scalar. Then
T(v_1 ⊕ v_2) = T(v_1) ⊕ T(v_2) = ε ⊕ ε = ε,
so v_1 ⊕ v_2 is in ker(T). Also,
T(k ⊗ v_1) = k ⊗ T(v_1) = k ⊗ ε = ε,
so k ⊗ v_1 is in ker(T).
(b). We must show that R(T) contains at least one vector and is closed under
addition and scalar multiplication. By Theorem 1 the vector ε is in R(T), so the
range contains at least one vector. Let w_1 and w_2 be vectors in R(T), and let k be
any scalar. We must find vectors a and b in V for which
T(a) = w_1 ⊕ w_2 and T(b) = k ⊗ w_1.
The fact that w_1 and w_2 are in R(T) tells us that there exist vectors v_1 and v_2
in V such that
T(v_1) = w_1 and T(v_2) = w_2.
The following computations complete the proof by showing that the vectors
a = v_1 ⊕ v_2 and b = k ⊗ v_1 satisfy the required equations:
T(a) = T(v_1 ⊕ v_2) = T(v_1) ⊕ T(v_2) = w_1 ⊕ w_2,
T(b) = T(k ⊗ v_1) = k ⊗ T(v_1) = k ⊗ w_1.
Theorem 4.
If T: V → W is a linear transformation, then ker(T) = {ε}.
Proof. By the idempotent property of ⊕, ℝ_max has no inverse elements with
respect to ⊕. This guarantees that there is no vector v ≠ ε in V such that v ∈ ker(T).
Definition 3.
Let T: V → W be a linear transformation. If the range of T is finite-dimensional,
then its dimension is called the rank of T; and if the kernel of T is finite-dimensional,
then its dimension is called the nullity of T. The rank of T is denoted
by rank(T) and the nullity of T by nullity(T).
Theorem 5.
If T: V → W is a linear transformation from an n-dimensional semi-module V to a
semi-module W, then
rank(T) ⊕ nullity(T) = n.
Proof.
B. Isomorphism
Definition 1.
If 𝑇: 𝑉 β†’ π‘Š is a linear transformation from an n-dimensional semi-module 𝑉 to a
semi-module π‘Š, then 𝑇 is said to be one-to-one if 𝑇 maps distinct vectors in 𝑉 into
distinct vectors in π‘Š.
Definition 2.
If T: V → W is a linear transformation from an n-dimensional semi-module V to a
semi-module W, then T is said to be onto (or onto W) if every vector in W is the
image of at least one vector in V.
Theorem 1.
If 𝑇: 𝑉 β†’ 𝑉 is a linear transformation and 𝑇 is one-to-one, then ker(𝑇) = {𝜺}.
Proof. Since T is linear, we know that T(ε) = ε by Theorem 1. Since T is one-to-one, there can be no other vectors in V that map into ε, so ker(T) = {ε}.
Theorem 2.
If 𝑉 is a finite-dimensional semi-module, and if 𝑇: 𝑉 β†’ 𝑉 is a linear operator, then
the following statements are equivalent.
(a) 𝑇 is one-to-one
(b) ker(𝑇) = {𝜺}.
(c) 𝑇 is onto.
Definition 3.
If a linear transformation T: V → W is both one-to-one and onto, then T is said
to be an isomorphism, and the semi-modules V and W are said to be isomorphic.
Theorem 3.
Every real n-dimensional semi-module is isomorphic to ℝ_max^n.
C. Matrices for General Linear Transformation
Recall matrices for general linear transformations in the conventional sense. Matrices
for general transformations in the max-plus case are defined as in the conventional case.
Definition 4. Let T: V → W be a linear transformation. Suppose further that
B = {u_1, u_2, …, u_n} is a basis for V and that B′ is a basis for W. The matrix for T
relative to the bases B and B′ is denoted by
A = [T]_{B′,B} = [[T(u_1)]_{B′} | [T(u_2)]_{B′} | … | [T(u_n)]_{B′}].
So, for every vector v in V,
[T(v)]_{B′} = A ⊗ [v]_B.
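In ℝ_max^n with the standard basis (the multiplicative unit 0 in one slot and ε elsewhere), the columns of A are simply the images of the basis vectors. A small Python sketch under those assumptions, with an illustrative operator chosen for this example:

```python
EPS = float("-inf")  # epsilon, the max-plus zero

def matvec(A, v):
    """Max-plus matrix-vector product: (A ⊗ v)_i = max_j (A[i][j] + v[j])."""
    return [max(a + x for a, x in zip(row, v)) for row in A]

# Standard basis of R_max^2: e_j has the unit 0 in slot j and EPS elsewhere.
e1, e2 = [0, EPS], [EPS, 0]

# An illustrative linear operator given only by its action on the basis.
T_e1, T_e2 = [3, 1], [EPS, 2]

# The matrix of T has [T(e_j)] as its j-th column.
A = [[T_e1[i], T_e2[i]] for i in range(2)]

# For v = (v1 ⊗ e1) ⊕ (v2 ⊗ e2), A ⊗ v reproduces T(v):
v = [1, 0]
print(matvec(A, v))  # [4, 2]
```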
Theorem. Let T: V → V be a linear transformation and let A be the matrix of T.
Then the following are equivalent.
(a) T is surjective
(b) A is invertible
(c) T is injective.
Linear Transformation and Isomorphism in Max-plus Algebra
ABSTRACT
Max-plus algebra is one of the concepts in algebraic structures. The max-plus algebra
ℝ_max is the set ℝ ∪ {−∞} with max and + as the two binary operations ⊕ and
⊗, respectively, which forms a commutative idempotent semi-field. In this paper
we systematically revisit classical algebraic structures used in conventional algebra
and substitute the commutative idempotent semi-field ℝ_max for the field of
scalars. The purpose is to provide the mathematical tools needed to study linear
transformations and isomorphisms in commutative idempotent semi-modules over
ℝ_max. Many researchers have worked on ℝ_max. Some of them discuss the
semi-module ℝ_max^n, vectors, matrices and linear systems of equations. We will
use their results to define general semi-modules over ℝ_max and, further, to identify the
properties of linear transformations and isomorphisms in ℝ_max.