TENSOR ALGEBRA
INNA ZAKHAREVICH
In these notes we will be working over a field F with char F ≠ 2.
The goal of these notes is to introduce tensor products and skew-symmetric tensor products of
vector spaces, in order to introduce determinants as an invariant of a linear transformation.
Hoffman & Kunze has a discussion of determinants from this perspective as well, in Chapter
5, Sections 6 and 7; however, they discuss it from the dual perspective, in terms of multilinear
functionals.
Let V be a vector space over F. Recall that V* = L(V, F), and that we discussed that for
finite-dimensional V there exists a (non-canonical!) isomorphism V → V*. However, for
infinite-dimensional V this is not necessarily the case. We do have a linear transformation
V → V**, however, defined by sending the vector v to the map f_v : V* → F given by
f_v(α) = α(v). Equivalently,[1] there is a map
    ev : V × V* → F

given by (v, α) ↦ α(v). Note that this map is not linear: if it were, then

    α(v) + β(w) = ev(v, α) + ev(w, β) = ev((v, α) + (w, β)) = ev(v + w, α + β) = (α + β)(v + w).
However, note that directly from the definition we can see that ev is linear in each variable
separately. Thus

    (α + β)(v + w) = (α + β)(v) + (α + β)(w) = α(v) + β(v) + α(w) + β(w),

which differs from α(v) + β(w) by the cross terms α(w) + β(v), so the equation above cannot
hold in general.
Thus we need to either develop a separate theory of bilinear maps, or to construct a representing
object: a vector space W such that

    L(W, F) ≅ {bilinear maps V × V* → F}.
This is where we diverge from Hoffman & Kunze. They proceed to develop the theory of bilinear
maps, and we will construct the representing object. In addition, we solve a slightly more general
problem which will in fact be easier to work with: for any two vector spaces V and W, we will
construct a vector space V ⊗ W which will have the property that

    L(V ⊗ W, Z) ≅ {bilinear maps V × W → Z}
for any vector space Z.
We will do the construction in two ways. First, we will do a completely general construction,
which will work for both vector spaces over fields and modules over rings. The advantage of this
construction is that it will involve no choices, and thus showing that various nice properties hold
will be easy with this construction. After this we will do a much more computational construction
using bases; this will not be a natural construction, but it will make some later computations easier.
Let A be the free vector space on pairs v ⊗ w, with v ∈ V and w ∈ W; by this we mean that as
a set, A is the set of formal linear combinations of vectors v ⊗ w, for v ∈ V and w ∈ W. A is very
large, and if F is infinite it will always be infinite-dimensional. From this definition we have that
    L(A, Z) = {set maps V × W → Z}.
[1] You are asked to show that this equivalence holds on the next homework.
Now obviously the set of set maps V × W → Z is much larger than the set of linear maps, so we need
to make A smaller. We do this by enforcing the bilinearity: for any f ∈ L(V ⊗ W, Z) we want to
have f((v + v′) ⊗ w) = f(v ⊗ w) + f(v′ ⊗ w); we can enforce this by setting (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w
in V ⊗ W. We define

    A0 = span{ (v + v′) ⊗ w − v ⊗ w − v′ ⊗ w,
               v ⊗ (w + w′) − v ⊗ w − v ⊗ w′,
               (av) ⊗ w − a(v ⊗ w),
               v ⊗ (aw) − a(v ⊗ w)
             | a ∈ F, v, v′ ∈ V, w, w′ ∈ W }.
We then define V ⊗ W = A/A0; since A and A0 are abelian groups this definition is well-defined as
an abelian group. All that needs to be checked is that it inherits a scalar multiplication from A.[2]

[2] Note that this definition works equally well if V and W are R-modules; this may be helpful on the homework.
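For instance, the relations defining A0 already force 0 ⊗ w = 0 in V ⊗ W for every w ∈ W (a quick
check, added for illustration): the first relation with v = v′ = 0 gives

    0 ⊗ w = (0 + 0) ⊗ w = 0 ⊗ w + 0 ⊗ w,

and subtracting 0 ⊗ w from both sides gives 0 ⊗ w = 0.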
To get a bilinear map from a linear map V ⊗ W → Z it suffices to check that
(1) the map V × W → V ⊗ W given by (v, w) ↦ v ⊗ w is bilinear, and
(2) the composition of a bilinear map and a linear map is bilinear.
Thus, to check that V ⊗ W induces the desired bijection, it remains to check that a bilinear
map V × W → Z gives us a linear map V ⊗ W → Z.
Given a bilinear map f : V × W → Z we can define a linear map g : A → Z by setting
g(v ⊗ w) = f(v, w) for all v ∈ V and w ∈ W. Since the vectors v ⊗ w form a basis of A, g is a
well-defined linear map. In order to check that we get a linear map V ⊗ W → Z from g we just
need to check that g(A0) = {0}; then g descends to a well-defined linear map on A/A0. In
particular, we need to check that g is 0 on each of the four types of generators of A0; that this
holds follows directly from the bilinearity of f.
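In particular, this applies to the motivating example: the bilinear map ev : V × V* → F descends
to a linear map V ⊗ V* → F sending v ⊗ α to α(v).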
This construction of V ⊗ W is very clean and formal, but it does not give us a good model for how
to work with V ⊗ W explicitly. For example, if dim V = m and dim W = n, what is dim V ⊗ W?
To answer this question, we give a second construction of V ⊗ W, this time using a basis.
Let {v_1, . . . , v_m} be a basis for V and {w_1, . . . , w_n} be a basis for W. Then we claim that
{v_i ⊗ w_j | 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for V ⊗ W. Indeed, note that any v ⊗ w ∈ V ⊗ W can
be written in terms of these: if v = ∑_{i=1}^m a_i v_i and w = ∑_{j=1}^n b_j w_j then

    v ⊗ w = (∑_{i=1}^m a_i v_i) ⊗ w = ∑_{i=1}^m (a_i v_i) ⊗ w = ∑_{i=1}^m a_i (v_i ⊗ w)
          = ∑_{i=1}^m a_i (v_i ⊗ ∑_{j=1}^n b_j w_j) = ∑_{i=1}^m a_i ∑_{j=1}^n v_i ⊗ (b_j w_j)
          = ∑_{i=1}^m a_i ∑_{j=1}^n b_j (v_i ⊗ w_j) = ∑_{i=1}^m ∑_{j=1}^n a_i b_j (v_i ⊗ w_j).
Thus

    V ⊗ W = span(v ⊗ w | v ∈ V, w ∈ W) = span(v_i ⊗ w_j | 1 ≤ i ≤ m, 1 ≤ j ≤ n).
We claim that this is actually a basis for V ⊗ W, and thus that we can define V ⊗ W to be the vector
space with this basis. Checking that these are linearly independent directly from the definition is
somewhat difficult, so we do something slightly indirect. Let Z be a vector space on a basis z_ij,
with 1 ≤ i ≤ m and 1 ≤ j ≤ n. We clearly have a linear transformation Z → V ⊗ W given by
z_ij ↦ v_i ⊗ w_j. We will show that this is an isomorphism by constructing an inverse. To do this,
note that the above calculation of v ⊗ w in terms of the v_i ⊗ w_j defines a linear map A → Z, so
it suffices to check that this is 0 on A0; this follows directly from the definition of A0. Note that
since v_i ⊗ w_j ∈ A, the map V ⊗ W → Z is surjective, and checking that it is in fact the inverse to
the map Z → V ⊗ W is straightforward.[3] Thus we see that we could alternately define V ⊗ W as
the vector space with basis v_i ⊗ w_j. From this it follows that dim V ⊗ W = mn.
Important: the vector space V ⊗ W is spanned by vectors of the form v ⊗ w, but not every vector
in V ⊗ W can be written in this form. For example, if dim V, dim W ≥ 2 and {v_1, v_2} and {w_1, w_2}
are linearly independent in V and W, respectively, then

    v_1 ⊗ w_1 + v_2 ⊗ w_2

is not a pure tensor.
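One way to see this (a sketch added here, using the basis description above): extend {v_1, v_2} and
{w_1, w_2} to bases of V and W, and identify V ⊗ W with the space of m × n matrices by sending
∑_{i,j} c_ij (v_i ⊗ w_j) to the matrix (c_ij). A pure tensor v ⊗ w with v = ∑ a_i v_i and
w = ∑ b_j w_j corresponds to the matrix (a_i b_j), which has rank at most 1, while
v_1 ⊗ w_1 + v_2 ⊗ w_2 corresponds to a matrix of rank 2.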
Example 0.1. If W = F then V ⊗ W ≅ V.
Example 0.2. Let V = W = F, and let µ : F × F → F be the usual multiplication in F. This is
a bilinear map by the distributivity property, so it gives us a linear map F ⊗ F → F.
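In fact (a small addition to the example) this map is an isomorphism: the linear map F → F ⊗ F
sending a to a ⊗ 1 is inverse to it, since a ⊗ b = ab(1 ⊗ 1) maps to ab, and ab maps back to
ab ⊗ 1 = a ⊗ b. This is the special case V = F of Example 0.1.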
Example 0.3. Let V, W, Z be vector spaces. Composition of linear maps is bilinear (check this!)
and thus composition gives us a linear map L(W, Z) ⊗ L(V, W) → L(V, Z).
Note that if T : V → W and S : V′ → W′ are linear transformations, then we get a linear
transformation T ⊗ S : V ⊗ V′ → W ⊗ W′ by defining

    (T ⊗ S)(v ⊗ v′) = (Tv) ⊗ (Sv′).

To check that this is well-defined and linear it suffices to check that the map V × V′ → W ⊗ W′
given by (v, v′) ↦ (Tv) ⊗ (Sv′) is bilinear.[4] If all of these vector spaces are finite-dimensional,
then the matrix of the linear transformation T ⊗ S will be the (dim W)(dim W′) × (dim V)(dim V′)
matrix whose entries are the pairwise products of entries in the matrix of T with entries in the
matrix of S.
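Concretely, if the basis vectors v_i ⊗ v′_j of V ⊗ V′ are ordered lexicographically, this matrix is the
Kronecker product of the matrix of T with the matrix of S. Here is a minimal NumPy sketch of this
(an illustration added to these notes; the matrices are arbitrary examples):

    import numpy as np

    # Matrix of T : V -> W in chosen bases (dim V = 2, dim W = 2).
    T = np.array([[1., 2.],
                  [3., 4.]])

    # Matrix of S : V' -> W' in chosen bases (dim V' = 3, dim W' = 2).
    S = np.array([[0., 1., 0.],
                  [1., 0., 2.]])

    # Matrix of T ⊗ S: each entry T[i, k] scales a copy of S, so its
    # entries are exactly the pairwise products of entries of T and S.
    TS = np.kron(T, S)

    # Shape check: (dim W)(dim W') rows by (dim V)(dim V') columns.
    assert TS.shape == (2 * 2, 2 * 3)

One can also check that np.kron(T @ A, S @ B) equals np.kron(T, S) @ np.kron(A, B) for compatible
matrices (the mixed-product property), matching footnote [4]'s remark that ⊗ works nicely with
composition.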
There is one more important definition we need before we can introduce the determinant. We
will want to be able to look at the skew-symmetric tensor product: we want to look at the tensor
product with one extra relation imposed: that v ⊗ v′ = −v′ ⊗ v. Note that this only makes sense
if we're looking at the tensor product of V with itself. In this case, we define

    A1 = span(A0, v ⊗ v′ + v′ ⊗ v | v, v′ ∈ V),

and write

    V ∧ V = A/A1.
We write pure tensors in V ∧ V as v ∧ v′ to emphasize that we have the extra relation. This is
called the “wedge product.”
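Note that this is where the hypothesis char F ≠ 2 earns its keep (a remark added for clarity):
taking v′ = v in the new relation gives v ⊗ v + v ⊗ v ∈ A1, so 2(v ∧ v) = 0, and hence v ∧ v = 0
in V ∧ V. In characteristic 2 this last step fails, and the relation v ∧ v = 0 would have to be
imposed separately.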
Example 0.4. You've seen the symbol ∧ before, and it is not an accident. Let U be an open
subset of R^n, and let V be the vector space of continuous functions U → R. Then the module of
1-forms on U is the module

    {f_1 dx_1 + · · · + f_n dx_n | f_i ∈ V},

and the module of k-forms is the wedge product of this module with itself k times.
If we want a more explicit construction of V ∧ V we can write it in terms of the basis {v_1, . . . , v_n}
of V; it will be spanned by the vectors v_i ∧ v_j with i < j.[5] (The proof is entirely analogous to the
proof of the basis for V ⊗ W, above.) We write Λ^k V for the wedge product of V with itself k times.
Note that if dim V = n then dim Λ^k V = (n choose k). In particular, dim Λ^(dim V) V = 1.
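For example (a small illustration added here): if dim V = 3 with basis {v_1, v_2, v_3}, then Λ^2 V
has basis {v_1 ∧ v_2, v_1 ∧ v_3, v_2 ∧ v_3}, so dim Λ^2 V = 3 = (3 choose 2), and Λ^3 V is
one-dimensional, spanned by v_1 ∧ v_2 ∧ v_3.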
[3] We have written this out assuming our bases are finite, but in fact this assumption is not necessary; since
any vector only uses a finite subset of the basis, all of our calculations would hold even if V and W were
infinite-dimensional.
[4] On the next homework, you'll also check that this works nicely with composition.
[5] Just like before, this will also work for an infinite basis.
Definition 0.5 (Determinant). Let T : V → V be any linear transformation. Then T induces a
linear transformation

    Λ^(dim V) T : Λ^(dim V) V → Λ^(dim V) V.
Since the dimension of each of the vector spaces on the right is 1, this is just multiplication by a
scalar. We define

    det T = Λ^(dim V) T.
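To see that this recovers the familiar formula (a worked example added for illustration), let
dim V = 2 with basis {v_1, v_2}, and suppose T has matrix (a b; c d), i.e. Tv_1 = av_1 + cv_2
and Tv_2 = bv_1 + dv_2. Since Λ^2 T sends v_1 ∧ v_2 to (Tv_1) ∧ (Tv_2),

    (Λ^2 T)(v_1 ∧ v_2) = (av_1 + cv_2) ∧ (bv_1 + dv_2)
                       = ad (v_1 ∧ v_2) + cb (v_2 ∧ v_1)
                       = (ad − bc)(v_1 ∧ v_2),

using v_1 ∧ v_1 = v_2 ∧ v_2 = 0 and v_2 ∧ v_1 = −v_1 ∧ v_2. Thus det T = ad − bc, as expected.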