DUAL VECTOR SPACES AND SCALAR PRODUCTS
JÜRGEN TOLKSDORF
NOVEMBER 28, 2012
1. Dual vector spaces
Let V and W be real vector spaces of dimension 1 ≤ n < ∞ and 1 ≤ m < ∞, respectively. The real vector space of all linear maps from V to W is denoted by HomR(V, W). It is isomorphic to Rm×n when bases are chosen in V and W.
Definition 1.1. The real vector space
V∗ := HomR(V, R) ≡ {α : V −→ R | α linear} (1)
of all linear maps from V to R (the latter considered as a real one-dimensional vector space) is called the dual vector space of V.
Lemma 1.1. The dimension of V ∗ equals the dimension of V .
Proof. Let ~v1, . . . , ~vn ∈ V be a basis. One may define n linear maps νi ∈ V∗ by
νi(~vj) := δij (1 ≤ i, j ≤ n) . (2)
Remember that any linear map is fully determined by its action on an (arbitrary) basis. In fact, for ~v = Σ_{1≤k≤n} λk ~vk one gets νi(~v) = λi ∈ R (i = 1, . . . , n).
We prove that ν1, . . . , νn ∈ V∗ are linearly independent. Assume that the vector α := Σ_{1≤k≤n} µk νk ∈ V∗ is the zero map, i.e. α(~v) = 0 ∈ R holds for all ~v ∈ V. Letting ~v = ~vk for k = 1, . . . , n yields α(~vk) = µk = 0, whence µ1 = µ2 = · · · = µn = 0. Hence, n ≤ dim(V∗).
To demonstrate that span(ν1, . . . , νn) = V∗, we remember that the dimension of the vector space HomR(V, W) of all linear maps between real vector spaces V and W (each of finite dimension) equals dim(V)dim(W). In fact, choosing a basis in V and a basis in W allows one to identify V with Rdim(V) and W with Rdim(W). Hence, HomR(V, W) can be identified with the real vector space Rdim(W)×dim(V) of dimension dim(V)dim(W). In the special case where W = R one therefore obtains that dim(V∗) = dim(V), as was to be proven.
Notation: The elements of V∗ are referred to as linear functionals, or one-forms. They are also called “dual vectors” or, especially in physics, “co-vectors”. Accordingly, the basis ν1, . . . , νn ∈ V∗ defined by (2) is called the dual basis (co-basis) of the basis ~v1, . . . , ~vn ∈ V.
Notice that the dual basis of some chosen basis ~v1 , . . . , ~vn in V is uniquely defined
by ~v1 , . . . , ~vn ∈ V . Furthermore, in general it only makes sense to speak about the dual
of a basis. In contrast, with a single vector ~v ∈ V one cannot, in general, associate a
(uniquely determined) dual vector α ∈ V ∗ .
Example: We may consider V = Rn with its standard basis ~e1, . . . , ~en ∈ Rn. Its dual basis is simply defined by the mappings
e∗k : Rn −→ R , (x1, x2, . . . , xn) 7→ xk (k = 1, . . . , n) . (3)
When the real vector space Rn of n−tuples is identified with the real vector space Rn×1 of matrices of size n × 1, the dual space (Rn)∗ is identified with the real vector space R1×n of matrices of size 1 × n. That is, one has the isomorphism
σnt : (Rn)∗ −→ R1×n , α = Σ_{1≤k≤n} µk e∗k 7→ µ ≡ (µ1, . . . , µn) , (4)
such that for all ~y ≡ (y1, . . . , yn) = Σ_{1≤k≤n} yk ~ek ∈ Rn and
y ≡ σn(~y) := (y1, . . . , yn)t ∈ Rn×1 (5)
one obtains for the action of α ∈ (Rn)∗ on Rn that
α(~y) = µy = Σ_{1≤k≤n} µk yk ∈ R . (6)
In particular, e∗i(~ej) = e∗i ej = δij for all 1 ≤ i, j ≤ n.
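In matrix terms, the action (6) is nothing but row-times-column multiplication. This may be checked with the following minimal Python/NumPy sketch (the names mu, y, e_star are ad hoc):

    import numpy as np

    n = 3
    mu = np.array([[2.0, 0.0, -1.0]])    # a covector alpha as a row matrix in R^{1xn}
    y = np.array([[4.0], [1.0], [3.0]])  # a vector as a column matrix in R^{nx1}

    # alpha(y) = mu y = sum_k mu_k y_k, cf. (6):
    assert np.isclose((mu @ y).item(), 2.0 * 4.0 + 0.0 * 1.0 - 1.0 * 3.0)

    # The rows of the unit matrix represent the dual standard basis e*_1, ..., e*_n;
    # e*_i picks out the i-th coordinate, cf. (3):
    e_star = np.eye(n)
    assert np.allclose(e_star @ y, y)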
As a warning against confusing R1×n with Rn we discuss the case n = 2 in some detail. For this consider ~b1 := ~e1 + ~e2 = (1, 1) ∈ R2 and ~b2 := ~e1 − ~e2 = (1, −1) ∈ R2 as a different basis in R2. It follows that ~x ≡ (x1, x2) = x1~e1 + x2~e2 = y1~b1 + y2~b2 = (y1 + y2, y1 − y2) ∈ R2. The coordinate vectors x, y ∈ R2×1 read:
x = (x1, x2)t , y = (y1, y2)t . (7)
Here, the coordinate vector x ∈ R2×1 represents ~x ∈ R2 with respect to the standard basis ~e1, ~e2 ∈ R2. In contrast, the coordinate vector y ∈ R2×1 represents the same ~x ∈ R2 with respect to the non-standard basis ~b1, ~b2 ∈ R2.
What does the dual basis b∗1, b∗2 ∈ (R2)∗ of the basis ~b1, ~b2 ∈ R2 look like? For this one has to solve the linear system of equations that corresponds to the definition of a dual basis, b∗i(~bj) := δij for 1 ≤ i, j ≤ 2:
b∗1(1, 1) = 1 , b∗1(1, −1) = 0 , b∗2(1, 1) = 0 , b∗2(1, −1) = 1 . (8)
Clearly, we may write b∗i = Σ_{1≤j≤2} βij e∗j ∈ (R2)∗ for i = 1, 2.
Notice the order of the indices and compare this with the corresponding expansion
of the non-standard basis with respect to the standard basis of R2 .
One obtains the system of linear equations
β11 + β12 = 1 , β11 − β12 = 0 , β21 + β22 = 0 , β21 − β22 = 1 , (9)
whose unique solution is given by β11 = β12 = β21 = −β22 = 1/2. Hence,
b∗1 = (e∗1 + e∗2)/2 , b∗2 = (e∗1 − e∗2)/2 . (10)
The corresponding matrix representations of α = µ1 b∗1 + µ2 b∗2 = λ1 e∗1 + λ2 e∗2 ∈ (R2)∗ read: µ = (µ1, µ2) ∈ R1×2 and λ = (λ1, λ2) ∈ R1×2. That is to say, both dual bases are represented by the same matrices:
e∗1 ≡ σ2t(e∗1) = σ2t(b∗1) ≡ b∗1 = (1, 0) ∈ R1×2 ,
e∗2 ≡ σ2t(e∗2) = σ2t(b∗2) ≡ b∗2 = (0, 1) ∈ R1×2 . (11)
It follows that
α(~x) = λx = µy ∈ R . (12)
Indeed, one has
µy = µ1 y1 + µ2 y2
= (1/2)(λ1 + λ2)(x1 + x2) + (1/2)(λ1 − λ2)(x1 − x2)
= λ1 x1 + λ2 x2
= λx . (13)
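The above computation is easily verified numerically. The following Python/NumPy sketch (with ad hoc names; P holds the coordinates of ~b1, ~b2 as columns) computes the coefficients βij of (8)–(10) as a matrix inverse and confirms the basis independence (12):

    import numpy as np

    # Coordinates of the basis b_1 = (1, 1), b_2 = (1, -1) as the columns of P.
    P = np.array([[1.0, 1.0],
                  [1.0, -1.0]])

    # The coefficients beta_ij of b*_i = sum_j beta_ij e*_j solve B P = 1,
    # i.e. B = P^{-1}; compare with (10).
    B = np.linalg.inv(P)
    print(B)                       # [[ 0.5  0.5] [ 0.5 -0.5]]

    # A covector alpha with coefficients lam w.r.t. e*_1, e*_2, and a vector x:
    lam = np.array([[3.0, -1.0]])  # row matrix in R^{1x2}
    x = np.array([[2.0], [5.0]])   # column matrix in R^{2x1}

    # Coefficients mu of alpha w.r.t. b*_1, b*_2, coordinates y of x w.r.t. b_1, b_2:
    mu = lam @ P                   # cf. the discussion of (16)
    y = B @ x

    # The pairing alpha(x) is basis independent, cf. (12):
    assert np.isclose((lam @ x).item(), (mu @ y).item())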
Notice that ~ek ∈ Rn , as opposed to e∗k ∈ (Rn )∗ . It is the matrix representation
e∗k ∈ R1×n of e∗k ∈ (Rn )∗ that looks formally identical to ~ek ∈ Rn . This is because real
n−tuples and real matrices of size 1×n look alike. As the example demonstrates, however, the matrix representation of any basis in (Rn )∗ formally looks like the standard
basis of Rn . Therefore, one has to clearly distinguish between ~x = (x1 , . . . , xn ) ∈ Rn
and x∗ = (x1 , . . . , xn ) ∈ R1×n . In the latter case, x∗ is supposed to represent some
element in (Rn )∗ with respect to some chosen basis. In contrast, ~x = (x1 , . . . , xn ) ∈ Rn
does not refer to any such choice of basis.
Furthermore, a linear form (co-vector) α ∈ (Rn )∗ can never be expressed in terms
of an n−tuple, though its action on Rn can be represented by matrix multiplication:
α(~v ) = αv. The matrix α ∈ R1×n represents the linear form α ∈ (Rn )∗ and the matrix
v ∈ Rn×1 the vector ~v ∈ Rn with respect to a chosen basis ~b1 , . . . , ~bn ∈ Rn and its
corresponding dual basis b∗1 , . . . , b∗n ∈ (Rn )∗ .
The value λx ∈ R is independent of the choice of the basis used to express α(~x)
in terms of matrix multiplication. This is most clearly expressed, for instance, by the
equality (12). To see why this holds true, let ~v1 , . . . , ~vn ∈ V be any
basis with its dual basis denoted by v1∗ , . . . , vn∗ ∈ V ∗ . Also, let ~u1 , . . . , ~un ∈ V be
another basis with corresponding dual basis u∗1 , . . . , u∗n ∈ V ∗ . Let f ∈ AutR (V ) be
an isomorphism that is defined by f (~vk ) := ~uk for all k = 1, . . . , n ≡ dim(V ) ≥ 1.
Correspondingly, let g ∗ ∈ AutR (V ∗ ) be an isomorphism on the real vector space V ∗
that is defined by g ∗ (vk∗ ) := u∗k for all k = 1, . . . , n = dim(V ∗ ).
We claim that g∗ is fully determined by f−1. Indeed, from the definition of a dual basis one infers that
u∗i(~uj) = g∗(vi∗)(f(~vj)) = δij = vi∗(~vj) , for all i, j = 1, . . . , n . (14)
We may rewrite this as follows. Let ~uj = Σ_{1≤i≤n} aij ~vi ∈ V and u∗i = Σ_{1≤j≤n} bij vj∗ ∈ V∗, with uniquely determined aij, bij ∈ R for all 1 ≤ i, j ≤ n. It follows that for all 1 ≤ i, j ≤ n:
u∗i(~uj) = Σ_{1≤k,l≤n} bik alj vk∗(~vl)
= Σ_{1≤k,l≤n} bik alj δkl
= Σ_{1≤k≤n} bik akj (15)
= δij .
Therefore, the matrix B := (bij)1≤i,j≤n ∈ Rn×n, representing the linear map g∗, must be the inverse of the matrix A := (aij)1≤i,j≤n ∈ Rn×n that represents the linear map f, i.e. B = A−1. Hence, the isomorphism g∗ is fully determined by f−1. To demonstrate this more explicitly, we mention that every linear map g : V −→ W between real vector spaces V and W uniquely determines a corresponding linear map g∗ : W∗ −→ V∗ via g∗(α)(~v) := α(g(~v)), for all α ∈ W∗ and ~v ∈ V. Using this one therefore obtains that g∗ = (f−1)∗.
For instance, with respect to our example above this simply means that
µy = (λA)(A−1 x) = λ (A A−1) x = λx . (16)
Hence, the real number λx ∈ R is indeed independent of the choice of basis.
Notice that B ∈ Rn×n acts from the right on µ ∈ R1×n, according to the rules of matrix multiplication:
λ = µB ∈ R1×n . (17)
This explains the already mentioned converse order of summation indices when compared to the matrix multiplication x = Ay ∈ Rn×1. One says that the change of a dual basis is contra-gredient to the change of a basis.
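As a numerical cross-check, one may draw a random basis change A, compute B = A−1 and confirm both the relation (15), BA = 1, and the invariance of the pairing under y = A−1x and µ = λA. A Python/NumPy sketch (with ad hoc names):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    # A random basis change A (u_j = sum_i a_ij v_i); invertible with probability 1.
    A = rng.normal(size=(n, n))
    B = np.linalg.inv(A)                   # the dual bases change with B = A^{-1}
    assert np.allclose(B @ A, np.eye(n))   # cf. (15): sum_k b_ik a_kj = delta_ij

    # Coordinates transform contragrediently: y = A^{-1} x and mu = lambda A,
    # so the pairing alpha(x) = lambda x = mu y is basis independent.
    lam = rng.normal(size=(1, n))
    x = rng.normal(size=(n, 1))
    mu, y = lam @ A, B @ x
    assert np.isclose((lam @ x).item(), (mu @ y).item())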
2. Inner product spaces
Let again V be a real vector space of (finite) dimension n ≥ 1.
Definition 2.1. A map
β : V × V −→ R , (~v, ~u) 7→ β(~v, ~u) (18)
is called a bilinear form, provided it is linear in both arguments:
(1) β(~v1 + λ~v2, ~u) = β(~v1, ~u) + λβ(~v2, ~u) ,
(2) β(~v, ~u1 + λ~u2) = β(~v, ~u1) + λβ(~v, ~u2) ,
for all ~v, ~u, ~v1, ~v2, ~u1, ~u2 ∈ V and all λ ∈ R.
A bilinear form is said to be symmetric, provided that for all ~v, ~u ∈ V:
β(~v, ~u) = β(~u, ~v) . (19)
A bilinear form is called positive semi-definite if for all ~v ∈ V:
β(~v, ~v) ≥ 0 . (20)
It is called positive definite if furthermore β(~v, ~v) = 0 ⇒ ~v = ~0.
Finally, a bilinear form is called non-degenerate if, for ~v ∈ V:
β(~v, ~u) = 0 for all ~u ∈ V ⇒ ~v = ~0 . (21)
A non-degenerate bilinear form is also called an inner product on V.
A symmetric and positive definite inner product is called a scalar product on V .
Definition 2.2. Let V be a finite dimensional real vector space endowed with a scalar product β. The function
k · k : V −→ R , ~v 7→ √β(~v, ~v) (22)
is called the norm (or length) of ~v ∈ V with respect to β.
Since β is a scalar product it follows for all ~v ∈ V and λ ∈ R that k~v k ≥ 0, whereby
k~v k = 0 ⇔ ~v = ~0, and kλ~v k = |λ|k~v k. Most important is the following statement
called Schwarz inequality, which we state without proof:
Proposition 2.1. Let V be a finite dimensional real vector space endowed with a
scalar product β. The induced norm fulfils for all ~v , ~u ∈ V :
|β(~v, ~u)| ≤ k~v k k~uk , (23)
where equality holds if and only if ~v and ~u are linearly dependent.
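For illustration, one may sample a random scalar product and random vectors and confirm (23), together with the equality case, numerically; the following Python/NumPy sketch (ad hoc names) does just that:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5

    # A random symmetric positive definite coefficient matrix defines a scalar product.
    M = rng.normal(size=(n, n))
    G = M.T @ M + n * np.eye(n)

    beta = lambda v, u: v @ G @ u           # beta(v, u) = v^t G u
    norm = lambda v: np.sqrt(beta(v, v))    # the induced norm, cf. (22)

    v, u = rng.normal(size=n), rng.normal(size=n)
    assert abs(beta(v, u)) <= norm(v) * norm(u)                    # inequality (23)
    assert np.isclose(abs(beta(3 * u, u)), norm(3 * u) * norm(u))  # equality, v = 3u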
Definition 2.3. Let again V be a finite dimensional real vector space endowed with a
scalar product β. A linear map f : V −→ V is called symmetric, if for all ~v , ~u ∈ V :
β(f(~u), ~v) = β(f(~v), ~u) . (24)
A symmetric linear map is called positive semi-definite if it fulfills for all ~u ∈ V:
β(f(~u), ~u) ≥ 0 . (25)
If the inequality is strict for all ~u ∈ V \{~0}, then f is called positive definite.
Notice that a positive definite linear map is always invertible and thus an automorphism of V.
Lemma 2.1. An inner product β on V is in one-to-one correspondence with an isomorphism V ≅ V∗.
Proof. Let β be an inner product on V. We may define
f : V −→ V∗ , ~v 7→ β(~v, ·) , i.e. f(~v)(~u) := β(~v, ~u) for all ~u ∈ V . (26)
This map is linear and injective since β is bilinear and non-degenerate. It therefore is an isomorphism, since dim(V∗) = dim(V).
Conversely, let f : V −→ V∗ be an isomorphism. We then consider
β(~v, ~u) := f(~v)(~u) , (27)
for all ~v, ~u ∈ V. The map β : V × V → R thus defined is bilinear and non-degenerate, for f is an isomorphism and f(~v) ∈ V∗ is a linear form on V for all ~v ∈ V.
Notation: It is common to denote f(~v) = β(~v, ·) ≡ v♭ ∈ V∗ and f−1(α) ≡ α♯ ∈ V, where f and its inverse f−1 are called musical isomorphisms.
An inner product thus allows one to identify a vector space with its dual vector space. This should not be confused with the fact that every choice of a basis uniquely determines a corresponding dual basis. For the latter to be (uniquely) determined, however, one needs the whole set of basis vectors, in general. In contrast, an inner product allows one to associate with each individual vector a uniquely defined linear form (and conversely).
A vector space V together with an inner product β is called an inner product space (V, β).
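In coordinates, the musical isomorphisms amount to multiplication by the coefficient matrix of β (introduced below) and by its inverse, respectively. A Python/NumPy sketch (ad hoc names; the sample matrix G is arbitrary):

    import numpy as np

    # Coefficient matrix of an inner product w.r.t. a chosen basis (here symmetric
    # positive definite, hence in particular non-degenerate).
    G = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    flat = lambda v: (G @ v).T                 # v_flat = beta(v, .) as a row matrix
    sharp = lambda a: np.linalg.inv(G) @ a.T   # the inverse musical isomorphism

    v = np.array([[1.0], [4.0]])
    u = np.array([[2.0], [-1.0]])

    # flat(v) applied to u equals beta(v, u) = v^t G u:
    assert np.isclose((flat(v) @ u).item(), (v.T @ G @ u).item())
    # sharp inverts flat:
    assert np.allclose(sharp(flat(v)), v)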
Let ~v1, . . . , ~vn ∈ V be a basis. Also, let ~x = Σ_{1≤i≤n} xi ~vi and ~y = Σ_{1≤j≤n} yj ~vj. Since β is bilinear one has
β(~x, ~y) = Σ_{i,j=1}^{n} xi yj β(~vi, ~vj) ≡ Σ_{i,j=1}^{n} gij xi yj . (28)
Here, the coefficients
gij := β(~vi, ~vj) ∈ R (29)
give rise to the matrix G ≡ (gij)1≤i,j≤n ∈ Rn×n, called the coefficient matrix of the bilinear form β with respect to the chosen basis. As with linear maps, a bilinear form β is uniquely determined by its coefficient matrix with respect to a chosen basis.
Let again ~u1, . . . , ~un ∈ V be another basis and f ∈ AutR(V) be the isomorphism defined by f(~vk) := ~uk for all k = 1, . . . , n. We may set g′ij := β(~ui, ~uj). Accordingly, we may denote by G′ := (g′ij)1≤i,j≤n ∈ Rn×n the coefficient matrix of β with respect to the basis ~u1, . . . , ~un ∈ V. It follows that
g′ij = β(f(~vi), f(~vj))
= β(Σ_{k=1}^{n} aki ~vk, Σ_{l=1}^{n} alj ~vl)
= Σ_{k,l=1}^{n} aki alj β(~vk, ~vl) (30)
= Σ_{1≤k,l≤n} aki gkl alj .
That is,
G′ = At G A , (31)
with At ≡ (aik)1≤i,k≤n ∈ Rn×n being the transpose of the matrix A ≡ (aki)1≤k,i≤n ∈ Rn×n, which represents the automorphism f with respect to the basis ~v1, . . . , ~vn ∈ V.
As a consequence, one may also express the action of a bilinear form β in terms of matrix multiplication, as
β(~x, ~y) = xt G y = x′t G′ y′ . (32)
Notice that the matrix product xt G y ∈ R does not depend on the basis ~v1, . . . , ~vn ∈ V used to represent ~x, ~y ∈ V and β by the coordinate vectors x, y ∈ Rn×1 and the coefficient matrix G ∈ Rn×n. This is due to x′ = A−1 x, and similarly for y′. Furthermore, for all matrices A ∈ Rn×m, B ∈ Rm×k one has (AB)t = Bt At ∈ Rk×n. Also, for all invertible matrices A ∈ Rn×n it follows that (A−1)t = (At)−1.
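The transformation rule (31) and the invariance (32) are easily confirmed numerically, e.g. by the following Python/NumPy sketch (ad hoc names):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 3

    G = rng.normal(size=(n, n))        # coefficient matrix w.r.t. the old basis
    A = rng.normal(size=(n, n))        # a basis change (invertible with prob. 1)
    G_new = A.T @ G @ A                # transformation rule (31)

    # Coordinates transform as x' = A^{-1} x; x^t G y is basis independent, cf. (32):
    x, y = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
    x_new, y_new = np.linalg.inv(A) @ x, np.linalg.inv(A) @ y
    assert np.isclose((x.T @ G @ y).item(), (x_new.T @ G_new @ y_new).item())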
A bilinear form β is non-degenerate if and only if there exists a basis such that the corresponding coefficient matrix is invertible; by (31), this then holds for every basis. Finally, a bilinear form is symmetric if and only if there is a basis such that the corresponding coefficient matrix is symmetric: Gt = G.
Definition 2.4. Let (V, β) be an inner product space where β is a scalar product. A basis ~v1, . . . , ~vn ∈ V is called orthonormal with respect to β, provided that
β(~vi, ~vj) = δij (1 ≤ i, j ≤ n) . (33)
Notice that the coefficient matrix of β with respect to an orthonormal basis is given by the unit matrix. In this case, one thus has β(~x, ~y) = xt y.
Without proof we give the following statement:
Every finite dimensional vector space V that is endowed with a scalar product β possesses an orthonormal basis.
This is proved by what is called the Gram–Schmidt procedure. The latter goes as follows: First, choose any ~u1 ∈ V \{~0} and put ~v1 := ~u1/√β(~u1, ~u1). Clearly, β(~v1, ~v1) = 1. Notice that the definition of ~v1 ∈ V makes sense, for β is supposed to be positive definite. Now take any ~w1 ∈ V \ span(~v1) and consider ~u2 := ~w1 − β(~w1, ~v1)~v1. When the Schwarz inequality is taken into account, one may prove that ~u2 ≠ ~0. Moreover, β(~v1, ~u2) = 0. Then set ~v2 := ~u2/√β(~u2, ~u2) to obtain β(~vi, ~vj) = δij for i, j = 1, 2. To proceed, consider any ~w2 ∈ V \ span(~v1, ~v2) and set
~u3 := ~w2 − β(~w2, ~v1)~v1 − β(~w2, ~v2)~v2 . (34)
Again, when the Schwarz inequality is taken into account, one may show that ~u3 ≠ ~0. Also, β(~vk, ~u3) = 0 for k = 1, 2. Then set ~v3 := ~u3/√β(~u3, ~u3) and proceed until one has constructed ~v1, . . . , ~vn ∈ V such that β(~vi, ~vj) = δij for i, j = 1, . . . , n. It is straightforward to demonstrate that these vectors are linearly independent and thus form an orthonormal basis of (V, β).
One may therefore conclude that any scalar product is fully characterized by an
orthonormal basis. However, the latter is not unique.
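The procedure lends itself directly to implementation. The following Python/NumPy sketch (ad hoc names) performs Gram–Schmidt with respect to an arbitrary scalar product given by a symmetric positive definite coefficient matrix G; for G the unit matrix it reduces to the familiar procedure for the dot product:

    import numpy as np

    def gram_schmidt(vectors, G):
        """Beta-orthonormalize the columns of `vectors`, where beta(v, u) = v^t G u."""
        beta = lambda v, u: v @ G @ u
        basis = []
        for w in vectors.T:                # iterate over the columns of `vectors`
            # Subtract the beta-projections onto the vectors found so far, cf. (34).
            u = w - sum(beta(w, v) * v for v in basis)
            norm = np.sqrt(beta(u, u))
            if norm < 1e-12:
                raise ValueError("input vectors are linearly dependent")
            basis.append(u / norm)
        return np.column_stack(basis)

    rng = np.random.default_rng(3)
    n = 4
    M = rng.normal(size=(n, n))
    G = M.T @ M + n * np.eye(n)                # random symmetric positive definite G

    V = gram_schmidt(rng.normal(size=(n, n)), G)
    assert np.allclose(V.T @ G @ V, np.eye(n)) # beta(v_i, v_j) = delta_ij, cf. (33)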
Definition 2.5. Let V and W be finite dimensional real vector spaces and let βV and βW be scalar products on V and W, respectively. An isomorphism f ∈ HomR(V, W) is called an isometry, provided that for all ~x, ~y ∈ V:
βW(f(~x), f(~y)) = βV(~x, ~y) . (35)
An isometry preserves lengths. It respects the full structure and therefore may also be viewed as an isomorphism
(V, βV) ≅ (W, βW) . (36)
For W = V, it is straightforwardly checked that an automorphism f ∈ AutR(V) is an isometry if and only if the matrix A ∈ Rn×n, which represents f with respect to an orthonormal basis, fulfills
At = A−1 . (37)
Matrices satisfying this relation are called orthogonal matrices. They form a group, called the orthogonal group, that is denoted by O(n) ⊂ Rn×n. Similarly, the set of all isometries forms a group O(V, β) ⊂ AutR(V). It follows that any scalar product β is defined by selecting a basis in V, which is considered as being orthonormal. This defines β uniquely up to isometries.
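For illustration, a rotation matrix of R2 satisfies (37) and hence preserves the dot product; a Python/NumPy sketch (ad hoc names):

    import numpy as np

    t = 0.7                                   # an arbitrary rotation angle
    A = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])

    assert np.allclose(A.T @ A, np.eye(2))    # A is orthogonal: At = A^{-1}, cf. (37)

    rng = np.random.default_rng(4)
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.isclose((A @ x) @ (A @ y), x @ y)  # the dot product is preserved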
Example: We again consider the special case V = Rn. Since Rn possesses a distinguished basis ~e1, . . . , ~en ∈ Rn, one also has a distinguished scalar product on Rn, called the dot product:
~ei · ~ej := δij (1 ≤ i, j ≤ n) . (38)
The dot product is thus the scalar product on Rn that is defined by considering the standard basis as being orthonormal. Any other scalar product is given by a symmetric positive definite linear map f : V −→ V, i.e.
β(~u, ~v) := f(~u) · ~v for all ~u, ~v ∈ V . (39)
Indeed, β is symmetric since f is supposed to be symmetric with respect to the dot product. Also, β is positive definite since f is positive definite with respect to the dot product. We may set f(~ej) = Σ_{i=1}^{n} gij ~ei for all j = 1, . . . , n. It follows that G ≡ (gij)1≤i,j≤n ∈ Rn×n is symmetric and positive definite: Gt = G and xt G x > 0 for all 0 ≠ x ∈ Rn×1. Hence,
β(~ei, ~ej) = Σ_{k=1}^{n} gik ~ek · ~ej = Σ_{k=1}^{n} gik δkj = gij . (40)
Since f is symmetric and positive definite with respect to the dot product, the spectral theorem provides an orthonormal basis (with respect to the dot product) consisting of eigenvectors of f, with strictly positive eigenvalues. After an orthogonal change of the standard basis we may thus assume that f(~ek) = λk² ~ek for all k = 1, . . . , n, where λk > 0 denotes the positive square root of the kth eigenvalue of f. We may set ~bk := ~ek/λk for k = 1, . . . , n. Then, ~b1, . . . , ~bn ∈ Rn is a basis since f ∈ AutR(Rn). In fact, it constitutes an orthonormal basis with respect to β:
β(~bi, ~bj) = f(~bi) · ~bj = (λi/λj) ~ei · ~ej = δij (1 ≤ i, j ≤ n) . (41)
However,
~bi · ~bj = δij/λi² , (42)
i.e. the positive matrix defined by (~bi · ~bj)1≤i,j≤n ∈ Rn×n coincides with G−1. Consequently, by (40),
gij = λi² δij (1 ≤ i, j ≤ n) . (43)
We have thus proved that, with respect to a suitable orthonormal basis for the dot product, all scalar products on Rn are determined by (43), i.e. by coefficient matrices of the form
G = diag(λ1², . . . , λn²) ∈ Rn×n . (44)
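This diagonalization may be reproduced numerically: for a random symmetric positive definite coefficient matrix, an orthonormal eigenbasis brings it to the diagonal form (44). A Python/NumPy sketch (ad hoc names):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 4

    # A random symmetric positive definite f, given by its coefficient matrix G
    # w.r.t. the standard basis; beta(u, v) = f(u) . v, cf. (39).
    M = rng.normal(size=(n, n))
    G = M.T @ M + n * np.eye(n)

    # Orthonormal eigenbasis (columns of U) with strictly positive eigenvalues.
    ev, U = np.linalg.eigh(G)
    assert np.all(ev > 0) and np.allclose(U.T @ U, np.eye(n))

    # W.r.t. the eigenbasis the coefficient matrix is diagonal, cf. (43)/(44);
    # this is the change of basis G' = A^t G A of (31) with A = U.
    assert np.allclose(U.T @ G @ U, np.diag(ev))

    # The vectors b_k = u_k / sqrt(ev_k) form a beta-orthonormal basis, cf. (41):
    B = U / np.sqrt(ev)                # divides column k by sqrt(ev_k)
    assert np.allclose(B.T @ G @ B, np.eye(n))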
Notice that the musical isomorphism
V −→ V∗ , ~x 7→ x♭ (45)
with respect to any scalar product β corresponds to the isomorphism
Rn×1 −→ R1×n , x 7→ xt . (46)
This correspondence refers to any β−orthonormal basis. Especially this holds true for V = Rn and β being given by the dot product. In this case, one simply gets
~x · ~y = x♭(~y) = xt y , (47)
for all ~x, ~y ∈ Rn. That is, one may be tempted to simply identify x♭ ∈ (Rn)∗ with the matrix xt ∈ R1×n whenever the vector ~x ∈ Rn is identified with the matrix x ∈ Rn×1. However, this is not appropriate, since neither ~x · ~y nor x♭(~y) refers to any basis. Thus, one may identify, for instance, the vector ~x ∈ Rn with the matrix x ∈ Rn×1 with the help of any basis in Rn. However, this yields ~x · ~y = xt G y, where again G is the coefficient matrix of the dot product with respect to the chosen basis. That is, one has to identify x♭ ∈ (Rn)∗ with xt G = (Gx)t ∈ R1×n. Notice that in the case considered xt G y is still the matrix representation of the dot product, but with respect to an arbitrary basis. This should not be confused with xt G y representing an arbitrary scalar product β(~x, ~y) with respect to the standard basis.
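A final numerical sketch of this last point (Python/NumPy, ad hoc names): in a non-orthonormal basis the dot product is represented by xt G y rather than by xt y, so that x♭ corresponds to the row matrix xt G:

    import numpy as np

    # A non-orthonormal basis of R^2 as columns (the basis b_1, b_2 from the text).
    P = np.array([[1.0, 1.0],
                  [1.0, -1.0]])
    G = P.T @ P            # coefficient matrix of the dot product w.r.t. this basis

    v1, v2 = np.array([2.0, 5.0]), np.array([1.0, -3.0])   # vectors as n-tuples
    x, y = np.linalg.solve(P, v1), np.linalg.solve(P, v2)  # their coordinates

    assert np.isclose(v1 @ v2, x @ G @ y)      # the dot product is x^t G y ...
    assert not np.isclose(v1 @ v2, x @ y)      # ... and not x^t y, in this basis

    # Hence x_flat is represented by the row matrix x^t G = (G x)^t:
    assert np.allclose((x @ G) @ y, v1 @ v2)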
Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
E-mail address: [email protected]