ACE – Advanced Course in Engineering
Fall 1999, Modified Fall 2008
Linear Vector Spaces
with
Applications to Computer Graphics
Jim Fawcett
Copyright (c) 1998 - 2008
Linear Vector Spaces
Definition 1: Linear Vector Space
A vector space X is a set of elements called vectors, together with an associated field of elements F called
scalars. Two operations are defined on the elements of a vector space. The first is vector addition:
x, y ∈ X  ⇒  x + y ∈ X
[Figure: vector addition of u and v in the plane, illustrating u + v = v + u]
The second is scalar multiplication:
α ∈ F, x ∈ X  ⇒  αx ∈ X
[Figure: scalar multiplication of v by α in the plane]
The sets X and F, together with the addition and scalar multiplication operations, satisfy the following:
1. x + y = y + x
2. (x + y) + z = x + (y + z)
3. ∃ θ ∈ X such that x + θ = x, ∀ x ∈ X
4. α(x + y) = αx + αy
5. (α + β)x = αx + βx
6. (αβ)x = α(βx)
7. 0x = θ, 1x = x
Definition 2: Cartesian Product
Let X and Y be vector spaces over a common field of scalars. The Cartesian product X x Y of X and Y is
the vector space of ordered pairs (x,y) with x ε X and y ε Y. Addition and scalar multiplication are defined
by (x1,y1) + (x2,y2) = (x1+x2 , y1+y2) and α(x,y) = (αx,αy). We write Xn for the Cartesian product of X with
itself n times.
Definition 3: Subspace
A nonempty subset M of a vector space X is called a subspace of X if every vector of the form αx + βy is in
M whenever x and y are both in M, and α and β are scalars.
Definition 4: Sum of Sets
The sum of two sets S and T in a vector space, denoted by S + T, consists of all vectors of the form s + t
where s ε S and t ε T.
A vector space X is the direct sum of two subspaces M and N if every vector x ε X has a unique
representation x = m + n where m ε M and n ε N. This is denoted by:
X = M ⊕ N
Definition 5: Linear Combination
A linear combination of the vectors x1, x2, …, xn in a vector space is a sum of the form:
a1x1 + a2x2 + … + anxn
Definition 6: Subspace Generated by a Subset S, [S]
Suppose S is a subset of a vector space X. The set [S], called the subspace generated by S, or the span of S,
consists of all vectors in X which are linear combinations of vectors in S.
[Figure: vectors s1 and s2 and the subspace [S] they generate]
Proposition 1: Intersection of Subspaces
Let M and N be subspaces of a vector space X. The intersection, M ∩ N, of M and N is a subspace of X.
Proof:
x, y ∈ M ∩ N ⇒ x, y ∈ M and x, y ∈ N ⇒ αx + βy ∈ M and αx + βy ∈ N ⇒ αx + βy ∈ M ∩ N
Proposition 2: Sum of Subspaces
Let M and N be subspaces of a vector space X. Their sum, M + N, is a subspace of X.
Proof:
Recall that M + N = {x : x = m + n, m ∈ M, n ∈ N}
x, y ∈ M + N ⇒ ∃ m1, m2 ∈ M and n1, n2 ∈ N such that x = m1 + n1, y = m2 + n2
x + y = m1 + n1 + m2 + n2 = (m1 + m2) + (n1 + n2) ⇒ x + y ∈ M + N
The same computation with αx + βy in place of x + y shows closure under scalar combinations as well.
Proposition 3:
If M and N are subspaces of a vector space then:
[M ∪ N] = M + N
Proposition 4:
If the vector space X is the sum of M and N then:
X = M ⊕ N  iff  M ∩ N = {θ}
Proof that X = M ⊕ N implies M ∩ N = {θ}:
Assume ∃ u ∈ M ∩ N with u ≠ θ.
Then, given x ∈ X, ∃ m ∈ M, n ∈ N such that x = m + n = (m + u) + (n − u), where
(m + u) ∈ M and (n − u) ∈ N ⇒ the representation is not unique, which contradicts X = M ⊕ N.
∴ X = M ⊕ N ⇒ M ∩ N = {θ}
Proof that M ∩ N = {θ} implies X = M ⊕ N:
Given x ∈ X, let x = m + n, m ∈ M, n ∈ N, where M ∩ N = {θ}.
Assume that x = m + n = m1 + n1, m1 ∈ M, n1 ∈ N.
⇒ m − m1 = n1 − n ∈ M ∩ N = {θ} ⇒ m = m1, n = n1 ⇒ the representation is unique.
Definition 7: Linear Dependence
A vector x is linearly dependent on a set S of vectors in X if x can be expressed as a linear combination of
vectors from S. Equivalently, x is linearly dependent on S if x ε [S].
Theorem 1: Linear Independence
A set of vectors x1, x2, …, xn is linearly independent iff:
α1x1 + α2x2 + … + αnxn = θ  ⇒  αk = 0, k = 1, 2, …, n
Proof of necessity:
Assume α1x1 + α2x2 + … + αnxn = θ with αr ≠ 0 for some r, 1 ≤ r ≤ n.
⇒ αr xr = − Σ(k ≠ r) αk xk ⇒ xr lies in the span of the remaining vectors ⇒ dependence
Proof of sufficiency:
Assume S = {x1, x2, …, xn} is a dependent set. ⇒ ∃ xr ∈ S such that xr = Σ(k ≠ r) αk xk
⇒ Σ(k ≠ r) αk xk − xr = θ  with  αr = −1 ≠ 0
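In En these conditions are easy to check numerically. A minimal sketch (assuming NumPy is available; the helper name is illustrative): stack the vectors as the columns of a matrix and compare its rank to the number of vectors.

```python
import numpy as np

def is_independent(vectors, tol=1e-10):
    """True if the given n-vectors are linearly independent.

    Stacked as the columns of A, the vectors are independent exactly when
    the only solution of A @ alpha = 0 is alpha = 0, i.e. when rank(A)
    equals the number of vectors.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
x3 = x1 + 2.0 * x2                       # dependent on x1 and x2

print(is_independent([x1, x2]))          # True
print(is_independent([x1, x2, x3]))      # False
```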
Corollary 1: Uniqueness of Representation
If x1, x2, …, xn are linearly independent vectors, then:
α1x1 + α2x2 + … + αnxn = β1x1 + β2x2 + … + βnxn  ⇒  αk = βk
Proof:
x = Σk αk xk = Σk βk xk ⇒ Σk (αk − βk) xk = θ ⇒ αk = βk, k = 1, 2, …, n
Definition 8: Basis
A set S of linearly independent elements which generates a vector space X is called a Hamel basis in X. A
vector space having a finite basis is said to be finite dimensional. All other vector spaces are said to be
infinite dimensional.
Normed Linear Vector Spaces
Definition 9: Normed Linear Vector Space
A normed linear vector space is a vector space X on which there is defined a real-valued function which
maps each element x ε X into a real number // x // called the norm of x. It satisfies the axioms:
1. // x // ≥ 0; // x // = 0 iff x = θ      (vector length)
2. // x + y // ≤ // x // + // y //, ∀ x, y ∈ X      (triangle inequality)
3. // αx // = |α| // x //, ∀ α ∈ F, x ∈ X      (vector scaling)
Example 1: Euclidean n-Space is a Normed Linear Space
En is the space of n-tuples x = [x1, x2, …, xn]ᵀ, with α, xi ∈ R (or C), and norm:
// x // = √( |x1|² + |x2|² + … + |xn|² )
Proposition 5: In a normed linear space X:
// x // − // y // ≤ // x − y //, ∀ x, y ∈ X
Proof:
// x // − // y // = // x − y + y // − // y // ≤ // x − y // + // y // − // y // = // x − y //
Definition 10: Convergence
In a normed linear space we say that an infinite sequence of vectors {xn} converges to a vector x if the
sequence {// x – xn//} converges to zero. In this case we write xn → x.
Definition 11: Cauchy Sequence
A sequence {xn} in a normed space is a Cauchy sequence if // xm − xn // → 0 as m, n → ∞.
Proposition 6: In a normed space every convergent sequence is a Cauchy sequence.
Proof:
xn → x ⇒ // xn − xm // = // xn − x + x − xm // ≤ // xn − x // + // x − xm // → 0
Definition 12: Banach Space
A normed linear vector space X is complete if every Cauchy sequence from X has a limit in X. A complete
normed linear vector space is called a Banach space.
Hilbert Spaces
Definition 13: Hilbert Space, Inner Product
A Hilbert space is a complete linear vector space X together with an inner product defined on X x X.
Corresponding to each pair of vectors x, y ε X the inner product <x,y> of x and y is a scalar. The inner
product satisfies the following axioms:
1. <x, y> = <y, x>*      (where α* denotes the complex conjugate of α)
2. <x + y, z> = <x, z> + <y, z>      (additive in first argument)
3. <αx, y> = α <x, y>      (scaling in first argument)
4. <x, x> ≥ 0, and <x, x> = 0 iff x = θ
Proposition 7: Additive in second argument
<x, y + z> = <x, y> + <x, z>
Proof:
<x, y + z> = <y + z, x>* = (<y, x> + <z, x>)* = <y, x>* + <z, x>* = <x, y> + <x, z>
Proposition 8: Scaling second argument
<x, αy> = α* <x, y>
Proof:
<x, αy> = <αy, x>* = (α <y, x>)* = α* <y, x>* = α* <x, y>
Proposition 9: Cauchy-Schwarz Inequality
|<x, y>| ≤ // x // // y //
Equality holds iff x = λy or y = θ.
Proof:
If y = θ the statement is obviously true. Otherwise:
0 ≤ <x − λy, x − λy> = <x, x − λy> − λ <y, x − λy>
  = <x, x> − λ* <x, y> − λ <y, x> + λλ* <y, y>
If λ = <x, y> / <y, y>, then
0 ≤ <x, x> − |<x, y>|² / <y, y> − |<x, y>|² / <y, y> + |<x, y>|² / <y, y> = <x, x> − |<x, y>|² / <y, y>
⇒ <x, x> <y, y> ≥ |<x, y>|²  ⇒  |<x, y>| ≤ // x // // y //
Proposition 10: Hilbert space norm
// x // = √<x, x>  defines a norm on a Hilbert space.
Proof:
Norm properties 1. and 3. are immediate from the inner product definition. For the triangle inequality:
// x + y //² = <x + y, x + y> = <x, x> + <x, y> + <y, x> + <y, y>
  ≤ // x //² + 2 // x // // y // + // y //²      (since <x, y> + <y, x> = 2 Re <x, y> ≤ 2 |<x, y>| ≤ 2 // x // // y // by Cauchy-Schwarz)
  = (// x // + // y //)²
Example 2: Euclidean n space
The space En of n-tuples of real numbers is a Hilbert space with the inner product:
<x, y> = Σi xi yi,  i = 1, …, n
where x = [x1, x2, …, xn]ᵀ and y = [y1, y2, …, yn]ᵀ.
Proposition 11: Zero Inner Product
If x ∈ X and <x, y> = 0 for all y ∈ X, then x = θ.
Proof:
<x, y> = 0 ∀ y ∈ X ⇒ <x, x> = 0 ⇒ x = θ
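A quick numerical illustration of Example 2 and Propositions 9 and 10 (a sketch assuming NumPy): compute the Euclidean inner product, the norm it induces, and spot-check the Cauchy-Schwarz inequality.

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

inner = np.dot(x, y)                 # <x, y> = sum_i x_i y_i
norm_x = np.sqrt(np.dot(x, x))       # // x // = sqrt(<x, x>)
norm_y = np.sqrt(np.dot(y, y))

print(inner, norm_x, norm_y)             # 11.0 3.0 5.0
print(abs(inner) <= norm_x * norm_y)     # Cauchy-Schwarz: True
```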
Proposition 12: Parallelogram Law in Hilbert Spaces
// x + y //² + // x − y //² = 2 // x //² + 2 // y //²
Proof:
// x + y //² + // x − y //² = <x + y, x + y> + <x − y, x − y>
  = <x, x> + <x, y> + <y, x> + <y, y> + <x, x> − <x, y> − <y, x> + <y, y>
  = 2 <x, x> + 2 <y, y> = 2 // x //² + 2 // y //²
Definition 14: Orthogonality
Two vectors x, y in a Hilbert space are said to be orthogonal if <x, y> = 0. This is written as x ⊥ y. A vector
x is said to be orthogonal to a set S if:
x ⊥ S  ⇔  x ⊥ s, ∀ s ∈ S
Proposition 13: Pythagorean Theorem
x ⊥ y ⇒ // x + y //² = // x //² + // y //²
Proof:
// x + y //² = <x + y, x + y> = <x, x> + <x, y> + <y, x> + <y, y> = // x //² + // y //²
Definition 15: Closure Point
A point x ε X is said to be a closure point of a set P if:
given ε > 0, ∃ p ∈ P such that // x − p // < ε
The set of all closure points of P is called the closure of P and is denoted by P̄ (P with an overbar).
Definition 16: Closed Set
P is closed  ⇔  P = P̄
Theorem 2: Every finite-dimensional subspace of a normed linear space is closed.
Luenberger, page 38, Theorems 1 and 2
Theorem 3: Projection Theorem
Let H be a Hilbert space and M a closed subspace of H. Corresponding to any vector x ε H, there is a
unique vector m0 ε M such that // x – m0 // ≤ // x – m // for all m ε M. A necessary and sufficient condition
that m0 ε M be the unique minimizing vector is that x - m0 be orthogonal to M. We say that m0 is the
projection of x onto M.
Luenberger, pages 49-52, Theorems 1 and 2.
[Figure: H = span[X, Y, Z] and M = span[X, Z]. The projection of v onto M is m0 = vp, with vpx = vx, vpz = vz, and v − vp ⊥ M.]
Definition 17: Orthogonal Complement
Given a subset S of a Hilbert space, the set of all vectors orthogonal to S is called the orthogonal
complement of S and is denoted by S┴.
Proposition 14: Let S and T be subsets of a Hilbert space. Then:
1. S⊥ is a closed subspace
2. S ⊂ S⊥⊥
3. S ⊂ T ⇒ T⊥ ⊂ S⊥
4. S⊥⊥⊥ = S⊥
5. S⊥⊥ is the closure of [S]
In 5., S⊥⊥ is the smallest closed subspace containing S.
Luenberger, page 52
Definition 18: Direct Sum
A vector space X is the direct sum of two subspaces M and N if every vector x ε X has a unique
representation of the form x = m + n where m ε M and n ε N. This is denoted by X = M ⊕ N.
Theorem 4: If M is a closed linear subspace of a Hilbert space H, then:
H = M ⊕ M⊥  and  M = M⊥⊥
Luenberger, page 53
Definition 19: Orthogonal Set of Vectors
A set S of vectors in a Hilbert space is said to be an orthogonal set if x ┴ y for each x, y ε S, x ≠ y. The set
is said to be orthonormal if each vector in the set has unit norm.
Proposition 15: Linearly independent set
An orthogonal set of nonzero vectors is a linearly independent set.
Proof:
Given that {xi}, i = 1, …, n, is orthogonal: xi ⊥ xj for i ≠ j.
Suppose θ = Σi αi xi for scalars {αi}. Then for each j = 1, 2, …, n:
0 = <θ, xj> = <Σi αi xi, xj> = Σi αi <xi, xj> = αj <xj, xj> ⇒ αj = 0
so only the trivial combination gives θ.
Theorem 5: Normal Equations
If M is a finite dimensional subspace of a Hilbert space H, equal to the span of a linearly independent set of
vectors {xi}, then the projection m0 of x ε H onto M is given by:
m0 = Σi αi xi,  i = 1, …, m
where the coefficients αi solve the normal equations:
| <x1,x1>  <x2,x1>  ...  <xm,x1> | | α1 |   | <x,x1> |
| <x1,x2>  <x2,x2>  ...  <xm,x2> | | α2 | = | <x,x2> |
|   ...       ...   ...    ...   | | .. |   |  ...   |
| <x1,xm>  <x2,xm>  ...  <xm,xm> | | αm |   | <x,xm> |
Since the set of vectors {xi} is linearly independent a solution for the coefficients {αi} always exists.
Proof:
x − m0 ⊥ M ⇒ x − m0 ⊥ xj, j = 1, 2, …, m ⇒ <x − m0, xj> = 0, j = 1, 2, …, m
⇒ <m0, xj> = <x, xj> ⇒ Σi αi <xi, xj> = <x, xj>, j = 1, 2, …, m
Corollary 2: If H is a Hilbert space, the projection of x ε H onto x1 ε H is given by:
m0 = < x, x1 / // x1 // > ( x1 / // x1 // )
Proof:
Taking m = 1 in Theorem 5:
m0 = α1 x1  and  <x1, x1> α1 = <x, x1>  ⇒  α1 = <x, x1> / // x1 //²
⇒ m0 = < x, x1 / // x1 // > ( x1 / // x1 // )
[Figure: m0 is the projection of x onto the direction of x1]
Transformations
Definition 20: Transformations
Let X and Y be vector spaces and let D be a subset of X. A rule which associates with every element x ε D
an element y ε Y is said to be a transformation from X to Y with domain D, denoted by T: D → Y. We also
say that T is an operator mapping D into Y. If y corresponds to x under T we write y = T(x). A transformation from a vector space X into the space of real or complex scalars is said to be a functional on X.
Definition 21: Linearity
A transformation T mapping a vector space X into a vector space Y is said to be linear if for every x, y ε X
and scalars α, β ε F we have:
T (x  y )  T ( x)  T ( y )
Definition 22: Inverse transformations
Let T:X → Y be a linear operator between two vector spaces X and Y. Consider the equation Tx = y for a
given y ε Y. This equation:
1. may have a unique solution x ε X.
2. may have no solution.
3. may have more than one solution.
Condition 1. holds for every y ε Y iff the mapping T from X to Y is one-to-one and has range equal to Y.
In this case T has an inverse T⁻¹ such that T⁻¹y = x whenever Tx = y.
Proposition 16: If a linear operator T: X → Y has an inverse, the inverse T⁻¹ is linear.
Example 3: A:Rm →Rn
We denote the vector space of n-tuples with real scalar values as Rn. A transformation, A, from Rm into Rn
is denoted by A:Rm →Rn. If A is a linear transformation we write A ε L(Rm,Rn). Every A ε L(Rm,Rn) is a
matrix with m columns and n rows.
Definition 23: Range and Null Spaces, R(T) and N(T)
Let T be a transformation from X to Y with domain D. The collection of all vectors y ε Y for which there is
an x ε D with y = Tx is called the Range of T, denoted by R(T). The set {x ε X : Tx = θ} is called the null
space of T, denoted by N(T).
Definition 24: Adjoint operator on Hilbert Spaces
Suppose A is an operator mapping Hilbert space G into Hilbert space H, i.e., A : G → H. The adjoint
operator A* : H → G is defined by the relation:
<x, A*y> = <Ax, y>
Example 4: If A: Rm → Rn then A is a matrix with n rows and m columns, and A* = Aᵀ, the matrix transpose.
If Ax = y, then yi = (Ax)i = Σj aij xj, j = 1, …, m, for each i = 1, 2, …, n.
<Ax, y> = Σi yi Σj aij xj = Σj xj Σi aij yi = Σj xj (Aᵀy)j = <x, Aᵀy>  ⇒  a*ji = aij  ⇒  A* = Aᵀ
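A quick numerical check of Example 4 (assuming NumPy): for a real matrix the defining relation <Ax, y> = <x, Aᵀy> holds for arbitrary x and y.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))     # A : R^3 -> R^4, 4 rows by 3 columns
x = rng.standard_normal(3)
y = rng.standard_normal(4)

lhs = np.dot(A @ x, y)              # <Ax, y> computed in R^4
rhs = np.dot(x, A.T @ y)            # <x, A* y> computed in R^3, A* = A^T
print(np.isclose(lhs, rhs))         # True
```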
Theorem 6: Fundamental Theorem of Linear Algebra
If A is a bounded linear operator between two real Hilbert spaces, then:
1. [R(A)]⊥ = N(A*)
2. cl R(A) = [N(A*)]⊥
3. [R(A*)]⊥ = N(A)
4. cl R(A*) = [N(A)]⊥
where cl R(A) and cl R(A*) denote the closures of the ranges of A and A*, respectively.
If X and Y are Hilbert spaces and A is a bounded linear operator A : X → Y, then:
X = N(A) ⊕ [N(A)]⊥  ⇒  X = N(A) ⊕ cl R(A*)
Y = cl R(A) ⊕ [cl R(A)]⊥  ⇒  Y = cl R(A) ⊕ N(A*)
Luenberger, pages 157 and 53
Proposition 17: Solutions of Linear Equations in Hilbert Spaces
If X and Y are Hilbert spaces and A is a bounded linear operator A : X → Y, then the existence of solutions of
the operator equation Ax = y is determined by the subspaces R(A), N(A), R(A*), and N(A*).
If N(A*) = {θ} then a solution always exists, because y must be contained in the range of A.
If y ∈ R(A) then there is a unique solution only if N(A) = {θ}.
Otherwise there are an infinite number of solutions.
In that case we often select the minimum norm solution.
If y ∉ R(A) then no solution exists.
In that case we often select the best approximation provided by the projection theorem.
If N(A) ≠ {θ} there are an infinite number of solutions for the projection.
In that case we often select the projection of minimum norm.
[Figure: the four fundamental subspaces for A(x) = y. A : X → R(A) ⊆ Y and A* : Y → R(A*) ⊆ X, with X = cl R(A*) ⊕ N(A) and Y = cl R(A) ⊕ N(A*).]
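In finite dimensions these cases can be explored with a least-squares solver. A sketch (assuming NumPy): np.linalg.lstsq returns the best-approximation solution when y is not in R(A), and the minimum norm solution when the solution is not unique.

```python
import numpy as np

# Underdetermined: N(A) is nontrivial, so Ax = y has infinitely many
# solutions; lstsq returns the one of minimum norm.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # A : R^3 -> R^2
y = np.array([1.0, 2.0])
x_min_norm, *_ = np.linalg.lstsq(A, y, rcond=None)
print(x_min_norm)                         # [1. 2. 0.]

# Overdetermined: z is not in R(B), so no exact solution exists; lstsq
# returns the x whose image Bx is the projection of z onto R(B).
B = np.array([[1.0], [1.0], [1.0]])       # B : R^1 -> R^3
z = np.array([0.0, 1.0, 5.0])
x_best, *_ = np.linalg.lstsq(B, z, rcond=None)
print(x_best)                             # [2.]  (the mean of z)
```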
Lines, Planes, and Intersections in E3
Proposition 18: Equation of a line passing through two points
A straight line is a set of points, obeying a linear relationship. If it passes through the points p1 and p2 then
any point on the line is given by:
x = λ p1 + (1 − λ) p2
[Figure: the point x on the line through p1 and p2]
Proposition 19: Equation of a plane
The equation of a hyperplane (in E3 a hyperplane is an ordinary plane) is given by:
<n, p> = c
where n is a vector, called the normal, orthogonal to the plane, p is any point in the plane, and c is a
constant determined by the distance of the plane from the origin and // n //. The normal n of a plane passing
through the three points p1, p2, and p3 satisfies:
<n, p1 − p2> = <n, p2 − p3> = <n, p3 − p1> = 0
[Figure: a plane through p1, p2, and p3 with normal n]
Proposition 20: Computation of normal vector n in E3
Given three points, q, r, and s, lying in the plane, the components of normal n are found by solving the
equation:
| qx  qy  qz | | nx |   | c |
| rx  ry  rz | | ny | = | c |
| sx  sy  sz | | nz |   | c |
The constant c determines the magnitude of the normal, n. Often c is chosen to be unity, or so that //n// = 1.
Each row is an equation of the form <n, p> = c.
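A sketch of Proposition 20 (assuming NumPy; the function name is illustrative): solve the 3-by-3 system with c = 1 and rescale so that //n// = 1. This particular choice of c fails if the plane passes through the origin; in that case the normal can instead be computed from a cross product of two edge vectors.

```python
import numpy as np

def plane_normal(q, r, s, c=1.0):
    """Unit normal of the plane through q, r, s (Proposition 20).

    Each row of the matrix is one of the points; solving <n, p> = c for
    p = q, r, s gives a normal, which is then rescaled to unit length.
    """
    P = np.vstack([q, r, s])
    n = np.linalg.solve(P, np.full(3, c))
    return n / np.linalg.norm(n)

q = np.array([3.0, 0.0, 0.0])
r = np.array([0.0, 3.0, 0.0])
s = np.array([0.0, 0.0, 3.0])
print(plane_normal(q, r, s))        # [0.577 0.577 0.577] = (1,1,1)/sqrt(3)
```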
Proposition 21: Intersection of line with plane in E3
Suppose that a line, determined by the points p and q, intersects with a plane determined by <x,n> = c,
where n is a normal to the plane and c defines the plane’s distance from the origin. Then the intersection of
the line and plane is given by:
x0 = λ0 p + (1 − λ0) q,   λ0 = ( c − <q, n> ) / < p − q, n >
To demonstrate this relationship, simply solve simultaneously the two equations:
x0 = λ0 p + (1 − λ0) q  and  <n, x0> = c
where x0 is the point of intersection.
[Figure: the intersection point x0 of the line through p and q with the plane with normal n]
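A direct transcription of Proposition 21 (assuming NumPy; the function name is illustrative). It assumes the line is not parallel to the plane, i.e. <p − q, n> ≠ 0.

```python
import numpy as np

def line_plane_intersection(p, q, n, c):
    """Intersection of the line through p and q with the plane <x, n> = c."""
    lam = (c - np.dot(q, n)) / np.dot(p - q, n)   # lambda_0 from Proposition 21
    return lam * p + (1.0 - lam) * q              # x_0

# Line from the origin to (2, 2, 2) crossing the plane z = 1
p = np.array([0.0, 0.0, 0.0])
q = np.array([2.0, 2.0, 2.0])
n = np.array([0.0, 0.0, 1.0])
print(line_plane_intersection(p, q, n, 1.0))      # [1. 1. 1.]
```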
Proposition 22: Angle between two vectors
The angle between two vectors p and q is determined by the relationship:
cos θ = < p / // p //, q / // q // > = < p, q > / ( // p // // q // )
[Figure: m is the projection of p onto the direction of q]
// m // = < p, q / // q // > = // p // cos θ
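Proposition 22 in a few lines (a sketch assuming NumPy):

```python
import numpy as np

def angle_between(p, q):
    """Angle (radians) between p and q: cos(theta) = <p, q> / (//p// //q//)."""
    cos_theta = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))   # clip guards round-off

p = np.array([1.0, 0.0, 0.0])
q = np.array([1.0, 1.0, 0.0])
print(np.degrees(angle_between(p, q)))    # 45.0
```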
Computer Graphics: Perspective Views and Shading
Application #1: Projection of line in E3 on a plane between line and viewer
The endpoints of the projection lie on rays from the viewer’s position to the endpoints of the line. The
calculation of the line usually assumes the viewer’s position lies on a normal to the plane, passing through
the center of the plane.
[Figure: rays from the observer's position through the endpoints p and q of the line intersect the computer screen (the plane with normal n) at the projected points, such as r]
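This projection is Proposition 21 applied to each endpoint: the 'line' runs from the endpoint to the observer, and the 'plane' is the screen. A sketch (assuming NumPy; names and the sample geometry are illustrative):

```python
import numpy as np

def project_to_screen(point, observer, n, c):
    """Project a 3-D point onto the screen plane <x, n> = c along the ray
    from the observer through the point."""
    d = point - observer
    lam = (c - np.dot(observer, n)) / np.dot(d, n)
    return observer + lam * d

# Screen is the plane z = 0; the observer sits on its normal at z = 5.
observer = np.array([0.0, 0.0, 5.0])
n, c = np.array([0.0, 0.0, 1.0]), 0.0

p = np.array([1.0, 2.0, -5.0])      # one endpoint of the viewed line
q = np.array([-3.0, 0.0, -5.0])     # the other endpoint
print(project_to_screen(p, observer, n, c))   # [ 0.5  1.   0. ]
print(project_to_screen(q, observer, n, c))   # [-1.5  0.   0. ]
```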
Example #1: Projection of Triangular Patches onto Flat Screen
A surface can be projected onto the computer screen by dividing the surface into an array of points.
Triangular patches are constructed between every three points (so an array of four points would generate
two patches) and each patch projected onto the screen. The view becomes quite useful if we shade each
patch based on its orientation to some light source.
[Figure: a triangular patch with normal n; the vector l points from the patch toward the light source; the patch is projected onto the computer screen for the observer]
Application #2: Computing the shade of a plane segment with single light source
The shade of a plane segment, usually taken to be a triangular patch, is proportional to the cosine of the
angle between the normal to the plane and a vector pointing at the light source. If the angle is zero (cosine
is one) the surface is white. If the angle is 90 degrees (cosine is zero) the surface is black.
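A sketch of this shading rule (assuming NumPy; names are illustrative): the patch normal comes from a cross product of two edge vectors, and the shade is the cosine of the angle between that normal and the direction toward the light, clamped at zero for patches facing away.

```python
import numpy as np

def patch_shade(p1, p2, p3, light_pos):
    """Shade of a triangular patch: cos(angle between patch normal and the
    direction to the light); 1.0 = faces the light, 0.0 = edge-on or facing away."""
    n = np.cross(p2 - p1, p3 - p1)            # normal to the patch
    n = n / np.linalg.norm(n)
    l = light_pos - (p1 + p2 + p3) / 3.0      # patch center toward light
    l = l / np.linalg.norm(l)
    return max(0.0, np.dot(n, l))

p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([1.0, 0.0, 0.0])
p3 = np.array([0.0, 1.0, 0.0])
light = np.array([1.0 / 3.0, 1.0 / 3.0, 10.0])   # directly above the patch
print(patch_shade(p1, p2, p3, light))            # 1.0
```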
Mathematical Notation
∀            for all
∋            such that
∃            there exists
⇒            implies
iff, ⇔       if and only if, is equivalent to
x ∈ X        x is contained in X
θ            vector of zero length
S = {x : P(x)}    S is the set of x satisfying P(x)
A ⊂ B        A is a subset of B, e.g., x ∈ A ⇒ x ∈ B
A ∪ B        union of A and B = {x : x ∈ A or x ∈ B}
A ∩ B        intersection of A and B = {x : x ∈ A and x ∈ B}
R            set of real numbers
Rn           set of real n-tuples
T : X → Y    transformation mapping set X into set Y
// x //      norm of x, a measure of length
<x, y>       inner product, a measure of the angle between x and y
En           Euclidean n-space, the set of real n-tuples endowed with the norm and inner product:
             // x // = √( x1² + x2² + … + xn² ) = √<x, x>
             <x, y> = xᵀ y = Σi xi yi
Cn           set of complex n-tuples endowed with the norm and inner product:
             // x // = √( |x1|² + |x2|² + … + |xn|² ) = √<x, x>
             <x, y> = xᵀ y* = Σi xi yi*,  where y* is the complex conjugate of y
References
1. These notes, Jim Fawcett, 1998.
2. Optimization by Vector Space Methods, David Luenberger, Wiley, 1969.
3. Linear Algebra and Its Applications, Gilbert Strang, Academic Press, 1976.
4. Fast Algorithms for 3D Graphics, Georg Glaeser, Springer-Verlag, 1994.
5. Programming Windows 95, Charles Petzold, Microsoft Press, 1996.
6. Programming Windows, Charles Petzold, Microsoft Press, 1999.
7. Teach Yourself Visual C++ 6 in 21 Days, Davis Chapman, SAMS, 1998.
8. The MFC Answer Book, Eugene Kain, Addison Wesley, 1998.

Notes on the references:
1. These notes were developed from material presented in the course CSE691 – Software Modeling and Analysis. Covered there, but not here, are representations of software other than code, study of asynchronous systems using message passing, processes and threads, and queuing theory, all directed to construction of sound software architectures.
2. This reference was the principal source for much of the material presented here. The statements of definitions, propositions, and theorems follow this reference closely. Virtually all of the proofs not presented in class may be found in this text.
3. Strang's book is devoted to finite dimensional spaces, and so many of the statements, derivations, and proofs are simpler than those found in reference #1. This text is nicely motivated with examples.
4. This text covers basic graphics programming and a lot of material on 3D modeling, hidden surface removal, and painting algorithms. Recommended if you plan to do any serious 3D programming.
5. Windows programming using the Win32 API; does not use the Microsoft Foundation Class (MFC) library or wizards.
6. Same as #5 except has added sections on multimedia and sockets programming.
7. A very readable introduction to Windows programming, including graphics rendering. Assumes you know the basics of C++. Uses the MFC library.
8. Answers to sophisticated questions about Windows programming using the MFC library. Assumes you know the material presented in reference #4 and have considerable experience with Windows programming.