VECTOR FIELDS
Keijo Ruohonen
2013
Contents

I    POINT. VECTOR. VECTOR FIELD
     1.1 Geometric Points
     1.2 Geometric Vectors
     1.3 Coordinate Points and Vectors
     1.4 Tangent Vectors. Vector Fields. Scalar Fields
     1.5 Differential Operations of Fields
     1.6 Nonstationary Scalar and Vector Fields

II   MANIFOLD
     2.1 Graphs of Functions
     2.2 Manifolds
     2.3 Manifolds as Loci
     2.4 Mapping Manifolds. Coordinate-Freeness
     2.5 Parametrized Manifolds
     2.6 Tangent Spaces
     2.7 Normal Spaces
     2.8 Manifolds and Vector Fields

III  VOLUME
     3.1 Volumes of Sets
     3.2 Volumes of Parametrized Manifolds
     3.3 Relaxed Parametrizations

IV   FORMS
     4.1 k-Forms
     4.2 Form Fields
     4.3 Forms and Orientation of Manifolds
     4.4 Basic Form Fields of Physical Fields

V    GENERALIZED STOKES’ THEOREM
     5.1 Regions with Boundaries and Their Orientation
     5.2 Exterior Derivatives
     5.3 Exterior Derivatives of Physical Form Fields
     5.4 Generalized Stokes’ Theorem

VI   POTENTIAL
     6.1 Exact Form Fields and Potentials
     6.2 Scalar Potential of a Vector Field in R³
     6.3 Vector Potential of a Vector Field in R³
     6.4 Helmholtz’s Decomposition
     6.5 Four-Potential
     6.6 Dipole Approximations and Dipole Potentials

VII  PARTIAL DIFFERENTIAL EQUATIONS
     7.1 Standard Forms
     7.2 Examples

Appendix 1: PARTIAL INTEGRATION AND GREEN’S IDENTITIES
     A1.1 Partial Integration
     A1.2 Green’s Identities

Appendix 2: PULLBACKS AND CURVILINEAR COORDINATES
     A2.1 Local Coordinates
     A2.2 Pullbacks
     A2.3 Transforming Derivatives of Fields
     A2.4 Derivatives in Cylindrical and Spherical Coordinates

Appendix 3: ANGLE
     A3.1 Angle Form Fields and Angle Potentials
     A3.2 Planar Angles
     A3.3 Solid Angles
     A3.4 Angles in Rⁿ

References
Index
Foreword
These lecture notes form the base text for the course ”MAT-60506 Vector Fields”. They are
translated from the older Finnish lecture notes for the course ”MAT-33351 Vektorikentät”, with
some changes and additions.

These notes deal with the basic concepts of modern vector field theory: manifolds, (differential)
forms, form fields, the Generalized Stokes’ Theorem, and various potentials. A special goal is a
unified, coordinate-free, physico-geometric representation of the subject matter. As sufficient
background, basic univariate calculus, matrix calculus and elements of classical vector analysis
are assumed.

Classical vector analysis is one of the oldest areas of mathematical analysis.1 Modelling
structural strength, fluid flow, thermal conduction, electromagnetics, vibration etc. in three-space
requires generalizing the familiar concepts and results of univariate calculus. There seem to be
a lot of these generalizations. Indeed, vector analysis—classical as well as modern—has been
largely shaped and created by the many needs of physics and various engineering applications.
For the latter it is central to be able to formulate a problem so that fast and accurate numerical
methods can be readily applied. This generally means specifying the local behavior of the
phenomenon using partial differential equations (PDEs) of a standard type, which can then be
solved globally using program libraries. PDEs are not dealt with extensively here, and mainly
via examples; on the other hand, the basic concepts and results having to do with their derivation
are emphasized, and treated much more extensively.

Modern vector analysis introduces concepts which greatly unify and generalize the many
scattered notions of classical vector analysis. Basically there are two machineries for doing this:
manifolds and form fields, and Clifford algebras. These notes deal with the former (the latter
is introduced in the course ”Geometric Analysis”).

The style, level and order of presentation of the famous textbook Hubbard & Hubbard
have turned out to be well-chosen, and they have been followed here, too, to an extent. Many
tedious and technical derivations and proofs are meticulously worked out in that book, and are
omitted here. As another model the more advanced book Loomis & Sternberg might be
mentioned, too.

Keijo Ruohonen

1 There is a touch of history in the venerable Finnish classics Tallqvist and Väisälä, too.
”One need only know that geometric objects in spacetime
are entities that exist independently of coordinate systems
or reference frames.”
(C.W. Misner & K.S. Thorne & J.A. Wheeler: Gravitation)
Chapter 1
POINT. VECTOR. VECTOR FIELD
1.1 Geometric Points
A point in space is a physico-geometric primitive, and is not given any particular definition
here. Let us just say that dealing with points, lines, planes and solids is mathematically part
of so-called solid geometry. In what follows points will be denoted by capital italic letters:
P, Q, R, . . . and P1 , P2 , . . . , etc. The distance between the points P and Q is denoted by
d(P, Q). Obviously d(P, P ) = 0, d(P, Q) = d(Q, P ) and
d(P, R) ≤ d(P, Q) + d(Q, R) (triangle inequality).
An open P -centered ball of radius R is the set of all points Q with d(P, Q) < R, and is
denoted by B(R, P ). Further:
• The point set A is open, if for each point P in it there is a number RP > 0 such that
B(RP , P ) ⊆ A. In particular the empty set ∅ is open.
• The boundary ∂A of the point set A is the set of all points P such that every open ball
B(R, P ) (R > 0) contains both a point of A and a point of the complement of A. In
particular the boundary of the empty set is empty. A set is thus open if and only if it does
not contain any of the points of its boundary.
• The point set A is closed, if it contains its boundary. In particular the empty set is thus
closed. Since the boundaries of a set and its complement clearly are the same, a set is
closed if and only if its complement is open.
• The closure1 of the point set A is the set Ā = A ∪ ∂A and the interior is A° = A − ∂A.
The interior of an open set is the set itself, as is the closure of a closed set.
Geometric points naturally cannot be added, subtracted, or multiplied by a scalar (a real-valued constant). It will be remembered from basic calculus that for coordinate points these
operations are defined. But in reality they are then the corresponding operations for vectors, as
will be seen soon. Points and vectors are not the same thing.
Note. Here and in the sequel only points, vectors and vector fields in space are explicitly dealt
with. The concepts may be defined for points, vectors and vector fields in the plane or on the
real axis as well. For instance, an open ball in the plane is an open disc, and on the real axis an
open ball is an open finite interval, and so on. To an extent they can be defined in higher
dimensions, too.
1 Do not confuse this with the complement, which is often also denoted by an overbar!
1.2 Geometric Vectors
The directed line segment connecting the two points P (the initial point) and Q (the terminal
point) is denoted by −→PQ. Two such directed line segments −→PQ and −→RS are said to be
equivalent if they can be obtained from each other by a parallel transform, i.e., there is a parallel
transform which takes P to R and Q to S, or vice versa.

Directed line segments are thus partitioned into equivalence classes. Within each equivalence
class directed line segments are mutually equivalent, while those in different equivalence classes
are not. The equivalence class containing the directed line segment −→PQ is denoted by ⟨−→PQ⟩.
The directed line segment −→PQ is then a representative of the class. Each class has a
representative for any given initial (resp. terminal) point.

Geometric vectors can be identified with these equivalence classes. The geometric vector
with a representative −→PQ has a direction (from P to Q) and a length (the distance d(P, Q)).
Since representatives of an equivalence class are equivalent via parallel transforms, direction
and length do not depend on the choice of the representative.
In the sequel geometric vectors will be denoted by small italic letters equipped with an
overarrow: ~r, ~s, ~x . . . and ~r0 , ~r1 , . . . , etc. For convenience the zero vector ~0 will be included,
too. It has no direction and zero length. The length of the vector ~r is denoted by |~r|. A vector
with length = 1 is called a unit vector.
A physical vector often has a specific physical unit L (sometimes also called a dimension),
e.g. kg/m²/s. In this case the direction of the geometric vector ~r gives the direction of the
physical action—unless ~r is the zero vector—and the magnitude |~r| is given in the physical
unit L. Note that if the unit L is a unit of length, say the metre, then a geometric vector ~r may
be considered a physical vector, too. A physical vector may be unitless, so that it has no attached
physical unit, or rather has an empty unit. Physical units can be multiplied, divided and raised
to powers; the empty unit has the (purely numerical) value 1 in these operations.2

2 See e.g. Gibbings, J.C.: Dimensional Analysis. Springer-Verlag (2011) for many more details of dimensional
analysis.

Often, however, a physical vector is completely identified with a geometric vector (with a
proper conversion of units). In the sequel we just generally speak about vectors.
Vectors have the operations familiar from basic courses of mathematics. We give the geometric definitions of these in what follows. Geometrically it should be quite obvious that they
are well-defined, i.e., independent of choices of representatives.
• The opposite vector of the vector ~r = ⟨−→PQ⟩ is the vector

  −~r = ⟨−→QP⟩.

  In particular −~0 = ~0. The unit of a physical vector remains the same in this operation.

• The sum of the vectors

  ~r = ⟨−→PQ⟩  and  ~s = ⟨−→QR⟩

  (note the choice of the representatives) is the vector

  ~r + ~s = ⟨−→PR⟩.

  In particular we define

  ~r + ~0 = ~0 + ~r = ~r

  and

  ~r + (−~r) = (−~r) + ~r = ~0.
Only vectors sharing a unit can be physically added, and the unit of the sum is this unit.
Addition of vectors is commutative and associative, i.e.
~r + ~s = ~s + ~r
and ~r + (~s + ~t ) = (~r + ~s ) + ~t.
These are geometrically fairly obvious. Associativity implies that long sums may be
parenthesized in any (correct) way, or written totally without parentheses, without the
result changing.
The difference of the vectors ~r and ~s is the vector
~r − ~s = ~r + (−~s ).
For physical vectors the units again should be the same in this operation.
• If ~r = ⟨−→PQ⟩ is a vector and λ a positive scalar, then λ~r is the vector obtained as follows:

  – Take the ray starting from P through the point Q.
  – On this ray find the point R whose distance from P is λ|~r|.
  – Then λ~r = ⟨−→PR⟩.

  In addition it is agreed that λ~0 = ~0 and 0~r = ~0.
  This operation is multiplication of a vector by a scalar. Defining further (−λ)~r = −(λ~r) we
  get multiplication by a negative scalar. Evidently

  1~r = ~r ,  (−1)~r = −~r ,  2~r = ~r + ~r ,  etc.
  If the physical scalar λ and the physical vector ~r have their physical units, the unit of λ~r is
  their product. With a bit of work the following laws of calculation can be (geometrically)
  verified:

  λ₁(λ₂~r) = (λ₁λ₂)~r ,  (λ₁ + λ₂)~r = λ₁~r + λ₂~r  and  λ(~r₁ + ~r₂) = λ~r₁ + λ~r₂,

  where λ, λ₁, λ₂ are scalars and ~r, ~r₁, ~r₂ are vectors.

  Division of a vector ~r by a scalar λ ≠ 0 is multiplication of the vector by the inverse 1/λ,
  denoted by ~r/λ.

  A frequently occurring multiplication by a scalar is normalization of a vector, where a vector
  ~r ≠ ~0 is divided by its length. The result ~r/|~r| is a unit vector (and unitless).
−→
−→
• The angle ∠(~r, ~s ) spanned by the vectors ~r = hP Qi and ~s = hP Ri (note the choice
−→
−→
of representatives) is the angle between the directed line segments P Q and P R given in
radians in the interval [0, π] rad. It is obviously assumed here that ~r, ~s 6= ~0. Moreover an
angle is always unitless, the radian is not a physical unit.
• The distance of the vectors ~r = ⟨−→PQ⟩ and ~s = ⟨−→PR⟩ (note the choice of representatives) is

  d(~r, ~s) = d(Q, R) = |~r − ~s|.

  In particular d(~r, ~0) = |~r|. This distance also satisfies the triangle inequality

  d(~r, ~s) ≤ d(~r, ~t) + d(~t, ~s).
• The dot product (or scalar product) of the vectors ~r ≠ ~0 and ~s ≠ ~0 is

  ~r • ~s = |~r||~s| cos ∠(~r, ~s).

  In particular

  ~r • ~0 = ~0 • ~r = 0  and  ~r • ~r = |~r|².

  The dot product is commutative, i.e.

  ~r • ~s = ~s • ~r,

  and bilinear, i.e.

  ~r • (λ~s + η~t) = λ(~r • ~s) + η(~r • ~t),

  where λ and η are scalars. Geometrically commutativity is obvious. Bilinearity on the
  other hand requires a somewhat complicated geometric proof. Using coordinates makes
  bilinearity straightforward.

  The unit of the dot product of physical vectors is the product of their units. Geometrically,
  if ~s is a (unitless) unit vector, then

  ~r • ~s = |~r| cos ∠(~r, ~s)

  is the (scalar) projection of ~r on ~s. (The projection of a zero vector is of course always
  zero.)
• The cross product (or vector product) of the vectors ~r and ~s is the vector ~r × ~s given by
  the following. First, if ~r or ~s is ~0, or ∠(~r, ~s) is 0 or π, then ~r × ~s = ~0. Otherwise ~r × ~s is
  the unique vector ~t satisfying

  – |~t| = |~r||~s| sin ∠(~r, ~s),
  – ∠(~r, ~t) = ∠(~s, ~t) = π/2, and
  – ~r, ~s, ~t is a right-handed system of vectors.

  The cross product is anticommutative, i.e.

  ~r × ~s = −(~s × ~r),

  and bilinear, i.e.

  ~r × (λ~s + η~t) = λ(~r × ~s) + η(~r × ~t),

  where λ and η are scalars. Geometrically anticommutativity is obvious: handedness
  changes. Bilinearity again takes a complicated geometric proof, but is fairly easily seen
  using coordinates.
  The cross product is an information-dense operation, involving lengths of vectors, angles and
  handedness. It is easily handled using coordinates, too. Geometrically

  |~r × ~s| = |~r||~s| sin ∠(~r, ~s)

  is the area of the parallelogram with sides of lengths |~r| and |~s| and spanning angle
  ∠(~r, ~s). If ~r and ~s are physical vectors, then the unit of the cross product ~r × ~s is the
  product of their units.
• Combining these products we get the scalar triple product

  ~r • (~s × ~t)

  and the vector triple products

  (~r × ~s) × ~t  and  ~r × (~s × ~t).

  There being no danger of confusion, the scalar triple product is usually written without
  parentheses as ~r • ~s × ~t.

  The scalar triple product is cyclically symmetric, i.e.

  ~r • ~s × ~t = ~s • ~t × ~r = ~t • ~r × ~s.

  By this and the commutativity of the dot product, the operations • and × can be interchanged,
  i.e.

  ~r • ~s × ~t = ~r × ~s • ~t.

  Geometrically it is easily noted that the scalar triple product ~r • ~s × ~t is the volume of the
  parallelepiped whose edges incident on the vertex P are the vectors

  ~r = ⟨−→PR⟩ ,  ~s = ⟨−→PS⟩  and  ~t = ⟨−→PT⟩,

  with a positive sign if ~r, ~s, ~t is a right-handed system, and a negative sign otherwise. (As
  special cases, situations where the scalar triple product is = 0 should be included, too.)
  Cyclic symmetry follows geometrically immediately from this observation.

  The triple vector product expansions (also known as Lagrange’s formulas) are

  (~r × ~s) × ~t = (~r • ~t)~s − (~s • ~t)~r  and
  ~r × (~s × ~t) = (~r • ~t)~s − (~r • ~s)~t.

  These are somewhat difficult to prove geometrically; proofs using coordinates are easier
  (see also the numerical sketch below).
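These identities are easy to check numerically. The following minimal sketch (an added illustration, not part of the original notes; it assumes Python with NumPy is available) verifies the cyclic symmetry and Lagrange’s formulas for arbitrarily chosen coordinate vectors:

import numpy as np

rng = np.random.default_rng(0)
r, s, t = rng.standard_normal((3, 3))   # three arbitrary coordinate vectors of R^3

# Cyclic symmetry of the scalar triple product: r . (s x t) = s . (t x r) = t . (r x s)
stp = np.dot(r, np.cross(s, t))
assert np.isclose(stp, np.dot(s, np.cross(t, r)))
assert np.isclose(stp, np.dot(t, np.cross(r, s)))

# Triple vector product expansions (Lagrange's formulas)
assert np.allclose(np.cross(np.cross(r, s), t), np.dot(r, t) * s - np.dot(s, t) * r)
assert np.allclose(np.cross(r, np.cross(s, t)), np.dot(r, t) * s - np.dot(r, s) * t)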
Exactly as for points, we can define an ~r-centered open ball B̃(R, ~r) of radius R for vectors,
open and closed sets of vectors, and boundaries, closures and interiors of sets of vectors, but a
geometric intuition is then not as easily obtained.
1.3 Coordinate Points and Vectors
In basic courses on mathematics points are thought of as coordinate points, i.e. triples (a, b, c) of
real numbers. In the background there is then an orthonormal right-handed coordinate system
with its axes and origin. Coordinate points will be denoted by small boldface letters: r, s,
x, . . . and r0 , r1 , . . . , etc. The coordinate point corresponding to the origin of the system is
0 = (0, 0, 0).
A coordinate system is determined by the corresponding coordinate function which maps
geometric points to the triples of R3 . We denote coordinate functions by small boldface Greek
letters, and their components by the corresponding indexed letters. If the coordinate function is
κ, then the coordinates of the point P are
κ(P) = (κ₁(P), κ₂(P), κ₃(P)).
A coordinate function is bijective giving a one-to-one correspondence between the geometric
space and the Euclidean space R3 .
Distances are given by the familiar Euclidean norm of R³. If

κ(P) = (x₁, y₁, z₁)  and  κ(Q) = (x₂, y₂, z₂),

then

d(P, Q) = ‖κ(P) − κ(Q)‖ = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²).
A coordinate function κ also gives a coordinate representation for vectors. The coordinate
version of the vector ~r = ⟨−→PQ⟩ is

κ(~r) = (κ(Q) − κ(P))ᵀ = (κ₁(Q) − κ₁(P), κ₂(Q) − κ₂(P), κ₃(Q) − κ₃(P))ᵀ.

Note that this is a column array. As is easily seen, this representation does not depend on the
choice of the representative directed line segment. In particular the zero vector has the
representation κ(~0) = (0, 0, 0)ᵀ = 0ᵀ. Also the distance of the vectors ~r = ⟨−→PQ⟩ and
~s = ⟨−→PR⟩ (note the choice of representatives) is obtained from the Euclidean norm of R³:

d(~r, ~s) = ‖κ(~r) − κ(~s)‖ = ‖κ(Q) − κ(R)‖ = d(Q, R).

And then |~r| = d(~r, ~0) = ‖κ(~r)‖.
In the sequel also coordinate representations, or coordinate vectors, will be denoted by
small boldface letters, but it should be remembered that a coordinate vector is a column vector.
Certain coordinate vectors have their traditional notations and roles:

i = (1, 0, 0)ᵀ ,  j = (0, 1, 0)ᵀ ,  k = (0, 0, 1)ᵀ  and  r = (x, y, z)ᵀ.

The vectors i, j, k are the basis vectors and the vector r is used as a generic variable vector. In
the background there of course is a fixed coordinate system and a coordinate function. The row
array versions of these are also used as coordinate points.
Familiar operations on coordinate vectors now correspond exactly to the geometric vector
operations in the previous section. Let us just recall that if

κ(~r) = r = (a₁, b₁, c₁)ᵀ  and  κ(~s) = s = (a₂, b₂, c₂)ᵀ,

then

~r • ~s = r • s = a₁a₂ + b₁b₂ + c₁c₂

and

κ(~r × ~s) = r × s = (b₁c₂ − b₂c₁, c₁a₂ − c₂a₁, a₁b₂ − a₂b₁)ᵀ.

The latter is often given as the more easily remembered formal determinant

        | i  a₁  a₂ |
r × s = | j  b₁  b₂ | ,
        | k  c₁  c₂ |

to be expanded along the first column.
A coordinate transform changes the coordinate function. If κ and κ∗ are two available
coordinate functions, then they are connected via a coordinate transform; that is, there is a 3 × 3
orthogonal matrix3 Q and a coordinate vector b such that

κ∗(P) = κ(P)Q + b  and  κ(P) = κ∗(P)Qᵀ − bQᵀ.

The coordinate representation of a vector ~r = ⟨−→PQ⟩ is transformed similarly:

κ∗(~r) = (κ∗(Q) − κ∗(P))ᵀ = ((κ(Q)Q + b) − (κ(P)Q + b))ᵀ = Qᵀ(κ(Q) − κ(P))ᵀ = Qᵀκ(~r)

and

κ(~r) = Qκ∗(~r).

Note that b is the representation of the origin of the ”old” coordinate system in the ”new”
system, and that the columns of Qᵀ are the representations of the basis vectors i, j, k of the ”old”
coordinate system in the ”new” system. Similarly −bQᵀ is the representation of the origin of
the ”new” coordinate system in the ”old” system, and the columns of Q are the representations
of the ”new” basis vectors i∗, j∗, k∗ in the ”old” system.

3 Here and in the sequel matrices are denoted by boldface capital letters. Since handedness needs to be preserved,
we must have here det(Q) = 1. Recall that orthogonality means that Q⁻¹ = Qᵀ, which implies that det(Q)² = 1.
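The transform formulas are easily experimented with numerically. A minimal sketch (added here for illustration, not part of the original notes; Python with NumPy assumed, with Q generated randomly):

import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                            # make it right-handed: det(Q) = 1
b = rng.standard_normal(3)

kP = rng.standard_normal(3)                       # kappa(P), a coordinate point (row array)
kP_star = kP @ Q + b                              # kappa*(P) = kappa(P)Q + b
assert np.allclose(kP, kP_star @ Q.T - b @ Q.T)   # kappa(P) = kappa*(P)Q^T - bQ^T

kQ = rng.standard_normal(3)                       # kappa(Q) of another point Q
v = kQ - kP                                       # kappa(r) of the vector r = <PQ>
v_star = (kQ @ Q + b) - kP_star                   # kappa*(Q) - kappa*(P)
assert np.allclose(v_star, Q.T @ v)               # kappa*(r) = Q^T kappa(r)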
1.4 Tangent Vectors. Vector Fields. Scalar Fields

Geometrically a tangent vector4 is simply a directed line segment −→PQ; the point P is its point
of action.

It is however easier to think of a tangent vector (especially a physical one) as a pair [P, ~r] where
P is the point of action and ~r is a vector. It is then easy to apply vector operations to tangent
vectors: just operate on the vector component ~r. If the result is a vector, it may be thought of as
a tangent vector, with the original (joint) point of action, or as just a vector without any point of
action. Moreover, in the pair formulation, a coordinate representation is simply obtained using
a coordinate function κ:

κ([P, ~r]) = (κ(P), κ(~r)).
Often in the pair [P, ~r ] we consider ~r as a physical vector operating in the point P .
4 Thus called because it often literally is a tangent.
If the point of action is clear from the context, or is irrelevant, it is often omitted and only
the vector component of the pair is used, usually in a coordinate representation. This is what
we will mostly do in the sequel.
A vector field is a function mapping a point P to a tangent vector [P, F~(P)] (note the point
of action). Mostly we denote this just by F~ . A vector field may not be defined for all points of
the geometric space, i.e., its domain of definition may be smaller.
In the coordinate representation given by the coordinate function κ we denote

r = κ(P)  and  F(r) = κ(F~(P));

thus coordinate vector fields are denoted by capital boldface letters. Note that in the coordinate
transform

r∗ = rQ + b  (i.e. κ∗(P) = κ(P)Q + b)

the vector field F = κ(F~) (that is, its representation) is transformed to the field F∗ = κ∗(F~)
given by the formula

F∗(r∗) = QᵀF((r∗ − b)Qᵀ).
A vector field may of course be defined in fixed coordinates in one way or another, and then
taken to other coordinate systems using the transform formula. On the other hand, definition
of a physico-geometric vector field cannot possibly depend on any coordinate system, the field
exists without any coordinates, and will automatically satisfy the transform formula.
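To see the transform formula at work on a coordinate-free field, consider F~(P) = the vector from P to a fixed point O. A minimal sketch (an added illustration, not part of the original notes; Python with NumPy assumed, and κ(O) = 0 chosen for simplicity):

import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                 # right-handed: det(Q) = 1
b = rng.standard_normal(3)

F = lambda r: -r                       # old representation: vector from P to O, kappa(O) = 0

# Transform formula: F*(r*) = Q^T F((r* - b)Q^T)
F_star = lambda r_star: Q.T @ F((r_star - b) @ Q.T)

r_star = rng.standard_normal(3)
# In the new system kappa*(O) = b, so the vector from P to O is directly b - r*.
assert np.allclose(F_star(r_star), b - r_star)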
A coordinate vector field is the vector-valued function of three arguments familiar from
basic courses of mathematics,

F(r) = (F₁(r), F₂(r), F₃(r))ᵀ,

with its components. Thus all operations and concepts defined for these apply: limits, continuity,
differentiability, integrals, etc.
A scalar field is a function f mapping a point P to a scalar (real number) f(P); thus scalar
fields are denoted by italic letters, usually small. In the coordinate representation we denote
r = κ(P) and write just f(r) (instead of the correct f(κ⁻¹(r))). In the coordinate transform

r∗ = rQ + b  (i.e. κ∗(P) = κ(P)Q + b)

a scalar field f is transformed to the scalar field f∗ given by the formula

f∗(r∗) = f((r∗ − b)Qᵀ).

A scalar field, too, can be defined in fixed coordinates, and then transformed to other coordinate
systems using the transform formula. But a physico-geometric scalar field exists without any
coordinate system, and will automatically satisfy the transform formula.
Again a coordinate scalar field is the real-valued function of three arguments familiar from
basic courses of mathematics. Thus all operations and concepts defined for these apply, limits,
continuity, differentiability, integrals, etc.
An important observation is that all scalar and vector products in the previous section applied to vector and scalar fields will again be fields, e.g. a scalar field times a vector field is a
vector field.
1.5 Differential Operations of Fields
Naturally, partial derivatives cannot be defined for physico-geometric fields, since they are intrinsically connected with a coordinate system. In a coordinate representation partial derivatives
can be defined, as was done in the basic courses.
In this way we get partial derivatives of a scalar field f, its derivative

f′ = (∂f/∂x, ∂f/∂y, ∂f/∂z)

and gradient

grad(f) = ∇f = f′ᵀ = (∂f/∂x, ∂f/∂y, ∂f/∂z)ᵀ,

and the derivative or Jacobian (matrix) of a vector field F = (F₁, F₂, F₃)ᵀ,

     ( F₁′ )   ( ∂F₁/∂x  ∂F₁/∂y  ∂F₁/∂z )
F′ = ( F₂′ ) = ( ∂F₂/∂x  ∂F₂/∂y  ∂F₂/∂z ) .
     ( F₃′ )   ( ∂F₃/∂x  ∂F₃/∂y  ∂F₃/∂z )

Using the chain rule5 we get the transforms of the derivatives in a coordinate transform
r∗ = rQ + b:

f∗′(r∗) = (f((r∗ − b)Qᵀ))′ = f′((r∗ − b)Qᵀ)Q

and

F∗′(r∗) = (QᵀF((r∗ − b)Qᵀ))′ = QᵀF′((r∗ − b)Qᵀ)Q.
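In a coordinate representation these derivatives are routine to compute symbolically. A minimal sketch (an added illustration, not part of the original notes; Python with SymPy assumed, the fields chosen arbitrarily):

import sympy as sp

x, y, z = sp.symbols("x y z")
f = x * y * sp.sin(z)                                # an arbitrary scalar field
grad_f = sp.Matrix([f.diff(v) for v in (x, y, z)])   # gradient, the column f'^T

F = sp.Matrix([x * y, y * z, z * x])                 # an arbitrary vector field
J = F.jacobian([x, y, z])                            # the 3x3 Jacobian matrix F'
print(grad_f.T, J)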
Despite partial derivatives depending on the coordinate system used, differentiability itself
is coordinate-free: If a field has partial derivatives in one coordinate system, it has them in any
other coordinate system, too. This is true for second order derivatives as well. And it is true for
continuity: Continuity in one coordinate system implies continuity in any other system. And
finally it is true for continuous differentiability: If a field has continuous partial derivatives (first
or second order) in one coordinate system, it has them in any other system, too. All this follows
from the transform formulas.
5 Assuming f and g differentiable, the familiar univariate chain rule gives the derivative of the composite
function:

(f(g(x)))′ = f′(g(x))g′(x).

More generally, assuming F and G continuously differentiable (and given as column arrays), we get the derivative
of the composite function as

(F(G(r)ᵀ))′ = F′(G(r)ᵀ)G′(r).

The arguments are here thought of as row arrays. The rule is valid in higher dimensions, too.
The common differential operations for fields are the gradient (nabla) of a scalar field f, and
its Laplacian

∆f = ∇ • (∇f) = ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²,

and for a vector field F = (F₁, F₂, F₃)ᵀ its divergence

div(F) = ∇ • F = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z

and curl

                  ( ∂F₃/∂y − ∂F₂/∂z )   | i  ∂/∂x  F₁ |
curl(F) = ∇ × F = ( ∂F₁/∂z − ∂F₃/∂x ) = | j  ∂/∂y  F₂ | .
                  ( ∂F₂/∂x − ∂F₁/∂y )   | k  ∂/∂z  F₃ |

(As with the cross product, the curl can thus be given as a formal determinant, to be expanded
along the first column.)
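These operations, too, are routine to compute in coordinates. A minimal sketch (an added illustration, not part of the original notes; Python with SymPy assumed, the fields chosen arbitrarily):

import sympy as sp

x, y, z = sp.symbols("x y z")
f = x**2 * y + sp.exp(z)
F = sp.Matrix([x * y, y * z, z * x])

lap_f = sum(f.diff(v, 2) for v in (x, y, z))               # Laplacian of f
div_F = sum(F[i].diff(v) for i, v in enumerate((x, y, z))) # divergence of F
curl_F = sp.Matrix([F[2].diff(y) - F[1].diff(z),           # curl of F, componentwise
                    F[0].diff(z) - F[2].diff(x),
                    F[1].diff(x) - F[0].diff(y)])
print(lap_f, div_F, curl_F.T)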
As will be verified shortly, gradient, divergence and curl are coordinate-free. Thus ∇ • F
can be interpreted as a scalar field, and, as already indicated by the column array notation,
∇f and ∇ × F as vector fields.
For the gradient coordinate-freeness is physically immediate. It will be remembered that the
direction of the gradient is the direction of fastest growth for a scalar field, and its length is this
speed of growth (given as a directional derivative). For divergence and curl the situation is not
at all as obvious.
It follows from the coordinate-freeness of the gradient that the directional derivative of a
scalar field f in the direction n (a unit vector),

∂f/∂n = n • ∇f,

is also coordinate-free and thus a scalar field.
The Laplacian may be applied to a vector field as well, as follows:

∆F = (∆F₁, ∆F₂, ∆F₃)ᵀ.

This ∆F is coordinate-free, too, and can be interpreted as a vector field.
A central property, not to be forgotten, is that all these operations are linear, in other words,
if λ1 and λ2 are constant scalars, then e.g.
∇(λ1 f + λ2 g) = λ1 ∇f + λ2 ∇g
and
∇ • (λ1 F + λ2 G) = λ1 ∇ • F + λ2 ∇ • G , etc.
The following notational expression appears often:

G • ∇ = G₁ ∂/∂x + G₂ ∂/∂y + G₃ ∂/∂z,
where G = (G1 , G2 , G3 )T is a vector field. This is interpreted as an operator applied to a scalar
field f or a vector field F = (F1 , F2 , F3 )T as follows:
(G • ∇)f = G • (∇f) = G₁ ∂f/∂x + G₂ ∂f/∂y + G₃ ∂f/∂z

and (taking F₁, F₂, F₃ to be scalar fields)

(G • ∇)F = ((G • ∇)F₁, (G • ∇)F₂, (G • ∇)F₃)ᵀ = F′G.
These are both coordinate-free and hence fields. Coordinate-freeness of (G • ∇)F follows from
the nabla rules below (or from the coordinate-freeness of F′ G).
Let us tabulate the familiar nabla-calculus rules:

(i) ∇(fg) = g∇f + f∇g

(ii) ∇(1/f) = −(1/f²)∇f

(iii) ∇ • (fG) = ∇f • G + f∇ • G

(iv) ∇ × (fG) = ∇f × G + f∇ × G

(v) ∇ • (F × G) = ∇ × F • G − F • ∇ × G

(vi) ∇ × (F × G) = (G • ∇)F − (∇ • F)G + (∇ • G)F − (F • ∇)G

(vii) ∇(F • G) = (G • ∇)F − (∇ × F) × G − (∇ × G) × F + (F • ∇)G
      In matrix notation ∇(F • G) = F′ᵀG + G′ᵀF.

(viii) (∇ × F) × G = (F′ − F′ᵀ)G

(ix) ∇ • (∇ × F) = 0

(x) ∇ × ∇f = 0

(xi) ∇ × (∇ × F) = ∇(∇ • F) − ∆F  (the so-called double-curl expansion)

(xii) ∆(fg) = f∆g + g∆f + 2∇f • ∇g
In formulas (ix), (x), (xi) we assume F and f are twice continuously differentiable. These
formulas are all symbolical identities, and can be verified by direct calculation, or e.g. using the
Maple symbolic computation program.
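Instead of Maple, the verification can also be sketched in Python with SymPy; here for rules (ix), (x) and (xi), using unspecified twice continuously differentiable components (an added illustration, not part of the original notes):

import sympy as sp

x, y, z = sp.symbols("x y z")
V = (x, y, z)

def grad(f):
    return sp.Matrix([f.diff(v) for v in V])

def div(F):
    return sum(F[i].diff(v) for i, v in enumerate(V))

def curl(F):
    return sp.Matrix([F[2].diff(y) - F[1].diff(z),
                      F[0].diff(z) - F[2].diff(x),
                      F[1].diff(x) - F[0].diff(y)])

f = sp.Function("f")(x, y, z)
F = sp.Matrix([sp.Function(n)(x, y, z) for n in ("F1", "F2", "F3")])

assert sp.simplify(div(curl(F))) == 0                        # rule (ix)
assert sp.simplify(curl(grad(f))) == sp.zeros(3, 1)          # rule (x)
lap_F = sp.Matrix([sum(F[i].diff(v, 2) for v in V) for i in range(3)])
assert sp.simplify(curl(curl(F)) - (grad(div(F)) - lap_F)) == sp.zeros(3, 1)  # rule (xi)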
Let us, as promised, verify the coordinate-freeness of the operators. In a coordinate transform

r∗ = rQ + b

we denote the nabla in the new coordinates by ∇∗. Coordinate-freeness of the basic operators
then means the following:

1. ∇∗f((r∗ − b)Qᵀ) = Qᵀ∇f(r)  (gradient)

   Subtracting b and multiplying by Qᵀ we move from the new coordinates r∗ to the old
   ones, get the value of f, and then the gradient in the new coordinates. The result must be
   the same when the gradient is obtained in the old coordinates and then transformed to the
   new ones by multiplying by Qᵀ.

2. ∇∗ • QᵀF((r∗ − b)Qᵀ) = ∇ • F(r)  (divergence)

   Subtracting b and multiplying by Qᵀ we move from the new coordinates r∗ to the old
   ones, get F, transform the result to the new coordinates by multiplying by Qᵀ, and get
   the divergence using the new coordinates. The result must remain the same when the
   divergence is obtained in the old coordinates.

3. ∇∗ × QᵀF((r∗ − b)Qᵀ) = Qᵀ∇ × F(r)  (curl)

   Subtracting b and multiplying by Qᵀ we move from the new coordinates r∗ to the old
   ones, get F, transform the result to the new coordinates by multiplying by Qᵀ, and get the
   curl using the new coordinates. The result must be the same when the curl is obtained in
   the old coordinates and then transformed to the new ones by multiplying by Qᵀ.
Theorem 1.1. Gradient, divergence, curl and Laplacian are coordinate-free. Furthermore, if
F and G are vector fields (and thus coordinate-free), then so is (G • ∇)F.

Proof. By the above,

f∗′(r∗) = (∇f(r))ᵀQ  and  F∗′(r∗) = QᵀF′(r)Q.

This immediately gives formula 1., since

∇∗f∗(r∗) = f∗′(r∗)ᵀ = Qᵀ∇f(r).

To show formula 2. we use the trace of the Jacobian. Let us recall that the trace of a square
matrix A, denoted trace(A), is the sum of its diagonal elements. A nice property of the trace6 is
that if the product matrix AB is square—whence BA is square, too—then

trace(AB) = trace(BA).

Since

trace(F′) = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z = ∇ • F,

formula 2. follows:

∇∗ • F∗(r∗) = trace(F∗′(r∗)) = trace(QᵀF′(r)Q)
            = trace(QQᵀF′(r))    (take B = Q)
            = trace(F′(r)) = ∇ • F(r).

6 Denoting A = (aᵢⱼ) (an n × m matrix), B = (bᵢⱼ) (an m × n matrix), AB = (cᵢⱼ) and BA = (dᵢⱼ), we have
trace(AB) = Σₖ cₖₖ = Σₖ Σₗ aₖₗ bₗₖ = Σₗ Σₖ bₗₖ aₖₗ = Σₗ dₗₗ = trace(BA).
To prove formula 3. we denote the columns of Q by q₁, q₂, q₃. Let us consider the first
component of the curl ∇∗ × F∗(r∗). Using the transform formula for the Jacobian, nabla formula
(viii) and the rules for the scalar triple product we get

(∇∗ × F∗(r∗))₁ = ∂F₃∗/∂y∗ − ∂F₂∗/∂z∗
              = q₃ᵀF′(r)q₂ − q₂ᵀF′(r)q₃       (because F∗′ = QᵀF′Q)
              = q₃ᵀF′(r)q₂ − q₃ᵀF′(r)ᵀq₂       (q₂ᵀF′q₃ is a scalar)
              = q₃ᵀ(F′(r) − F′(r)ᵀ)q₂          (extract the factors q₃ᵀ and q₂)
              = q₃ • (∇ × F(r)) × q₂           (formula (viii))
              = q₂ • q₃ × (∇ × F(r))           (cyclic symmetry)
              = q₂ × q₃ • ∇ × F(r)             (interchange • and ×)
              = q₁ • ∇ × F(r)                  (here q₂ × q₃ = q₁)
              = (Qᵀ∇ × F(r))₁.

Note that q₂ × q₃ = q₁ since the new coordinate system must be right-handed, too. The other
components are dealt with similarly.

Coordinate-freeness of the Laplacian follows directly from that of the gradient and divergence
for scalar fields, and for vector fields from formula (xi).

Adding formulas (vi) and (vii) on both sides we get an expression for (G • ∇)F:

(G • ∇)F = ½(∇ × (F × G) + (∇ • F)G − (∇ • G)F + ∇(F • G)
            + (∇ × F) × G + (∇ × G) × F).

All six terms on the right hand side are coordinate-free and thus vector fields. The left hand side
(G • ∇)F then also is coordinate-free and a vector field. (This is also easily deduced from the
matrix form (G • ∇)F = F′G, but the formula above is of other interest, too!)
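The invariances just proved can be spot-checked numerically on a linear field, whose Jacobian is a constant matrix A. A minimal sketch (an added illustration, not part of the original notes; Python with NumPy assumed):

import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                  # right-handed: det(Q) = 1

A = rng.standard_normal((3, 3))         # Jacobian of a linear field
A_star = Q.T @ A @ Q                    # Jacobian in the new coordinates, F*' = Q^T F' Q

def curl_vector(J):
    # The curl read off a (constant) Jacobian, cf. the definition of the curl
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

assert np.isclose(np.trace(A_star), np.trace(A))               # divergence is invariant
assert np.allclose(curl_vector(A_star), Q.T @ curl_vector(A))  # curl transforms by Q^T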
1.6 Nonstationary Scalar and Vector Fields
Physical fields naturally are often time-dependent or dynamical, that is, in the definition of the
field a time variable t must appear.
A scalar field is then of the form f (P, t) and a vector field of the form F~ (P, t). (The point
of action is omitted here, even though that, too, may be time-dependent.) In a coordinate representation these forms are respectively f (r, t) and F(r, t). Time-dependent fields are called
nonstationary, and time-independent fields are called stationary.
From the coordinate representation, interpreted as a function of the four variables x, y, z, t,
we again get the concepts continuity, differentiability, etc., familiar from basic courses, also for
the time variable t. In a coordinate transform r∗ = rQ + b the time variable is not transformed,
i.e.
f∗(r∗, t) = f((r∗ − b)Qᵀ, t)  and  F∗(r∗, t) = QᵀF((r∗ − b)Qᵀ, t).

Thus for the time derivatives we get the corresponding transform formulas

∂ⁱ/∂tⁱ f∗(r∗, t) = (∂ⁱf/∂tⁱ)((r∗ − b)Qᵀ, t)  and  ∂ⁱ/∂tⁱ F∗(r∗, t) = Qᵀ(∂ⁱF/∂tⁱ)((r∗ − b)Qᵀ, t),

which shows that they are fields.
In addition to the familiar partial derivative rules we get for the time derivatives e.g. the
following rules, which can be verified by direct calculation:
(1) ∂(F • G)/∂t = (∂F/∂t) • G + F • (∂G/∂t)

(2) ∂(F × G)/∂t = (∂F/∂t) × G + F × (∂G/∂t)

(3) ∂(fF)/∂t = (∂f/∂t)F + f(∂F/∂t)

(4) ∂(F • G × H)/∂t = (∂F/∂t) • G × H + F • (∂G/∂t) × H + F • G × (∂H/∂t)

(5) ∂(F × (G × H))/∂t = (∂F/∂t) × (G × H) + F × ((∂G/∂t) × H) + F × (G × (∂H/∂t))
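These rules are again symbolic identities; e.g. rule (2) can be verified with SymPy (an added sketch, not part of the original notes):

import sympy as sp

t = sp.symbols("t")
F = sp.Matrix([sp.Function(n)(t) for n in ("F1", "F2", "F3")])
G = sp.Matrix([sp.Function(n)(t) for n in ("G1", "G2", "G3")])

lhs = F.cross(G).diff(t)                        # d/dt (F x G)
rhs = F.diff(t).cross(G) + F.cross(G.diff(t))   # right hand side of rule (2)
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)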
Another kind of time dependence in a coordinate representation is obtained by allowing a
moving coordinate system (e.g. a rotating one, as in a carousel). If the original representation in
a fixed coordinate system is f (r) (a scalar field) or F(r) (a vector field), then at time t we have
a coordinate transform
r∗ (t) = rQ(t) + b(t)
and the representations of the fields are
f∗(r∗, t) = f((r∗(t) − b(t))Q(t)ᵀ)  and  F∗(r∗, t) = Q(t)ᵀF((r∗(t) − b(t))Q(t)ᵀ).
Note that now the fields are stationary, time-dependence is only in the coordinate representation
and is a consequence of the moving coordinate system.
Similarly for nonstationary fields f(r, t) and F(r, t) in a moving coordinate system we get
the representations

f∗(r∗, t) = f((r∗(t) − b(t))Q(t)ᵀ, t)  and  F∗(r∗, t) = Q(t)ᵀF((r∗(t) − b(t))Q(t)ᵀ, t).
Now part of the time-dependence comes from the time-dependence of the fields, part from the
moving coordinate system.
”The intuitive picture of a smooth surface becomes analytic
with the concept of a manifold. On the small scale a manifold
looks like Euclidean space, so that infinitesimal operations
like differentiation may be defined on it.”
(Walter Thirring: A Course in Mathematical Physics)
Chapter 2
MANIFOLD
2.1 Graphs of Functions
The graph of a function f : A → Rᵐ, where A ⊆ Rᵏ is an open set, is

{(r, f(r)) | r ∈ A},

a subset of Rᵏ⁺ᵐ. The graph is often denoted—using a slight abuse of notation—as follows:

s = f(r)  (r ∈ A).
Here r contains the so-called active variables and s the so-called passive variables. Above
active variables precede the passive ones in the order of components. In a graph we also allow a
situation where the variables are scattered. A graph is smooth,1 if f is continuously differentiable
in its domain of definition A. In the sequel we only deal with smooth graphs. Note that a graph
is specifically defined using a coordinate system and definitely is not coordinate-free.
A familiar graph is the graph of a real-valued univariate function f in an open interval (a, b),
i.e., the subset of R² consisting of the pairs

(x, f(x))  (a < x < b),

the curve y = f(x). Another is the graph of a real-valued bivariate function f, i.e., the surface

z = f(x, y)  ((x, y) ∈ A)

in the space R³ (see the figure below). Not all curves or surfaces are graphs, however, e.g. circles
and spheres are not (and why not?).

[Figure: the surface z = f(x, y) over a planar domain A ⊆ R², with a point (x, y) in A.]
The most common dimensions k and m are of course the ones given by physical position
and time, that is 1, 2 and 3, whence k + m is 2, 3 or 4. On the other hand, degrees of freedom
in mechanical systems etc. may lead to some very high dimensions.
As a limiting case we also allow m = 0. In Rm = R0 there is then only one element (the
so-called empty vector ()). Moreover, then Rk+m = Rk , thus all variables are active and the
graph of the function f : A → Rm is A. Open subsets of the space are thus always graphs, and
agreed to be smooth, too.
1 In some textbooks smoothness requires existence of continuous derivatives of all orders.
Similarly we allow k = 0. Then f has no arguments, and so it is constant, and the graph
consists of one point.2 Again it is agreed that such graphs are also smooth.
In what follows we will need inverse images of sets. For a function g : A → B the inverse
image of the set C is the set
g⁻¹(C) = {r | g(r) ∈ C}.
Note that this has little to do with inverse functions; indeed the function g need not have an
inverse at all, and C need not be included in B.
For a continuous function defined in an open set A the inverse image of an open set is always
open.3 This implies an important property of graphs:
Theorem 2.1. If a smooth graph of a function is intersected by an open set, then the result is
either empty or a smooth graph.
Proof. This is clear if the intersection is empty, and also if k = 0 (the intersection is a point) or
m = 0 (intersection of two open sets is open). Otherwise the intersection of the graph
s = f(r)
(r ∈ A)
and the open set C in Rk+m is the graph
s = f(r)
(r ∈ D),
where D is the inverse image of C for the continuous4 function g(r) = (r, f(r)).
2.2 Manifolds
A subset M of Rn is a k-dimensional manifold, if it is locally a smooth graph of some function
of k variables.5
”Locally” means that for every point p of M there is an open set Bp of Rn containing the
point p such that M ∩ Bp is the graph of some function fp of some k variables. For different
points p the set Bp may be quite different, the active variables chosen in a different way, always
numbering k, however, and the function fp may be very different.
The functions fp are called charts, and the set of all charts is called an atlas. Often a small
atlas is preferable.
Example. A circle of radius R centered at the origin is a 1-dimensional manifold of R², since
(see the figure below) each of its points is in an open arc delineated by either black dots or white
dots, and these arcs are smooth graphs of the functions

y = ±√(R² − x²)  and  x = ±√(R² − y²)

(atlas) in certain open intervals.
2 Here we may take A to be the whole space R⁰, an open set.
3 This in fact is a handy definition of continuity. Continuity of g in the point r₀ means that taking C to be an
arbitrarily small g(r₀)-centered open ball, there is in A a (small) r₀-centered open ball which g maps to C.
4 It is not exactly difficult to see that if f is continuous in the point r₀, then so is g, because

‖g(r) − g(r₀)‖² = ‖r − r₀‖² + ‖f(r) − f(r₀)‖².

5 There are several definitions of manifolds of different types in the literature. Ours is also used e.g. in Hubbard
& Hubbard and Nikolsky & Volosov. Manifolds in this sense are also often more specifically called ”smooth
manifolds” or ”differentiable manifolds”. With the same underlying idea so-called abstract manifolds can be
defined.
In a similar fashion the sphere x² + y² + z² = R² is seen to be a 2-dimensional manifold of R³.
Locally it is a smooth graph of one of the six functions

x = ±√(R² − y² − z²) ,  y = ±√(R² − x² − z²)  and  z = ±√(R² − x² − y²)

(atlas) in properly chosen open sets.
Of course, each smooth graph itself is a manifold; in particular each open subset of Rⁿ is an
n-dimensional manifold of Rⁿ, and each single point is a 0-dimensional manifold. If a space curve is
a smooth graph, say of the form
(y, z) = (f₁(x), f₂(x))  (a < x < b),
where f1 and f2 are continuously differentiable, then it will be a 1-dimensional manifold of R3 .
Also the surface
z = f (x, y) ((x, y) ∈ A)
is a manifold of R3 if f is continuously differentiable. On the other hand, e.g. the graph of the
absolute value function y = |x| is not smooth and therefore not a manifold of R2 .
A manifold can always be restricted to be more local. As an immediate consequence of
Theorem 2.1 we get
Theorem 2.2. If a k-dimensional manifold of Rn is intersected by an open set, then the result
is either empty or a k-dimensional manifold.
Note. Why do we need manifolds? The reason is that there is an unbelievable variety of loosely
defined curves and surfaces, and there does not seem to be any easy general global method to
deal with them. There are continuous curves which fill a square or a cube, or which intersect
themselves in all their points, continuous surfaces having normals in none of their points, etc.
The only fairly easy way to grasp this phenomenon is to localize and restrict the concepts
sufficiently far, at the same time preserving applicability as much as possible.
Finding and proving global results is then a highly demanding and challenging area of
algebro-topological mathematics in which many Fields Medals have been awarded.
2.3 Manifolds as Loci
One way to define a manifold is to use loci. A locus is simply a set of points satisfying some
given conditions. For instance, the P -centered circle of radius R is the locus of points having
distance R from P . As was noted, it is a manifold.
In general a locus is determined via a coordinate representation, and the conditions are given
as mathematical equations. A condition then is of the form
F(r, s) = 0,
where r has dimension k, s has dimension n − k, and F is an (n − k)-dimensional function
of n variables. The locus of points satisfying the condition is then the set of points (r, s) in
Rn determined as solutions of the equation, solving for s. As indicated by the notation used,
r is purported to be active and s passive. Even though here active variables appear before the
passive ones in the component order, active variables may be scattered, roles of variables may
differ locally, actives changing to passives, etc.
Example. A circle and a sphere are loci of this kind when we set the conditions
F (x, y) = R2 − x2 − y 2 = 0
and
F (x, y, z) = R2 − x2 − y 2 − z 2 = 0
(centered in the origin and having radius R). In the circle one of the variables is always active,
in the sphere two of the three variables.
Not all loci are manifolds. For instance, the locus of points of R2 determined by the condition
y − |x| = 0
is not, and neither is the locus of points satisfying the condition
y 2 − x2 = 0.
The former is not smooth in the origin, and the latter is not a graph of any single function in the
origin (but rather of two functions: y = ±x). Actually the condition
(y − x)2 = 0
does not determine a proper manifold either. The locus is the line y = x, but counted twice!
So, in the equation F(r, s) = 0 surely F then should be continuously differentiable, and
somehow uniqueness of solution should be ensured, too, at least locally. In classical analysis
there is a result really tailor-made for this, the so-called Implicit Function Theorem, long known
and useful in many contexts. Here it is used to make the transition from local loci6 to graphs. It
should be mentioned that in the literature there are many versions of the theorem,7 we choose
just one of them.
Implicit Function Theorem. Assume that the function F : S → Rn−k , where 0 ≤ k < n,
satisfies the following conditions:
1. The domain of definition S is an open subset of Rn .
2. F is continuously differentiable.
3. F′ is of full rank in the point p₀, i.e., the rows of F′(p₀) are linearly independent (whence
   some n − k columns are also linearly independent).

4. F(p₀) = 0 and the n − k columns of F′(p₀) corresponding to the variables in s are
   linearly independent.

Denote by r the variables other than the ones in s. By changing the order of variables, if
necessary, we may assume that p = (r, s), and especially p₀ = (r₀, s₀). Denote further the
derivative of F with respect to the variables in r by F′r, and with respect to the variables in s
by F′s. (Thus F′ = (F′r  F′s) in block form.)
Then there is an open subset B of Rk containing the point r0 , and a uniquely determined
function f : B → Rn−k such that
6 This is not a pleonasm, although it might seem to be, since ’local’ could be construed to mean ’relating to a
locus’ etc.
7 See e.g. Krantz, S.G. & Parks, H.R.: The Implicit Function Theorem. History, Theory, and Applications.
Birkhäuser (2012).
(i) the graph of f is included in S,

(ii) p₀ = (r₀, f(r₀)),

(iii) F(r, f(r)) = 0 in B,

(iv) f is continuously differentiable, the matrix F′s(r, f(r)) is nonsingular in B, and

     f′(r) = −F′s(r, f(r))⁻¹ F′r(r, f(r))

     (this only for k > 0).

Proof. The proof is long and tedious, and is omitted here; see e.g. Apostol or Hubbard &
Hubbard or Nikolsky & Volosov. The case k = 0 is obvious, however. Then r₀ is the
empty vector and f is the constant function p₀. The derivative of f in item (iv) is obtained by
implicit derivation, i.e., applying the chain rule to the left hand side of the identity

F(r, f(r)) = 0

in B, and then solving the obtained equation

F′r(r, f(r)) + F′s(r, f(r))f′(r) = O

for f′(r).
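As a concrete instance of item (iv), for the circle locus F(x, y) = R² − x² − y² = 0 (with x active and s = y passive) implicit derivation gives the familiar slope −x/y. A minimal sketch (an added illustration, not part of the original notes; Python with SymPy assumed):

import sympy as sp

x, y, R = sp.symbols("x y R", positive=True)
F = R**2 - x**2 - y**2                 # locus condition of a circle

# f'(x) = -F_s(x, f(x))^(-1) F_r(x, f(x)), with F_r = dF/dx and F_s = dF/dy
f_prime = -(F.diff(y))**-1 * F.diff(x)
print(sp.simplify(f_prime))            # -> -x/y, valid where F_s = -2y is nonzero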
Using the Implicit Function Theorem (and Theorem 2.1) we immediately get a definition of
manifolds using local loci:
Corollary 2.3. If for any point p₀ in the subset M of Rⁿ there is a subset S of Rⁿ and a
function F : S → Rn−k such that the conditions 1.– 4. in the Implicit Function Theorem are
satisfied, and the locus condition F(p) = 0 defines the set M ∩ S, then M is a k-dimensional
manifold of Rn .
The converse holds true, too, i.e., all manifolds are local loci:
Theorem 2.4. If M is a k-dimensional manifold of Rn and k < n, then for each point p of M
there is a set S and a function F : S → Rn−k such that the conditions 1.— 4. of the Implicit
Function Theorem are satisfied and the locus condition F(p) = 0 defines the set M ∩ S.
Proof. Let us just see the case k > 0. (The case where k = 0 is similar—really a special case.)
If M is a k-dimensional manifold of Rn , then locally in some open set containing the point p0
it is the graph of some continuously differentiable function f
s = f(r)
(r ∈ A)
for some choice of the k variables of r (the active variables). Reordering, if necessary, we may
assume that the active variables precede the passive ones.
Choose now the set S to be the Cartesian product A × Rⁿ⁻ᵏ, i.e.

S = {(r, s) | r ∈ A and s ∈ Rⁿ⁻ᵏ},
and F to be the function
F(r, s) = s − f(r).
Then S is an open subset8 of Rⁿ and F is continuously differentiable in S. Moreover,

F′ = (−f′  Iₙ₋ₖ)

is then of full rank (Iₙ₋ₖ is the (n − k) × (n − k) identity matrix).
Excluding the n-dimensional manifolds, manifolds of Rn are thus exactly all sets which are
local loci. In particular conditions of the form
G(p) = c
or G(p) − c = 0,
where c is a constant, define a manifold (with the given assumptions), the so-called level manifold of G.
Representation of a manifold using loci is often called an implicit representation, and the
representation using local graphs of functions—as in the original definition—is called an
explicit representation.9
Example. 2-dimensional manifolds in R³ are smooth surfaces. Locally such a surface is defined
as the set determined by a condition

F(x, y, z) = 0,

where in the points of interest

F′ = (∂F/∂x, ∂F/∂y, ∂F/∂z) ≠ 0.

In particular surfaces determined by conditions of the form G(x, y, z) = c, where c is constant,
are level surfaces of G.

For such surfaces it is often quite easy to check whether or not a point p₀ = (x₀, y₀, z₀) is on
the surface: just calculate (locally) F(x₀, y₀, z₀) and check whether or not it is = 0. Of course,
even this could turn out to be difficult.
Example. 1-dimensional manifolds in R³ are smooth space curves. Locally a curve is the locus
of points satisfying a condition

F(x, y, z) = 0,  i.e.  F₁(x, y, z) = 0 and F₂(x, y, z) = 0,

where on the curve the derivative matrix

F′ = ( F₁′ )
     ( F₂′ )

is of full rank, i.e., its two rows are linearly independent. Locally we then have the curve as the
intersection of the two smooth surfaces

F₁(x, y, z) = 0  and  F₂(x, y, z) = 0
8 If B is the r₀-centered ball of radius R in A and s₀ ∈ Rⁿ⁻ᵏ, then the (r₀, s₀)-centered open ball of radius R
in Rⁿ is included in S, because in this ball

‖r − r₀‖² ≤ ‖r − r₀‖² + ‖s − s₀‖² = ‖(r, s) − (r₀, s₀)‖² < R²,

so that r ∈ B.
9 There is a third representation, the so-called parametric representation, see Section 2.5.
(locus conditions, cf. the previous example). It may be noted that the curve of intersection of
two smooth surfaces need not be a smooth manifold (curve); for this the full rank property is
needed.10
Example. The condition

F(x, y) = yeʸ − x = 0

defines a 1-dimensional manifold (a smooth curve, actually a graph) of R². On the other hand,

F′(x, y) = (−1, (1 + y)eʸ),

so, except for the point (−1/e, −1), the corresponding local graph can be taken in the form
y = f(x), where f is one of the so-called Lambert W functions11 W₀ (green upper branch) or
W₋₁ (red lower branch), see the figure below (Maple).

[Figure: the two real branches W₀ and W₋₁ of the Lambert W function.]

Since here y = xe⁻ʸ and −ln √2 > −1/e, this means that the infinite power tower

√2^(√2^(√2^···))

exists, which may seem odd because √2 > 1. Actually it has the value

W₀(−ln √2)/(−ln √2) = 2.
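The value is quickly confirmed numerically, e.g. in Python with SciPy’s lambertw (an added sketch, not part of the original notes):

import numpy as np
from scipy.special import lambertw

a = -np.log(np.sqrt(2.0))              # -ln(sqrt(2)) > -1/e, so W_0(a) is real
print(np.real(lambertw(a, 0)) / a)     # W_0(-ln sqrt(2)) / (-ln sqrt(2)) -> 2.0

x = 1.0                                # the power tower itself, iterated
for _ in range(200):
    x = np.sqrt(2.0) ** x
print(x)                               # converges to 2.0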
2.4 Mapping Manifolds. Coordinate-Freeness
Manifolds are often ”manipulated” by mapping them by some functions in one way or another.
For manifolds defined as in the previous sections it then is not usually at all easy to show that
the resulting set really is a manifold. For parametrized manifolds this is frequently easier, see
the next section.
On the other hand, inverse images come often just as handy:
Theorem 2.5. If
• M is a k-dimensional manifold of Rn included in the open set B,
• A is an open subset of Rm , where m ≥ n, and
• g : A → B is a continuously differentiable function having a derivative matrix g′ of full
rank (i.e. linearly independent rows),
then the inverse image g⁻¹(M) is an (m − n + k)-dimensional manifold of Rᵐ.
10 For instance, the intersection of the surface F₁(x, y, z) = z − xy = 0 (a saddle surface) and the surface
F₂(x, y, z) = z = 0 (the xy-plane) is not a manifold (and why not?).
11 Very useful in many cases.
Proof. The case k = n is clear. Then M is an open subset of Rⁿ and its inverse image g⁻¹(M)
is an open subset of Rᵐ, i.e. an m-dimensional manifold of Rᵐ.

Take then the case k < n. Consider an arbitrary point r₀ of g⁻¹(M), i.e. a point such
that g(r₀) ∈ M. Locally near the point p₀ = g(r₀) the manifold M can be determined as
a locus, by Theorem 2.4. More specifically, there is an open subset S of Rⁿ and a function
F : S → Rⁿ⁻ᵏ such that the conditions 1.– 4. of the Implicit Function Theorem are satisfied.

For a continuous function defined in an open set the inverse image of an open set is open,
so g⁻¹(S) is open. The locus condition

F(g(r)) = 0

determines locally some part of the set g⁻¹(M). In the open set g⁻¹(S) the composite function
F ◦ g now satisfies the conditions 1.– 4. of the Implicit Function Theorem since (via the chain
rule) its derivative

(F ◦ g)′(r₀) = F′(g(r₀))g′(r₀) = F′(p₀)g′(r₀)

is of full rank. Thus, by Corollary 2.3, g⁻¹(M) is a manifold of Rᵐ and its dimension is
m − (n − k) = m − n + k (the dimension of r minus the dimension of F).
So far we have not considered coordinate-freeness of manifolds. A manifold is always
expressly defined in some coordinate system, and we can move from one system to another
using coordinate transforms. But is a manifold in one coordinate system also a manifold in any
other, and does the dimension then remain the same?

As a consequence of Theorem 2.5 the answer is positive. To see this, take a coordinate
transform

r∗ = rQ + b,

and choose m = n and

g(r∗) = (r∗ − b)Qᵀ

in the theorem. Then a manifold M in the coordinates r∗ is the inverse image of the manifold in
the coordinates r, and thus truly a manifold. Dimension is preserved as well. Being a manifold in
one coordinate system guarantees being a manifold in any other coordinates. ”Manifoldness” is
a coordinate-free property.
2.5 Parametrized Manifolds
Whenever a manifold can be parametrized it will be in many ways easier to handle.12 For
instance, defining and dealing with integrals over manifolds then becomes considerably simpler.
A parametrization13 of a k-dimensional manifold M of Rn consists of an open subset U of
Rk (the so-called parameter domain), and a continuously differentiable bijective function
γ:U →M
having a derivative matrix γ ′ of full rank. Since the derivative γ ′ is an n×k matrix and k ≤ n, it
is the columns that are linearly independent. These columns are usually interpreted as vectors.
(It is naturally assumed here that k > 0.)
12 In many textbooks manifolds are indeed defined using local parametrizations, see e.g. Spivak or O’Neill.
This usually requires so-called transition rules to make sure that the chart functions are coherent. Our definition,
too, is a kind of local parametrization, but not a general one, and not requiring any transition rules.
13 This is often called a smooth parametrization.
Evidently, if a manifold is the graph of some function, i.e.

s = f(r)  (r ∈ A),

it is parametrized: we just take

U = A  and  γ(r) = (r, f(r)).

Also an n-dimensional manifold of Rⁿ, i.e. an open subset A, is parametrized in a natural way:
we take A itself as the parameter domain and the identity function as the function γ.
Example. A circle Y : x² + y² = R² is a 1-dimensional manifold of R² which cannot be
parametrized. To show this, we assume the contrary, i.e., that Y actually can be parametrized,
and derive a contradiction. The parameter domain U is then an open subset of the real line,
that is, it consists of disjoint open intervals. Consider one of these intervals, say (a, b) (where
we may have a = −∞ and/or b = ∞). Now, when a point u in the interval (a, b) moves to
the left towards a, the corresponding point γ(u) on the circle moves along the circumference in
either direction. It cannot stop or turn back because γ is a bijection and γ′ has full rank. Thus
also the limiting point

p = lim_{u→a+} γ(u)

is in the circle Y.

But we cannot have in U a point v such that p = γ(v). Such a point would be in one of the
open intervals in U, and—as above—we see that a sufficiently short open interval (v − ε, v + ε) is
mapped to an open arc of Y containing the point p. This would mean that γ cannot be bijective,
a contradiction.

On the other hand, if we remove one of the points of Y, say (R, 0), it still remains a manifold
(why?) and can be parametrized using the familiar polar angle φ:

(x, y) = γ(φ) = (R cos φ, R sin φ)  (0 < φ < 2π).

Now γ(φ) = (R cos φ, R sin φ) is a continuously differentiable bijection, and

γ′(φ) = (−R sin φ, R cos φ)ᵀ

is always ≠ 0.
Also the inverse parametrization is easily obtained:

φ = atan(x, y),

where atan is the bivariate arctangent, i.e., an arctangent giving correctly the quadrant and
also values on the coordinate axes, that is,

atan(x, y) = arctan(y/x)         for x > 0 and y ≥ 0,
           = 2π + arctan(y/x)    for x > 0 and y < 0,
           = π + arctan(y/x)     for x < 0,
           = π/2                 for x = 0 and y > 0,
           = 3π/2                for x = 0 and y < 0.
CHAPTER 2. MANIFOLD
25
It can be found, in one form or in another, in
just about all mathematical programs. atan
is actually continuous and also continuously
differentiable—excluding the negative x-axis,
see the figure on the right (by Maple)—since
(verify!)
10
8
6
∂atan(x, y)
y
=− 2
∂x
x + y2
and
4
–2
x
∂atan(x, y)
= 2
.
∂y
x + y2
y=0,
–1
–1
0
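In programming practice atan is usually built on the two-argument arctangent; e.g. in Python (an added sketch; the helper name atan_bivariate is ours, not a standard function):

import math

def atan_bivariate(x, y):
    # Polar angle in [0, 2*pi), matching the piecewise definition above.
    # math.atan2 returns values in (-pi, pi], so negative values are shifted by 2*pi.
    phi = math.atan2(y, x)
    return phi if phi >= 0 else phi + 2 * math.pi

print(atan_bivariate(1, 1))    # pi/4
print(atan_bivariate(-1, 0))   # pi
print(atan_bivariate(0, -2))   # 3*pi/2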
Example. A sphere x² + y² + z² = R² cannot be parametrized either. However, if, say, the
half great circle

x² + z² = R² ,  x ≥ 0

is removed, then a manifold is obtained which can be parametrized by the familiar spherical
coordinates as
(x, y, z) = γ(θ, φ) = (R sin θ cos φ, R sin θ sin φ, R cos θ)
and the parameter domain is the open rectangle
U : 0 < θ < π , 0 < φ < 2π.
The derivative matrix

           ( R cos θ cos φ   −R sin θ sin φ )
γ′(θ, φ) = ( R cos θ sin φ    R sin θ cos φ )
           ( −R sin θ         0             )

is of full rank, and the inverse parametrization is again easy to find:

θ = arccos(z/R) ,  φ = atan(x, y).
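The full-rank property of γ′ can be checked symbolically; the cross product of its two columns is at the same time a normal vector of the sphere. A minimal sketch (an added illustration, not part of the original notes; Python with SymPy assumed):

import sympy as sp

theta, phi, R = sp.symbols("theta phi R", positive=True)
gamma = sp.Matrix([R * sp.sin(theta) * sp.cos(phi),
                   R * sp.sin(theta) * sp.sin(phi),
                   R * sp.cos(theta)])
J = gamma.jacobian([theta, phi])       # the 3x2 derivative matrix gamma'

n = J[:, 0].cross(J[:, 1])             # cross product of the columns: a normal vector
print(sp.simplify(n.norm()))           # R^2 |sin(theta)|, nonzero for 0 < theta < pi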
Example. Parametrization of a general smooth space curve is of the form
r = γ(u) (u ∈ U),
where U is an open interval. (The parameter domain might actually consist of several open
intervals but then the curve can be divided similarly.) Here γ is continuously differentiable and
γ ′ 6= 0, which guarantees that the curve has a tangent everywhere.
Example. Parametrization of a general smooth surface is of the form
r = γ(u)
(u ∈ U),
where U is an open subset of R2 . Here γ is continuously differentiable and γ ′ has full rank,
i.e., its two columns are linearly independent. This guarantees that the surface has everywhere
a normal (the cross product of the two columns of γ ′ (u)).
Parametrization is at the same time more restrictive and more extensive than our earlier definitions of manifolds: not all manifolds can be parametrized, and not all parametrizations define manifolds. On the other hand, as noted, parametrization makes it easier to deal with manifolds. In integration the restrictions of parametrizations can mostly be neglected, since they do not affect the values of the integrals, as we will see later. Let us note, however, that if a set is parametrized then at least it is a manifold in a certain localized fashion:
Theorem 2.6. If M ⊆ Rn , U is an open subset of Rk , u0 ∈ U, and there is a continuously
differentiable bijective function γ : U → M with a derivative γ ′ of full rank, then there is an
open subset V of U such that u0 ∈ V and γ(V) is a k-dimensional manifold of Rn .
Proof. Consider a point
p0 = γ(u0 )
of M. Then the columns of γ ′ (u0 ) are linearly independent, and thus some k rows are linearly
independent, too. Reordering, if necessary, we may assume that these rows are the first k rows
of γ ′ (u0 ).
Let us first consider the case k < n. For a general point p = (r, s) of M, r contains the k first components. We denote further by γ1 the function consisting of the first k components of γ, and r0 = γ1(u0). Similarly, taking the last n − k components of γ we get the function γ2. The function

    F(u, r) = r − γ1(u)

defined in the open set S = U × Rᵏ (cf. the proof of Theorem 2.4) then satisfies the conditions 1.–4. of the Implicit Function Theorem. Thus there is a continuously differentiable function g : B → Rᵏ, defined in an open set B, whose graph u = g(r) is included in S, such that

    r = γ1(g(r)).

The chart function f in the point p0 is then obtained by taking

    f(r) = γ2(g(r)).

Finally we choose V = γ1⁻¹(B), an open set. (And where, if anywhere, do we need bijectivity of γ?)
The case k = n is similar. The function

    F(u, p) = p − γ(u)

defined in the open set S = U × Rⁿ then satisfies conditions 1.–4. of the Implicit Function Theorem. Hence there is an open set B, containing the point p0, and a continuously differentiable function g : B → Rⁿ, whose graph u = g(p) is included in S, such that

    p = γ(g(p)).

Thus B ⊆ M. Again we choose V = γ⁻¹(B), an open set.
Parametrization of a manifold M by

    γ : U → M

may be exchanged for another, by a so-called reparametrization, as follows. Take a new parameter domain
V ⊆ Rk , and a continuously differentiable bijective function
η : V → U,
such that the derivative η′ is nonsingular. The new parametrization is then by the composite function γ ∘ η, that is, as

    r = γ(η(v))   (v ∈ V).

This really is a parametrization since, by the chain rule, γ ∘ η is continuously differentiable and its derivative

    γ′(η(v)) η′(v)

is of full rank. n-dimensional manifolds of Rⁿ, i.e. open subsets, often benefit from reparametrization.
Example. 3-dimensional manifolds of R3 , i.e. open subsets or ’solids’, are often given using
parametrizations other than the trivial one by the identity function.
Familiar parametrizations of this type are those using cylindrical or spherical coordinates.
For instance, the slice of a ball below (an open set) can be parametrized by spherical coordinates as
r = (x, y, z) = γ(ρ, θ, φ) = (ρ sin θ cos φ, ρ sin θ sin φ, ρ cos θ),
where the parameter domain is the open rectangular prism
V : 0 < ρ < R , 0 < θ < π , 0 < φ < α.
(Figure: the slice of the ball, with opening angle α, and its rectangular parameter domain.)
Different parametrizations of a manifold may come separately, without any explicit reparametrizations. Even then, in principle, the reparametrizations are there:
Theorem 2.7. Different parametrizations of a manifold can always be obtained from each other
by reparametrizations.
Proof. Consider a situation where the k-dimensional manifold M of Rⁿ has the parametrizations

    r = γ1(u)   (u ∈ U)   and   r = γ2(v)   (v ∈ V).

An obvious candidate for the reparametrization is the one using η = γ1⁻¹ ∘ γ2, as

    u = γ1⁻¹(γ2(v)).

This function η is bijective, we just must show that it is continuously differentiable. For this let us first define

    G(u, v) = γ1(u) − γ2(v).
Then the columns of the derivative G′ corresponding to the variable u, i.e. γ1′, are linearly independent. Consider then a point

    r0 = γ1(u0) = γ2(v0)

of M. Since the k columns of γ1′(u0) are linearly independent, some k rows of γ1′(u0) are also linearly independent. Picking from G the corresponding k components we get the function F. In the open set S = U × V the function F satisfies the conditions 1.–4. of the Implicit Function Theorem, and the obtained function f clearly is η. Since the point v0 was an arbitrary point of V, η is continuously differentiable. On the other hand, η′ is also nonsingular because γ2 = γ1 ∘ η, and by the chain rule

    γ2′(v) = γ1′(η(v)) η′(v).

If η′ were singular in some point of V, then γ2′ would not have full rank there.
A parametrized manifold may be localized also in the parameter domain: take an open subset U′ of U and interpret it as a new parameter domain. The set thus parametrized is again a manifold, and it has a parameter representation (cf. Theorem 2.6).

This in fact also leads to a generalization of manifolds. Just parametrize a set N as above using a parameter domain U and a function γ : U → N which is continuously differentiable and whose derivative γ′ is of full rank, but do not require that γ is bijective. If now for each point p of N there is an open subset Up of U such that

• p = γ(u) for some point u of Up, and

• γ is bijective when restricted to Up,

then as in Theorem 2.6, the parametrization defines a manifold when restricted to Up. The set N itself then need not be a manifold. Generalized manifolds of this kind are called trajectory manifolds. A trajectory manifold can be reparametrized exactly as a usual manifold.
Example. The subset of R² parametrized by the polar angle φ given by

    (x, y) = γ(φ) = (r(φ) cos φ, r(φ) sin φ)   (0 < φ < 2π),

where

    r(φ) = e^(cos φ) − 2 cos 4φ + sin⁵(φ/12),

is a complicated plane curve, but not a manifold since it passes through the origin six times, see the left figure below (Maple). It is however a 1-dimensional trajectory manifold. The figure on the right is the hodograph

    (x, y) = γ′(φ)^T   (0 < φ < 2π).

It shows that γ′ is of full rank (the curve does not pass through the origin). It also indicates that the parameter value φ = 0 (or φ = 2π) could have been included locally. This would not destroy smoothness of the curve. This is common in polar parametrizations. It should also be remembered that even though atan is discontinuous, sin(atan(x, y)) and cos(atan(x, y)) are continuously differentiable. So the parameter interval could have been, say, 0 < φ < 4π, containing the parameter value φ = 2π. Note also that the polar parametrization allows even negative values of the radius!
(Figures: the curve and its hodograph; Maple.)

Many more self-intersections appear when the curve is drawn for the "full" parameter interval 0 < φ < 24π, outside of which it starts to repeat itself.
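The full-rank condition is easy to test numerically (a Python/NumPy sketch, an added illustration): on a dense parameter grid the hodograph stays well away from the origin, i.e. γ′(φ) never vanishes there.

import numpy as np

def r(phi):
    return np.exp(np.cos(phi)) - 2 * np.cos(4 * phi) + np.sin(phi / 12)**5

def gamma(phi):
    return np.array([r(phi) * np.cos(phi), r(phi) * np.sin(phi)])

def dgamma(phi, h=1e-6):
    # Central-difference approximation of the derivative gamma'(phi).
    return (gamma(phi + h) - gamma(phi - h)) / (2 * h)

phis = np.linspace(1e-3, 2 * np.pi - 1e-3, 10000)
print(min(np.linalg.norm(dgamma(p)) for p in phis))   # clearly positive on the grid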
2.6 Tangent Spaces
Locally, near a point p0 of Rn , a k-dimensional manifold M is a graph of some k-variate
function f, i.e.
s = f(r) (r ∈ A)
in Rn , and in particular
    p0 = (r0, f(r0)).
Geometrically the tangent space of M in the point p0 consists of all tangent vectors touching
M in p0 . The dimensions k = n and k = 0 are dealt with separately. In the former the tangent
space consists of all vectors, in the latter of only the zero vector. In the sequel we assume that
0 < k < n.
Locally, near the point r0, f comes close to its affine approximation, i.e.

    f(r) ≈ f(r0) + (r − r0) f′(r0)^T.
The affine approximation of a function in a point gives correctly the values of the function and
its derivative in this point. Let us denote
    g(r) = f(r0) + (r − r0) f′(r0)^T
(whence g′ (r0 ) = f ′ (r0 )). Then s = g(r) is a graph which locally touches the manifold M in
the point p0 . Geometrically this graph is part of a k-dimensional hyperplane, or a plane or a
line in lower dimensions.
The tangent space of M in the point p0, denoted by Tp0(M), consists of all (tangent) vectors such that the initial point of their representative directed line segments is p0 and the terminal point is in the graph s = g(r), i.e., it consists of exactly all vectors

    (r, g(r))^T − (r0, f(r0))^T = (r − r0, (r − r0) f′(r0)^T)^T

                                  ( (r − r0)^T        )   (   Ik   )
                                = (                   ) = (        ) (r − r0)^T ,
                                  ( f′(r0) (r − r0)^T )   ( f′(r0) )

where r ∈ Rᵏ and Ik is the k × k identity matrix. In particular r = r0 is included, so the zero vector always is in a tangent space.
In a sense the above tangent space is thus the graph of the vector-valued function
T(h) = f ′ (r0 )h
of the vector variable h. Apparently T is a linear function and f ′ (r0 ) is the corresponding
matrix.
Note that replacing the graph s = f(r) by another graph t = h(u) (as needs to be done when
moving from one chart to another) simply corresponds to a local reparametrization u = η(r)
and change of basis of the tangent space using the matrix η ′ (r0 ) (cf. Theorem 2.7 and its proof).
The space itself remains the same, of course.
Example. A smooth space curve or a 1-dimensional manifold M of R³ is locally a graph

    (y, z) = f(x) = (f1(x), f2(x))

(or one of the other two alternatives). The tangent space of M in the point p0 = (x0, y0, z0), where (y0, z0) = f(x0), consists of exactly all vectors

    (     h      )
    ( f1′(x0) h  )   (h ∈ R).
    ( f2′(x0) h  )

Geometrically the vectors are directed along the line r = p0 + tv (t ∈ R), where

    v = (1, f1′(x0), f2′(x0)).

(Figure: space curve and tangent vector at p0.)
Example. A smooth surface in R³ is a 2-dimensional manifold M. Locally M is the graph z = f(x, y) (or then one of the other two alternatives). The tangent space of M in the point p0 = (x0, y0, z0), where z0 = f(x0, y0), consists of exactly all vectors

    (       1                 0        )
    (       0                 1        ) ( h1 )   ((h1, h2) ∈ R²).
    ( ∂f(x0, y0)/∂x    ∂f(x0, y0)/∂y   ) ( h2 )

Geometrically the vectors are thus in the plane

    r = p0 + t1 v1 + t2 v2   (t1, t2 ∈ R),

where

    v1 = (1, 0, ∂f(x0, y0)/∂x)   and   v2 = (0, 1, ∂f(x0, y0)/∂y).

(Figure: tangent plane at p0 spanned by v1 and v2.)
What about when a manifold M is given by local loci, say locally by the condition

    F(r, s) = 0

(assuming a proper order of variables)? According to Corollary 2.3, then M is given locally near the point p0 = (r0, s0) also as a graph s = f(r) and (cf. the Implicit Function Theorem)

    f′(r0) = −F′s(r0, f(r0))⁻¹ F′r(r0, f(r0)),

where

    F′ = ( F′r  F′s ).

The tangent space Tp0(M) consists of the vectors

    (   Ik   )
    ( f′(r0) ) h.

But these are exactly all vectors

    m = ( h )
        ( k )

satisfying the condition

    F′s(r0, f(r0)) k + F′r(r0, f(r0)) h = 0,

i.e.

    F′(p0) m = 0.
So we get
Theorem 2.8. If a manifold M is locally near the point p0 given as the locus defined by the
condition F(p) = 0 (with the assumptions of Corollary 2.3), then the tangent space Tp0 (M) is
the null space of the matrix F′ (p0 ).
In practice it of course suffices to find a basis for the tangent space (or null space). Coordinate-freeness of tangent spaces has not been verified yet, but as a consequence of the theorem,
Corollary 2.9. Tangent spaces of manifolds are coordinate-free.
Proof. This follows because a null space is coordinate-free, and a manifold can be given as a local locus (Theorem 2.4). More specifically, if we take a coordinate transform p* = pQ + b and denote

    F*(p*) = F((p* − b)Q^T)   and   m* = Q^T m,

then (cf. Section 1.5)

    F*′(p0*) m* = F′((p0* − b)Q^T) Q Q^T m = F′(p0) m.
Moreover, 0-dimensional and n-dimensional manifolds of Rn (points and open sets) of
course are coordinate-free.
Example. The tangent space of a circle

    F(x, y) = x² + y² − R² = 0

in the point (x0, y0) is then the null space of the 1 × 2 matrix

    F′(x0, y0) = (2x0, 2y0).

It consists of the vectors (h, k)^T satisfying

    2x0 h + 2y0 k = 0

(cf. the equation of a line).

Similarly the tangent space of a sphere

    F(x, y, z) = x² + y² + z² − R² = 0

in the point (x0, y0, z0) is the null space of the 1 × 3 matrix

    F′(x0, y0, z0) = (2x0, 2y0, 2z0).

It consists of the vectors (h, k, l)^T satisfying

    2x0 h + 2y0 k + 2z0 l = 0

(cf. the equation of a plane).
Tangent spaces of second degree curves and surfaces are sometimes called polar spaces.
Example. In general, the tangent space of a smooth surface (manifold) M in R³ defined implicitly by the equation

    F(x, y, z) = 0

in the point p0 = (x0, y0, z0) is the null space of the 1 × 3 matrix F′(p0), i.e., the set of vectors m = (h, k, l)^T satisfying

    F′(p0) • m = ∂F(p0)/∂x h + ∂F(p0)/∂y k + ∂F(p0)/∂z l = 0.
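Numerically a basis of the tangent space can be read off from a null space computation (a Python/NumPy sketch added as an illustration of Theorem 2.8), here for the sphere F(x, y, z) = x² + y² + z² − R²:

import numpy as np

def null_space_basis(dF):
    # Orthonormal null space basis of the matrix dF, from the right
    # singular vectors belonging to (numerically) zero singular values.
    dF = np.atleast_2d(np.asarray(dF, dtype=float))
    _, s, vt = np.linalg.svd(dF)
    rank = int(np.sum(s > 1e-12 * s.max()))
    return vt[rank:].T                 # columns span the null space

p0 = np.array([1.0, 2.0, 2.0])         # a point on the sphere of radius 3
dF = 2 * p0                            # F'(p0) = (2 x0, 2 y0, 2 z0)
T = null_space_basis(dF)
print(T.shape)                         # (3, 2): the tangent space is 2-dimensional
print(np.abs(dF @ T).max())            # approx 0: F'(p0) m = 0 for the basis vectors m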
As a further consequence of Theorem 2.8 we see that if a manifold is parametrized, then its
tangent space can be parametrized, too.
Corollary 2.10. If the k-dimensional manifold M of Rn has the parametrization γ : U → M
and p0 = γ(u0 ), then the tangent space Tp0 (M) consists of exactly all vectors of the form
γ ′ (u0 )v
(v ∈ Rk ),
that is, Tp0 (M) is the column space of γ ′ (u0 ).
Proof. Locally near the point p0 the manifold M can be given as a locus determined by some
suitable condition F(p) = 0. Thus the equation
    F(γ(u)) = 0
is an identity valid in a neighborhood of the parameter value u0 . Applying the chain rule we get
another identity
    F′(γ(u)) γ′(u) = O,
where O is a zero matrix of appropriate dimensions. Substituting u = u0 we get the equation
F′ (p0 )γ ′ (u0 ) = O,
showing that columns of γ ′ (u0 ) are in the null space of F′ (p0 ).
On the other hand, since the dimension of the null space is k and the k columns of γ ′ (u0 )
are linearly independent, the columns of γ ′ (u0 ) span the tangent space Tp0 (M).
Example. If the parametrization of a smooth space curve C (a 1-dimensional manifold of R3 )
is
r = γ(u) (u ∈ U),
then its tangent space Tr0 (C) in the point r0 = γ(u0 ) consists of exactly all vectors
hγ ′ (u0 )
(h ∈ R).
Example. If the parametrization of a smooth surface of R³ (a 2-dimensional manifold) is

    r = γ(u)   (u ∈ U),

then its tangent space in the point r0 = γ(u0) consists of exactly all vectors

               ( ∂γ1(u0)/∂u1   ∂γ1(u0)/∂u2 )
    γ′(u0) h = ( ∂γ2(u0)/∂u1   ∂γ2(u0)/∂u2 ) ( h1 ) = h1 ∂γ(u0)/∂u1 + h2 ∂γ(u0)/∂u2   (h ∈ R²).
               ( ∂γ3(u0)/∂u1   ∂γ3(u0)/∂u2 ) ( h2 )
2.7 Normal Spaces

Geometrically the normal space of a k-dimensional manifold M of Rⁿ in the point p0, denoted Np0(M), consists of exactly all (tangent) vectors perpendicular to all vectors of the tangent space. In other words, the normal space is the orthogonal complement of the tangent space. Again the cases k = n and k = 0 are special and are omitted in the sequel. In the former the normal space consists of the zero vector only, and in the latter of all vectors. Vectors in a normal space are called normals or normal vectors.

Basic properties of a normal space Np0(M) follow fairly directly from those of the tangent space. We just list them here. From basic courses of mathematics we remember that the null space of a matrix is the orthogonal complement of the column space of its transpose, and that the column space is the orthogonal complement of the null space of its transpose.
• If the k-dimensional manifold M of Rⁿ is near the point p0 locally a graph s = f(r), then its normal space Np0(M) consists of exactly all vectors

    ( −f′(r0)^T )
    (   In−k    ) k   (k ∈ Rⁿ⁻ᵏ).

• If the k-dimensional manifold M of Rⁿ is near the point p0 locally given as a locus determined by the condition F(p) = 0 (with the assumptions of Corollary 2.3), then its normal space Np0(M) is the column space of the matrix F′(p0)^T, i.e., it consists of exactly all vectors

    F′(p0)^T k   (k ∈ Rⁿ⁻ᵏ).

• If the k-dimensional manifold M of Rⁿ is parametrized by γ : U → M and p0 = γ(u0), then the normal space Np0(M) is the null space of γ′(u0)^T, i.e., it consists of exactly all vectors n satisfying

    γ′(u0)^T n = 0.

• A normal space is coordinate-free.

• The dimension of a normal space of a k-dimensional manifold of Rⁿ is always n − k.
Example. A smooth space curve, i.e. a 1-dimensional manifold M of R³, is locally a graph

    (y, z) = f(x) = (f1(x), f2(x))

(or one of the other two alternatives). The normal space of M in the point p0 = (x0, y0, z0), where (y0, z0) = f(x0), then consists of exactly all vectors

    ( −f1′(x0)   −f2′(x0) )
    (    1           0    ) ( k1 )   (k1, k2 ∈ R).
    (    0           1    ) ( k2 )

Geometrically these vectors are in the plane

    (x − x0) + f1′(x0)(y − y0) + f2′(x0)(z − z0) = 0.

(Figure: space curve and its normal plane at p0.)
Some normals are more interesting than others, dealing with curvature, torsion, and the plane
most accurately containing the curve near p0 .
Example. A smooth surface in R³ is a 2-dimensional manifold M. Locally M is a graph z = f(x, y) (or one of the other two alternatives). The normal space of M in the point p0 = (x0, y0, z0), where then z0 = f(x0, y0), consists of exactly all vectors

    ( −∂f(x0, y0)/∂x k )
    ( −∂f(x0, y0)/∂y k )   (k ∈ R).
    (         k        )

Geometrically these vectors are in the line r = p0 + tv (t ∈ R), where

    v = ( −∂f(x0, y0)/∂x , −∂f(x0, y0)/∂y , 1 ).
Example. If the parametrization of a smooth surface (a 2-dimensional manifold of R³) is

    r = γ(u)   (u ∈ U),

then its normal space in the point r0 = γ(u0) consists of exactly all vectors n = (h, k, l)^T satisfying

                 ( ∂γ1(u0)/∂u1   ∂γ2(u0)/∂u1   ∂γ3(u0)/∂u1 ) ( h )   ( 0 )
    γ′(u0)^T n = (                                          ) ( k ) = (   ) .
                 ( ∂γ1(u0)/∂u2   ∂γ2(u0)/∂u2   ∂γ3(u0)/∂u2 ) ( l )   ( 0 )

The basis vector of the null space is now obtained in the usual way using the cross product, thus the vectors are

    t ( ∂γ(u0)/∂u1 × ∂γ(u0)/∂u2 )   (t ∈ R).

(The cross product is not the zero vector because the columns of γ′(u0) are linearly independent.)

For instance, the normal space of a sphere parametrized by spherical coordinates as

    (x, y, z) = γ(θ, φ) = (R sin θ cos φ, R sin θ sin φ, R cos θ)   (0 < θ < π , 0 < φ < 2π)

in the point γ(θ0, φ0) consists of the vectors

      (  R cos θ0 cos φ0 )   ( −R sin θ0 sin φ0 )       ( R² sin²θ0 cos φ0 )
    t (  R cos θ0 sin φ0 ) × (  R sin θ0 cos φ0 )  =  t ( R² sin²θ0 sin φ0 )   (t ∈ R),
      ( −R sin θ0        )   (         0        )       ( R² sin θ0 cos θ0 )

i.e. the vectors

    t γ(θ0, φ0)^T   (t ∈ R),
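This computation is easy to verify numerically (a Python/NumPy sketch, an added illustration): the cross product of the columns of γ′(θ0, φ0) is parallel to γ(θ0, φ0), as claimed.

import numpy as np

R, theta0, phi0 = 1.5, 0.9, 4.0

gamma0 = np.array([R * np.sin(theta0) * np.cos(phi0),
                   R * np.sin(theta0) * np.sin(phi0),
                   R * np.cos(theta0)])
col1 = np.array([R * np.cos(theta0) * np.cos(phi0),    # d gamma / d theta
                 R * np.cos(theta0) * np.sin(phi0),
                 -R * np.sin(theta0)])
col2 = np.array([-R * np.sin(theta0) * np.sin(phi0),   # d gamma / d phi
                 R * np.sin(theta0) * np.cos(phi0),
                 0.0])

n = np.cross(col1, col2)
print(np.cross(n, gamma0))   # approx (0, 0, 0): n is parallel to gamma0
print(n / gamma0)            # the constant ratio R sin(theta0) in each component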
2.8 Manifolds and Vector Fields
To deal with the spaces we choose bases for the tangent space Tp0 (M) and the normal space
Np0 (M) of a k-dimensional manifold M of Rn in the point p0 :
t1 , . . . , tk
and n1 , . . . , nn−k ,
respectively. (These bases need not be normalized nor orthogonal.) Note that it was always
easy to get a basis for one of the spaces above.
Since the two spaces are orthogonal complements of each other, combining the bases gives
a basis of Rn . A vector field F may be projected to these spaces in the point p0 :
F(p0 ) = Ftan (p0 ) + Fnorm (p0 ).
Here Ftan (p0 ) is the flux of the field in the manifold M and Fnorm (p0 ) is the flux of the field
through the manifold M in the point p0 . (Cf. flow of a liquid through a surface.) It naturally
suffices to have one of these, the other is obtained by subtraction.
From the bases we get the matrices
T = (t1 , . . . , tk ) and
N = (n1 , . . . , nn−k )
and further the nonsingular matrices (so-called Gramians)

    G = T^T T = (Gij) ,   where Gij = ti • tj , and
    H = N^T N = (Hij) ,   where Hij = ni • nj .

Let us further denote

    a = T^T F(p0) = T^T Ftan(p0) = (a1, ..., ak)^T ,   where ai = F(p0) • ti = Ftan(p0) • ti ,

and

    b = N^T F(p0) = N^T Fnorm(p0) = (b1, ..., bn−k)^T ,   where bi = F(p0) • ni = Fnorm(p0) • ni .
Since dot product is coordinate-free, elements of these matrices and vectors are so, too.
The components being in their corresponding spaces, we can write

    Ftan(p0) = Tc   and   Fnorm(p0) = Nd

for vectors c and d. Solving these from the equations

    a = T^T T c   and   b = N^T N d

we see that the components of the field are given by the formulas¹⁴

    Ftan(p0) = T (T^T T)⁻¹ T^T F(p0) = T G⁻¹ a   and   Fnorm(p0) = N (N^T N)⁻¹ N^T F(p0) = N H⁻¹ b.
¹⁴ The least squares formulas, probably familiar from many basic courses of mathematics. Note that T^T T is nonsingular because otherwise there would be a nonzero vector c such that c^T T^T T c = 0, i.e. Tc = 0.
Example. The flux of a vector field in a smooth surface (a 2-dimensional manifold of R³) and through it is obtained by projecting the field to a (nonzero) normal vector n. Here

    N = n ,   H = n • n = ‖n‖² ,   b = F(p0) • n

and (as is familiar from basic courses)

    Fnorm(p0) = ( F(p0) • n/‖n‖ ) n/‖n‖ = (1/‖n‖²) ( F(p0) • n ) n .

(Figure: at p0 the field F splits into Ftan in the tangent plane and Fnorm along the normal vector.)
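The least squares formulas translate directly into code (a Python/NumPy sketch added here; the point, the basis vectors and the field value are made-up sample data). The general projection is B(B^T B)⁻¹B^T F, and for a surface with normal n it reduces to the familiar formula above.

import numpy as np

def project(B, F):
    # Orthogonal projection of F onto the column space of B,
    # i.e. B (B^T B)^{-1} B^T F, the least squares formula above.
    return B @ np.linalg.solve(B.T @ B, B.T @ F)

# Tangent plane of the unit sphere at p0; the normal is n = p0.
p0 = np.array([0.0, 0.6, 0.8])
n = p0
T = np.array([[1.0, 0.0],             # two tangent vectors at p0
              [0.0, 0.8],             # as the columns of T
              [0.0, -0.6]])

F = np.array([1.0, 2.0, 3.0])         # a sample field value F(p0)
F_tan = project(T, F)
F_norm = project(n.reshape(3, 1), F)
print(F_tan + F_norm)                 # recovers F
print(((F @ n) / (n @ n)) * n)        # the same F_norm by the familiar formula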
Note. A k-dimensional manifold in Rⁿ together with its k-dimensional tangent spaces may be thought of, at least locally, as a 2k-dimensional manifold (in Rⁿ⁺ᵏ), the so-called tangent bundle, and the tangent spaces are its fibres. Similarly the manifold together with its n − k-dimensional normal spaces can be thought of as an n-dimensional manifold (in R²ⁿ⁻ᵏ), the normal bundle.

The flux of a vector field in the manifold may then be seen locally as a cross-section of the tangent bundle parametrized by the manifold, and similarly the flux through the manifold as a cross-section of the normal bundle. This kind of geometric view of vector fields (and more generally tensor fields) is popular in modern physics, but is pursued no further here. A more general concept is the fiber bundle. See e.g. ABRAHAM & MARSDEN & RATIU.
"Calculating surface area is a foolhardy enterprise; fortunately one seldom needs to know the area of a surface. Moreover, there is a simple expression for dA which suffices for theoretical considerations."

(MICHAEL SPIVAK: Calculus on Manifolds)
Chapter 3
VOLUME
3.1 Volumes of Sets
Geometrical things such as the area of a square or the volume of a cube can be defined using lengths of their sides and edges. If the objects are open sets, these are already examples of volumes of manifolds, as is the length of an open interval. A square may be situated in three-dimensional space; squares having the same length of sides are congruent and they have the same area. In the n-dimensional space the n-dimensional volume of an n-dimensional cube (or n-cube) is similar: if the edge length is h, then the volume is hⁿ. Such a cube may be situated in an even higher-dimensional space and still have the same (n-dimensional) volume. Such volumes may be thought of as geometric primitives.
A way to grasp the volume of a bounded set A ⊂ Rⁿ is the so-called Jordan measure. For this we need first two related concepts. An outer cover P of A consists of finitely many similar n-cubes in an n-dimensional grid, the union of which contains A. Adjacent n-cubes share a face, but nothing more. The family of all outer covers is denoted by Pout. An inner cover P consists of finitely many similar n-cubes in a grid, the union of which is contained in A. The family of all inner covers is denoted by Pin. (This Pin may well be empty.) Note that lengths of edges or orientations of the grid are in no way fixed, and neither is any coordinate system.

The volume of a cover P, denoted by |P|, is the sum of the volumes of its n-cubes. (And this is a geometric primitive.) The volume of the empty cover is = 0. Quite obviously, the volume of any inner cover is at most the volume of every outer cover, outer covers covering all inner covers.
(Figure: an inner cover and an outer cover of a plane set.)
The Jordan outer and inner measures of the set A are

    |A|out = inf_{P ∈ Pout} |P|   and   |A|in = sup_{P ∈ Pin} |P| ,

respectively. A set A is Jordan measurable if |A|out = |A|in, and the common value |A| is its
Jordan's measure¹. This measure is now defined to be the volume of A. Such a volume clearly is coordinate-free.
The precise same construct can be used in a k-dimensional space embedded in an n-dimensional space (where n > k). The k-dimensional space is then an affine subspace of Rⁿ, i.e., a manifold R parametrized by

    γ(u) = b + Σ_{i=1}^{k} ui vi   (u ∈ Rᵏ),

where v1, ..., vk are linearly independent. Taking b as the origin and, if needed, orthogonalizing v1, ..., vk we may embed Rᵏ in an obvious way in Rⁿ, and thus define the k-dimensional volume of a bounded subset of R. (The n-dimensional volume of these subsets in Rⁿ is = 0, as is easily seen.) Such affine subspaces are e.g. planes and lines of R³.
Note. Not all bounded subsets of Rn have a volume! There are bounded sets not having a
Jordan measure.
The inner and outer covers used in defining Jordan's inner and outer measures remind us of the n-dimensional Riemann integral, familiar from basic courses of mathematics, with its partitions and lower and upper sums. It is indeed fairly easy to see that whenever the volume exists, it can be obtained by integrating the constant 1, with improper integrals, if needed:

    |A| = ∫_A 1 dr.

An important special case is the volume of a parallelepiped of Rⁿ (of R³ in the figure, with edges a1, a2, a3).

Theorem 3.1. The volume of the parallelepiped P in Rⁿ with edges given by the vectors a1, a2, ..., an (interpreted as line segments) is

    |P| = |det(A)| ,

where A = (a1, a2, ..., an) is the matrix with columns a1, a2, ..., an.
Proof. The case is clear if a1, a2, ..., an are linearly dependent, the volume then being = 0. Let us then consider the more interesting case where a1, a2, ..., an are linearly independent. The volume is given by the integral above, with a change of variables by r = uA^T + b. A well-known formula then gives

    |P| = ∫_P 1 dr = ∫_C |det(A)| du = |det(A)| |C| ,

where C is the unit cube in Rⁿ given by 0 ≤ ui ≤ 1 (i = 1, ..., n). The volume of this cube is = 1.
¹ Also called Jordan–Peano measure or Jordan's content.
We already know volumes are coordinate-free. For the parallelepiped this is also clear because the volume can be written as

    |P| = √( det(G) ) ,

where G is the Gramian (cf. Section 2.8)

    G = A^T A = (Gij)   with   Gij = ai • aj .

Being dot products, elements of a Gramian are coordinate-free. The same formula is valid when we are dealing with the k-dimensional volume of a k-dimensional parallelepiped P as part of Rᵏ embedded as an affine subspace in Rⁿ, and P is given by the n-dimensional vectors a1, a2, ..., ak. The Gramian is here, too, a k × k matrix. Note that we need no coordinate transforms or orthogonalizations, the volume is simply given by the n-dimensional vectors. An example would be a parallelogram embedded in R³, with its sides given by 3-dimensional vectors.
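For instance (a Python/NumPy sketch, an added illustration), the area of a parallelogram embedded in R³ computed from the Gramian agrees with the cross product length:

import numpy as np

a1 = np.array([1.0, 2.0, 0.0])
a2 = np.array([0.0, 1.0, 3.0])
A = np.column_stack([a1, a2])       # 3 x 2 matrix of edge vectors

G = A.T @ A                         # the 2 x 2 Gramian, Gij = ai . aj
print(np.sqrt(np.linalg.det(G)))           # approx 6.7823
print(np.linalg.norm(np.cross(a1, a2)))    # the same area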
A bounded subset A of Rⁿ is called a null set if its volume (or Jordan's measure) is = 0. For a null set A then

    |A|out = inf_{P ∈ Pout} |P| = 0

(the empty inner cover is always available). An unbounded subset A is a null set if all its (bounded) subsets { r | r ∈ A and ‖r‖ ≤ N } (N = 1, 2, ...) are null sets.
Using the above definition it is often quite difficult to show a set is a null set. A helpful general result is

Theorem 3.2. If M is a k-dimensional manifold of Rⁿ and k < n, then bounded subsets of M are null sets of Rⁿ.

Proof. The proof is based on a tedious and somewhat complicated estimation, see e.g. HUBBARD & HUBBARD. Just about the only easy thing here is that if the volume of a bounded subset A exists, then it is = 0. Otherwise |A|in > 0 and some inner cover of A would have at least one n-dimensional cube contained in M. But near the center of the cube M is locally a graph of a function, which is not possible.
Example. Bounded subsets of smooth surfaces and curves of R3 (1- and 2-dimensional manifolds) are null sets.
Null sets—as well as the k-null-sets to be defined below—are very important for integration
since they can be included and excluded freely in the region of integration without changing
values of integrals.
Rather than the "full" n-dimensional one, it is possible to define a lower-dimensional volume for subsets of Rⁿ, but this is fairly complicated in the general case.² On the other hand, it is easy to define the k-dimensional volume of an n-cube: it is hᵏ if the edge length of the cube is h. This gives us the k-dimensional volume of an outer cover P of a subset A of Rⁿ, and then, via the infimum, the k-dimensional Jordan outer measure. The corresponding inner measure is of course always zero if A does not contain an open set, i.e. its interior A° is empty.

Thus, at least it is easy to define the k-dimensional zero volume for a subset A of Rⁿ, that is, the k-null-set of Rⁿ: it is a set having a zero k-dimensional Jordan outer measure. Again this definition is not that easy to use. Theorem 3.2 can be generalized, however, the proof then becoming even more complicated (see HUBBARD & HUBBARD):

² And has to do e.g. with fractals.
Theorem 3.3. If M is a k-dimensional manifold of Rn and k < m ≤ n, then bounded subsets
of M are m-null-sets of Rn .
For instance, bounded subsets of smooth curves of R3 (1-dimensional manifolds) are 2-null-sets
of R3 .
3.2 Volumes of Parametrized Manifolds

In general, the k-dimensional volume of a k-dimensional manifold M of Rⁿ is a difficult concept. For a parametrized manifold it is considerably easier. If the parametrization of M is

    r = γ(u)   (u ∈ U),

then

    |M|k = ∫_U √( det( γ′(u)^T γ′(u) ) ) du ,

or briefly denoted

    |M|k = ∫_M dr .
Such a volume may be infinite. Note that inside the square root there is a Gramian determinant, which is coordinate-free. The whole integral then is coordinate-free.

Comparing with the k-dimensional volume of a parallelepiped

    |P|k = √( det(A^T A) )

in the previous section, we notice that in a sense we obtain |M|k by "summing" over the points r = γ(u) of the manifold the k-dimensional volumes of parallelepipeds whose edges are given by the vectors

    ∂γ(u)/∂ui dui   (i = 1, ..., k).

Moreover, (as a directed line segment) the vector

    dui ∂γ(u)/∂ui

approximately defines the movement of a point in the manifold when the parameter ui changes a bit, the other parameters remaining unchanged.
Note. In a certain sense the definition of the volume of a parametrized manifold then is natural, but we must remember that it is just that, a definition. Though various problems concerning volumes are largely solved for parametrized manifolds—definition, computing, etc.—others remain. There are parametrizations giving a volume for a manifold even when it does not otherwise exist (e.g. as Jordan measure). On the other hand, a manifold having a volume may sometimes be given a parametrization such that the above integral does not exist. Things thus depend on the parametrization. In the sequel we tacitly assume such "pathological" cases are avoided.
It is important to check that the definition above gives the open parallelepiped P the same volume as before. The parallelepiped is a manifold and can be parametrized naturally as

    r = γ(u) = b + Σ_{i=1}^{k} ui ai^T   (U : 0 < u1, ..., uk < 1),

whence

    γ′(u) = A   and   |P|k = ∫_U √( det(A^T A) ) du = √( det(A^T A) ) .
Another important fact to verify is freeness of parametrization.

Theorem 3.4. The volume of a parametrized manifold does not depend on the parametrization.³

Proof. By Theorem 2.7 different parametrizations can be obtained from each other by reparametrization. When reparametrizing a k-dimensional manifold M originally parametrized by some γ : U → M, a new parameter domain V ⊆ Rᵏ is taken and a continuously differentiable bijective function η : V → U having a nonsingular derivative matrix η′. The new parametrization then is given by the composite function δ = γ ∘ η as

    r = γ(η(v)) = δ(v)   (v ∈ V).

Via the chain rule

    δ′(v) = γ′(η(v)) η′(v).

In the integral giving |M|k this corresponds to the change of variables u = η(v) and the corresponding transform of the region of integration from U to V. Thus we only need to check⁴ the form of the new integrand:

    |M|k = ∫_U √( det( γ′(u)^T γ′(u) ) ) du

         = ∫_V √( det( γ′(η(v))^T γ′(η(v)) ) ) |det(η′(v))| dv

         = ∫_V √( det( γ′(η(v))^T γ′(η(v)) ) det(η′(v))² ) dv

         = ∫_V √( det(η′(v)^T) det( γ′(η(v))^T γ′(η(v)) ) det(η′(v)) ) dv

         = ∫_V √( det( δ′(v)^T δ′(v) ) ) dv .
Example. The 1-dimensional volume of a smooth parametrized space curve

    C : r = γ(u)   (u ∈ U)

is its length, since

    |C|1 = ∫_U √( det( γ′(u)^T γ′(u) ) ) du = ∫_U √( γ′(u) • γ′(u) ) du = ∫_U ‖γ′(u)‖ du .

³ Remember that we tacitly assume all parametrizations do give a volume.
⁴ Recall that for square matrices det(AB) = det(A) det(B) and det(A^T) = det(A).
Example. Similarly the 2-dimensional volume of a parametrized surface

    S : r = γ(u)   (u ∈ U)

is

    |S|2 = ∫_U √( det( γ′(u)^T γ′(u) ) ) du .

This is the same as the familiar area

    ∫_U ‖ ∂γ(u)/∂u1 × ∂γ(u)/∂u2 ‖ du

since the area (2-dimensional volume) of the parallelogram in the integrand (a 2-dimensional parallelepiped) can be given as the length of the cross product.
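The formula is straightforward to evaluate numerically (a Python/NumPy sketch, an added illustration): for the spherical parametrization of Section 2.5 a midpoint-rule integration of the Gramian square root over the parameter rectangle reproduces the sphere area 4πR².

import numpy as np

R = 2.0

def integrand(theta, phi):
    # sqrt(det(gamma'(u)^T gamma'(u))) for the spherical parametrization
    # (it equals R^2 sin(theta), but is computed here from the matrix).
    dg = np.array([[ R*np.cos(theta)*np.cos(phi), -R*np.sin(theta)*np.sin(phi)],
                   [ R*np.cos(theta)*np.sin(phi),  R*np.sin(theta)*np.cos(phi)],
                   [-R*np.sin(theta),              0.0]])
    return np.sqrt(np.linalg.det(dg.T @ dg))

m = 200
thetas = (np.arange(m) + 0.5) * np.pi / m        # midpoints in theta
phis = (np.arange(m) + 0.5) * 2 * np.pi / m      # midpoints in phi
cell = (np.pi / m) * (2 * np.pi / m)
area = sum(integrand(t, p) for t in thetas for p in phis) * cell
print(area, 4 * np.pi * R**2)                    # both approx 50.2655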
Note. The k-dimensional volume of a k-dimensional trajectory manifold of Rn is defined analogously, and it, too, does not depend on the parametrization. Of course, the difficulties mentioned
in the previous note remain true for trajectory manifolds, too.
3.3 Relaxed Parametrizations
Since the k-dimensional volume of k-null-sets of Rn is = 0, adding such sets in a k-dimensional
manifold (via union) does not change the k-dimensional volume of the manifold.
When defining a parametrized manifold as in Section 2.5 often parts of the manifold must
be removed first. For instance, parametrizing a sphere using spherical coordinates does not
work as such, but only after parts of the surface are removed (e.g. a half great circle). Properly
extending parametrization these excluded parts may be included again, at least as far as volumes
and other integrations are concerned.
A relaxed parametrization⁵ of a k-dimensional manifold M of Rⁿ is obtained by specifying an extended parameter domain U ⊆ Rᵏ, an exception set X ⊂ U, and a continuous function γ : U → Rⁿ, such that

1. M ⊆ γ(U) and γ(U − X) ⊆ M (frequently M = γ(U)).

2. The boundary ∂U of U is a null set of Rᵏ.⁶ Often ∂U is at least partly included in the exception set X.

3. X is a null set of Rᵏ.

4. γ(X) is a k-null-set of Rⁿ.

5. The set M′ = γ(U − X) is a k-dimensional manifold of Rⁿ parametrized by

    r = γ(u)   (u ∈ U − X)

(in the sense of Section 2.5).
⁵ This is in no way a standard concept in the literature, where several kinds of relaxed parametrizations appear. Here the concept is so general that according to HUBBARD & HUBBARD every manifold has a relaxed parametrization.
⁶ This strange-looking condition is needed to exclude certain "pathological" situations. Open subsets of Rᵏ may actually have a boundary that is not a null set.
Note that item 5. implies γ is continuously differentiable and bijective in U − X . In U it is just
continuous and not necessarily bijective. Furthermore, U − X is an open set but U might not
be. Thus, should U contain points of its boundary ∂U, they must be in X , too.
Example. Parametrization of a sphere by spherical coordinates in a form where the parametrization is extended to the whole sphere is an example of a relaxed parametrization:

    r = γ(θ, φ) = (R sin θ cos φ, R sin θ sin φ, R cos θ)   (U : 0 ≤ θ ≤ π , 0 ≤ φ ≤ 2π).

The exception set X is here the boundary ∂U of U. The corresponding ball (a 3-dimensional manifold of R³) in turn is obtained by the relaxed parametrization

    r = γ(ρ, θ, φ) = (ρ sin θ cos φ, ρ sin θ sin φ, ρ cos θ)   (U : 0 ≤ ρ ≤ R , 0 ≤ θ ≤ π , 0 ≤ φ ≤ 2π).

U is a rectangular prism whose boundary is the exception set.
Example. After removing the four vertices, the perimeter of a square is a 1-dimensional manifold M of R², consisting of four separate parts, with the relaxed parametrization

    γ(u) = (u + 3, −1)   for −4 ≤ u ≤ −2,
           (1, u + 1)    for −2 ≤ u ≤ 0,
           (1 − u, 1)    for 0 ≤ u ≤ 2,
           (−1, 3 − u)   for 2 ≤ u ≤ 4,

where the parameter domain is U : −4 ≤ u ≤ 4. The exception set is X = {−4, −2, 0, 2, 4}.

(Figure: the square perimeter M with vertices (±1, ±1).)
Similarly, relaxed parametrizations could be obtained for surfaces of cubes, or more generally, surfaces of polyhedra.
In volume computations the exception set X has no contribution, since it is mapped to γ(X), a k-null-set of Rⁿ:

    |M|k = |M′|k = ∫_{U−X} √( det( γ′(u)^T γ′(u) ) ) du .

If the integrand √( det( γ′(u)^T γ′(u) ) ) has a continuous extension onto the whole U, as is sometimes the case, we can write further

    |M|k = ∫_U √( det( γ′(u)^T γ′(u) ) ) du ,

since X is a null set of Rᵏ. Improper integrals are here allowed, too, giving even more "relaxation". Thus relaxed parametrizations can be used in volume computations and integrations more or less as the "usual" parametrizations of Section 2.5.
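As a concrete check (a Python/NumPy sketch, an added illustration), the relaxed parametrization of the square perimeter above gives the correct length 8; the exception set plays no role in the integral.

import numpy as np

def gamma(u):
    # The piecewise relaxed parametrization of the square perimeter above.
    if u <= -2.0:
        return np.array([u + 3.0, -1.0])
    if u <= 0.0:
        return np.array([1.0, u + 1.0])
    if u <= 2.0:
        return np.array([1.0 - u, 1.0])
    return np.array([-1.0, 3.0 - u])

# Midpoint rule for the integral of |gamma'(u)| over (-4, 4); the
# midpoints stay clear of the exception set {-4, -2, 0, 2, 4}.
m, h = 8000, 1e-7
us = -4.0 + (np.arange(m) + 0.5) * 8.0 / m
length = sum(np.linalg.norm((gamma(u + h) - gamma(u - h)) / (2 * h)) for u in us) * (8.0 / m)
print(length)    # approx 8.0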
Even relaxed reparametrization⁷ is possible and preserves volumes. A new extended parameter domain V ⊆ Rᵏ, an exception set Y ⊂ V, and a continuous function η : V → U are then sought, such that

1. restricted to Y, η is a bijective function Y → X.

2. restricted to V − Y, η is a continuously differentiable bijective function V − Y → U − X, and its derivative η′ is nonsingular (i.e. η then gives the "usual" reparametrization).

In such a relaxed reparametrization the exception set is mapped to an exception set, which guarantees preservation of the volume in reparametrization (as a consequence of Theorem 3.4). To make reparametrization in this sense possible, exception sets often need to be modified.

Example. The circle x² + y² = 1 is a 1-dimensional manifold of R² having e.g. the following two relaxed parametrizations:

    r = γ1(φ) = (cos φ, sin φ)   (0 ≤ φ < 2π)

(the usual polar coordinate parametrization with the exception set {0}), and

    r = γ2(u) = ( −u − 2, √(1 − (u + 2)²) )   for −3 ≤ u ≤ −1,
                ( u, −√(1 − u²) )             for −1 ≤ u < 1

(U : −3 ≤ u < 1; the exception set is {−3, −1}). These two relaxed parametrizations cannot be directly reparametrized to each other. If, however, in the exception set of the first parametrization the "unnecessary" number π is added, then a reparametrization is possible by the function

    u = η(φ) = −2 − cos φ   for 0 ≤ φ ≤ π,
               cos φ        for π ≤ φ < 2π.
Note. Relaxed parametrizations are possible for trajectory manifolds, too. And volumes will then also be preserved in relaxed reparametrizations.

⁷ Again in the literature this is defined in many different ways.
"Gradient a 1-form? How so? Hasn't one always known the gradient as a vector? Yes, indeed, but only because one was not familiar with the more appropriate 1-form concept. The more familiar gradient is the vector corresponding to the 1-form gradient."

(C.W. MISNER & K.S. THORNE & J.A. WHEELER: Gravitation)
Chapter 4
FORMS
4.1 k-Forms
Whenever vector fields are integrated—the result being a scalar—the fields first need to be
”prepared” in one way or another, so that the integrand is scalar-valued. A pertinent property
of a vector field is its direction, which then must be included somehow in the preparation.
Integration regions are here parametrized manifolds, possibly in a relaxed sense. Directions
related to the manifold can be included via tangent spaces or normal spaces. And in many
cases orientation of the manifold should be fixed, too. Examples of integrals of this type are the
familiar line and surface integrals
Z
Z
F(r) • ds and
F(r) • dS.
C
S
A general apparatus for the ”preparation” is given by so-called forms. In integration they
appear formally as a kind of differentials, and are therefore often called differential forms. In
various notations, too, differentials appear for this reason. Other than this, they do not have any
real connection with differentials.
An n-dimensional k-form is a function φ mapping n-dimensional vectors¹ r1, ..., rk (thus k vectors, and the order is relevant!) to a real number φ(r1, ..., rk), satisfying the conditions

1. φ is antisymmetric, i.e., interchanging any two vectors ri and rj changes the sign of the value φ(r1, ..., rk). Thus if for instance k = 4, then

    φ(r3, r2, r1, r4) = −φ(r1, r2, r3, r4).

2. φ is multilinear, i.e., it is linear with respect to any argument position (vector ri):

    φ(r1, ..., ri−1, c1 ri1 + c2 ri2, ri+1, ..., rk)
        = c1 φ(r1, ..., ri−1, ri1, ri+1, ..., rk) + c2 φ(r1, ..., ri−1, ri2, ri+1, ..., rk)

(similarly the cases i = 1 and i = k).

As a limiting case, 0-forms can be included, too: they are constant functions which do not have any variable vectors at all.

¹ Often tangent vectors where only the vector part is used. Note that in 1-, 2- and 3-dimensional spaces geometric vectors could be used.
Thinking about the intended use of k-forms—as devices for coupling vector fields and k-dimensional volumes for integration—this definition looks fairly minimal. Antisymmetry makes it possible to change direction by "reflecting", that is, interchanging two vector variables. Multilinearity on the one hand guarantees that the volume of a combination of two disjoint sets is duly obtained by adding the volumes of the two parts, and on the other hand implies that scaling works correctly, i.e., if the vector variables are multiplied (scaled) by the scalar λ, then the value is scaled by λᵏ.
Example. A familiar example of an n-dimensional n-form is the determinant det. Take n vectors r1 , . . . , rn , consider them as columns of a matrix: (r1 , . . . , rn ), and compute its determinant
det(r1 , . . . , rn ). It follows directly from basic properties of determinants that this is an n-form.
Example. An equally familiar example of an n-dimensional 1-form is the dot product with a fixed vector. If a is a fixed constant vector, then the function φ(r) = a • r is a 1-form. Note that antisymmetry is not relevant here because there is only one vector variable.

The dot product r1 • r2 is however not a 2-form (and why not?).
Example. A further familiar example of a 3-dimensional 2-form is the first component (or any other component) of the cross product r1 × r2. Properties of the cross product immediately imply that

    (r2 × r1)1 = (−r1 × r2)1 = −(r1 × r2)1

(antisymmetry) and that

    ( r1 × (c1 r21 + c2 r22) )1 = (c1 r1 × r21 + c2 r1 × r22)1 = c1 (r1 × r21)1 + c2 (r1 × r22)1

(multilinearity).
Let us list some basic properties of forms:
Theorem 4.1. (i) If φ(r1 , . . . , rk ) is an n-dimensional k-form and c is a constant, then
cφ(r1 , . . . , rk ) is also a k-form.
(ii) If φ1 (r1 , . . . , rk ) and φ2 (r1 , . . . , rk ) are n-dimensional k-forms, then so is their sum
φ1 (r1 , . . . , rk ) + φ2 (r1 , . . . , rk ).
(iii) If φ(r1, ..., rk) is an n-dimensional k-form and c is a constant vector, then

    φ1(r1, ..., rk−1) = φ(r1, ..., rk−1, c)

is an n-dimensional k − 1-form. (And similarly when c is in any other argument position.)
(iv) If the vectors a1 , . . . , ak are linearly dependent, then the value of a k-form φ(a1 , . . . , ak )
is = 0. In particular, if some ai is the zero vector, then φ(a1 , . . . , ak ) = 0.
Proof. Items (i), (ii) and (iii) are immediate, so let us proceed to item (iv). We notice first that if two of the vectors a1, ..., ak are the same, then the value is = 0. Indeed, interchanging these vectors the value changes its sign, and also remains the same. If then a1, ..., ak are linearly dependent, one of them can be expressed as a linear combination of the others, say,

    ak = Σ_{i=1}^{k−1} ci ai .
But then by the multilinearity

    φ(a1, ..., ak) = Σ_{i=1}^{k−1} ci φ(a1, ..., ak−1, ai),

which is = 0. In particular, if say a1 = 0, then

    φ(0, a2, ..., ak) = φ(0 · 0, a2, ..., ak) = 0 · φ(0, a2, ..., ak) = 0.
As a consequence of item (iv), an n-dimensional k-form where k > n is rather uninteresting: it
always has the value 0.
Forms in fact are a kind of generalization of determinants. To see this, let us see how an n-dimensional k-form φ(r1, ..., rk) is represented when the vectors r1, ..., rk are represented in an orthonormalized basis e1, ..., en as

    ri = Σ_{j=1}^{n} xj,i ej   (i = 1, ..., k).

Multilinearity of the last argument position rk gives

    φ(r1, ..., rk) = Σ_{j=1}^{n} xj,k φ(r1, ..., rk−1, ej).

Continuing, using the representation of rk−1 in the basis gives further

    φ(r1, ..., rk) = Σ_{j=1}^{n} Σ_{l=1}^{n} xl,k−1 xj,k φ(r1, ..., rk−2, el, ej).

Note that all terms having l = j can be omitted in the sum. In this way we finally get a representation of φ(r1, ..., rk) as a sum of terms of the form

    xj1,1 xj2,2 ··· xjk,k φ(ej1, ej2, ..., ejk),

where j1, j2, ..., jk are distinct indices.
Let us then collect together terms where the indices j1, j2, ..., jk are the same, just possibly in a different order. As an example take the case where the indices are the numbers 1, 2, ..., k in various permuted orders. The other cases are of course similar. These then are the k! terms

    xj1,1 xj2,2 ··· xjk,k φ(ej1, ej2, ..., ejk),

where j1, j2, ..., jk are the numbers 1, 2, ..., k in some order. Interchanging the vectors in φ(ej1, ej2, ..., ejk) repeatedly to make the order "correct", this term can be written as

    ±a xj1,1 xj2,2 ··· xjk,k,

where a = φ(e1, e2, ..., ek) and the sign ± is determined by the parity of the number of uses of antisymmetry ('even' = +, 'odd' = −). On the other hand, this is exactly the way the determinant

    | x1,1  x1,2  ···  x1,k |
    | x2,1  x2,2  ···  x2,k |
    |  ...   ...  ···   ... |
    | xk,1  xk,2  ···  xk,k |
is expanded as a sum of products, the only difference is that then a is = 1.
The above indicates that in a coordinate representation, certain k-forms of a special type seem to be central. These forms, the so-called elementary k-forms, are defined as follows. Of the indices 1, 2, ..., n take k indices in increasing order: j1 < j2 < ··· < jk. Then the elementary k-form²

    dxj1 ∧ dxj2 ∧ ··· ∧ dxjk

is defined by

                                              | xj1,1  xj1,2  ···  xj1,k |
                                              | xj2,1  xj2,2  ···  xj2,k |
    (dxj1 ∧ dxj2 ∧ ··· ∧ dxjk)(r1, ..., rk) = |  ...    ...   ···   ...  | .
                                              | xjk,1  xjk,2  ···  xjk,k |

Thus we get the value of the form by taking the elements having the indices j1, j2, ..., jk from r1, ..., rk, forming the corresponding k × k determinant, and computing the value of the determinant. As a special case an elementary 0-form is included, it always has the value = 1. Obviously there are

    n! / (k!(n − k)!)

(the binomial coefficient "n choose k") elementary n-dimensional k-forms. In particular there is one elementary 0-form and one elementary n-form. This makes it possible to "embed" single scalars in forms in two ways. On the other hand there are n elementary 1-forms, and also n elementary n − 1-forms, which makes it possible to "embed" vectors in forms via the coefficients, again in two ways (even when n = 2, in a sense!).
Note. The definition of the above k-form

    dxj1 ∧ dxj2 ∧ ··· ∧ dxjk

using a determinant actually works even when the indices j1, j2, ..., jk are not in order of magnitude. This possibility is a convenient one to allow. For instance, interchanging dxjl and dxjh interchanges the lth and the hth rows in the determinant and thus changes its sign. Furthermore, if two (or more) of the indices j1, j2, ..., jk are the same, then we naturally agree that the k-form dxj1 ∧ dxj2 ∧ ··· ∧ dxjk has the value 0.
Example. n-dimensional elementary 1-forms are simply the component functions. If

    r = (x1, x2, ..., xn)^T,

then (dxj)(r) = xj (j = 1, ..., n). As determinants these are 1 × 1 determinants.
² This particular notation is traditional. It does not have much to do with differentials. The symbol '∧' is read "wedge". The corresponding binary operation, the so-called wedge product (or sometimes the exterior product), is generally available for forms, as will be seen.
CHAPTER 4. FORMS
50
Example. The first component of the cross product r1 × r2 is a 3-dimensional elementary 2-form. If

    r1 = (x1, y1, z1)^T   and   r2 = (x2, y2, z2)^T,

then the first component is

    (r1 × r2)1 = y1 z2 − z1 y2 = | y1  y2 |
                                 | z1  z2 | = (dy ∧ dz)(r1, r2).

Other components are 2-forms, too:

              ( (dy ∧ dz)(r1, r2) )
    r1 × r2 = ( (dz ∧ dx)(r1, r2) ) .
              ( (dx ∧ dy)(r1, r2) )

Here in the 2-form dz ∧ dx the variables are not in the correct order, cf. the above note.

Example. n-dimensional elementary n-forms are simply n × n determinants:

    (dx1 ∧ dx2 ∧ ··· ∧ dxn)(r1, r2, ..., rn) = det(r1, r2, ..., rn).

If the variables are not in the correct order and/or are repeated, then

    (dxj1 ∧ dxj2 ∧ ··· ∧ dxjn)(r1, r2, ..., rn) = det(rj1, rj2, ..., rjn),

since it does not matter whether it is the rows or the columns that are interchanged.
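Elementary k-forms translate directly into code (a Python/NumPy sketch, an added illustration): pick the rows with indices j1, ..., jk from the matrix (r1, ..., rk) and take the determinant. The cross product components come out as promised.

import numpy as np

def elementary_form(js, vectors):
    # (dx_{j1} ^ ... ^ dx_{jk})(r1, ..., rk): select the rows js
    # (0-based here) of the matrix with the vectors as columns and
    # compute the k x k determinant.
    M = np.column_stack(vectors)
    return np.linalg.det(M[list(js), :])

r1 = np.array([1.0, 2.0, 3.0])
r2 = np.array([4.0, 5.0, 6.0])

# 0-based indices: x = 0, y = 1, z = 2.
components = [elementary_form((1, 2), [r1, r2]),   # dy ^ dz
              elementary_form((2, 0), [r1, r2]),   # dz ^ dx
              elementary_form((0, 1), [r1, r2])]   # dx ^ dy
print(np.array(components))
print(np.cross(r1, r2))       # the same vector (-3, 6, -3)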
The construct above gives a representation of forms by elementary forms:

Theorem 4.2. If φ is an n-dimensional k-form, then

    φ(r1, ..., rk) = Σ_{1 ≤ j1 < j2 < ··· < jk ≤ n} aj1,j2,...,jk (dxj1 ∧ dxj2 ∧ ··· ∧ dxjk)(r1, ..., rk),

where

    aj1,j2,...,jk = φ(ej1, ej2, ..., ejk).

The representation is unique, i.e., the coefficients aj1,j2,...,jk can only be chosen in one way (and why is that?).

Example. The n-dimensional 1-forms are exactly all forms

    φ(r) = Σ_{j=1}^{n} aj xj = a • r,

i.e., exactly all forms obtained by taking a dot product with a constant vector a = (a1, ..., an)^T. It might be a bit confusing to consider this using elementary 1-forms:

    a1 dx1 + a2 dx2 + ··· + an dxn.
Example. The 3-dimensional 2-forms are exactly all forms
φ(r1 , r2 ) = a • r1 × r2 ,
where a is a constant vector. This mostly explains the central role of the scalar triple product
(and the cross product) in vector analysis. This can also be written using determinants as
φ(r1 , r2 ) = det(a, r1 , r2 ).
In general, thinking about expanding a determinant along the first column, the n-dimensional
n − 1-forms are exactly all forms
φ(r1 , . . . , rn−1 ) = det(a, r1 , . . . , rn−1 ),
where a is a constant vector.
Example. The n-dimensional n-forms are exactly all forms
φ(r1 , . . . , rn ) = a det(r1 , . . . , rn ),
where a is a scalar constant, or, given by elementary n-forms,
φ = a dx1 ∧ dx2 ∧ · · · ∧ dxn .
By Theorem 4.2, the general wedge product can now be defined. If the forms are given using elementary forms as

    φ(r1, ..., rk) = Σ_{1 ≤ j1 < ··· < jk ≤ n} aj1,...,jk (dxj1 ∧ ··· ∧ dxjk)(r1, ..., rk)

(a k-form) and

    ψ(s1, ..., sl) = Σ_{1 ≤ h1 < ··· < hl ≤ n} bh1,...,hl (dxh1 ∧ ··· ∧ dxhl)(s1, ..., sl)

(an l-form), then their wedge product is the k + l-form

    (φ ∧ ψ)(r1, ..., rk, s1, ..., sl)

whose representation by elementary forms is

    φ ∧ ψ = Σ_{1 ≤ j1 < ··· < jk ≤ n , 1 ≤ h1 < ··· < hl ≤ n} aj1,...,jk bh1,...,hl dxj1 ∧ ··· ∧ dxjk ∧ dxh1 ∧ ··· ∧ dxhl,

with the proviso of the above note concerning order and/or repetition of indices. If k = 0, that is φ is a 0-form (constant), this simply is multiplication by a constant. Note especially that the wedge product is associative and distributive, i.e.,

    φ ∧ (ψ ∧ ξ) = (φ ∧ ψ) ∧ ξ ,
    φ ∧ (ψ + ξ) = φ ∧ ψ + φ ∧ ξ   and   (φ + ψ) ∧ ξ = φ ∧ ξ + ψ ∧ ξ.
Note. The concept of a form was defined without using any coordinate systems. On the other hand, the wedge product above was defined via elementary forms, and thus using a coordinate system. It can, however, also be defined without elementary forms, starting from the original definition of forms, and so is coordinate-free. See e.g. HUBBARD & HUBBARD.
4.2 Form Fields

A k-form field³ of Rⁿ is a function which for each point p gives a k-form:

    Φ(p; r1, ..., rk).

Here r1, ..., rk are the vector variables for the k-form given by the form field for the point p. Often they are considered as tangent vectors with the point of action p. By Theorem 4.2, to define a form field Φ it suffices to give the coefficients in the presentation by elementary forms, and they will depend on p:

    Φ(p; r1, ..., rk) = Σ_{1 ≤ j1 < j2 < ··· < jk ≤ n} aj1,j2,...,jk(p) (dxj1 ∧ dxj2 ∧ ··· ∧ dxjk)(r1, ..., rk),

where the needed n!/(k!(n − k)!) coefficients are

    aj1,j2,...,jk(p) = Φ(p; ej1, ej2, ..., ejk).
Often a form field is not defined in the whole space Rn but only in a subset, e.g. in a manifold.
Example. A constant k-form field is one depending only on the vector variables and not on the
point p, i.e. a form φ(r1 , . . . , rk ).
Example. The n-dimensional 1-form fields are the fields
Φ(p; r) = F(p) • r,
where F is a vector-valued function (vector field) defined in a suitable subset of Rn (say, a
manifold).
Example. A general 2-form field of R³ has the form

    Φ(p; r1, r2) = F(p) • r1 × r2,

where F is a vector field defined in a suitable subset of R³ (say, a manifold).
Example. The n-dimensional n − 1-form fields are the fields

    Φ(p; r1, ..., rn−1) = det( F(p), r1, ..., rn−1 ),

where F is a vector-valued function (vector field) defined in a suitable subset of Rⁿ (say, a manifold).

Example. A general n-form field of Rⁿ has the form

    Φ(p; r1, ..., rn) = a(p) det(r1, ..., rn),

where a is a real-valued function (scalar field) defined in a suitable subset of Rⁿ (say, a manifold).
³ For some reason form fields are also called differential forms in some literature, though they have little to do with differentials.
These examples already indicate how manifolds, vector and scalar fields and forms are connected: using forms, vector and scalar fields are transformed into form fields which then are integrated over a region, often a manifold or a parameter domain, possibly in a relaxed parametrization. In the general case such integrals are hard to deal with. For parametrizations it is easier, in principle anyway.

If a k-dimensional manifold M of Rⁿ has a (relaxed) parametrization

    r = γ(u)   (u ∈ U)

and Φ(p; r1, ..., rk) is an n-dimensional k-form field defined in M, then its integral over M is

    ∫_M Φ = ∫_U Φ( γ(u); γ′(u) ) du.

Here the k columns of γ′(u) are interpreted as the vector variables of the form field. Many integrals familiar from basic courses on mathematics are of this type, and they now have a uniform formulation.
Example. The line integral of a vector field F over the smooth curve

    C : r = γ(u)   (u ∈ U),

i.e.,

    ∫_C F(r) • ds = ∫_U F(γ(u)) • γ′(u) du,

is the integral of the 1-form field

    Φ(p; r) = F(p) • r

over C. The curve C could here also be only piecewise smooth via a relaxed parametrization. It certainly should be obvious why this integral often is given (in R³) as

    ∫_C F1(r) dx + F2(r) dy + F3(r) dz.
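Numerically this is just a one-dimensional integral over the parameter interval (a Python/NumPy sketch with a made-up sample field, added as an illustration): for F(r) = (−y, x) on the unit circle the integral of the corresponding 1-form field is 2π.

import numpy as np

def F(p):                        # a sample field F(x, y) = (-y, x)
    return np.array([-p[1], p[0]])

def gamma(u):                    # the unit circle
    return np.array([np.cos(u), np.sin(u)])

def dgamma(u):
    return np.array([-np.sin(u), np.cos(u)])

# Midpoint rule for the integral of Phi(p; r) = F(p) . r over the circle:
m = 1000
us = (np.arange(m) + 0.5) * 2 * np.pi / m
integral = sum(F(gamma(u)) @ dgamma(u) for u in us) * (2 * np.pi / m)
print(integral, 2 * np.pi)       # both approx 6.2832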
Example. Similarly the surface integral of the vector field F over the smooth surface

    S : r = γ(u)   (u ∈ U),

i.e.,

    ∫_S F(r) • dS = ∫_U F(γ(u)) • ( ∂γ(u)/∂u1 × ∂γ(u)/∂u2 ) du,

is the integral of the 2-form field

    Φ(p; r1, r2) = F(p) • r1 × r2

over S. Here, too, the surface S could be only piecewise smooth via a relaxed parametrization. Thinking about the connection of the cross product and elementary 2-forms, it should not be surprising that this integral is sometimes given as

    ∫_S F1(r) dy ∧ dz + F2(r) dz ∧ dx + F3(r) dx ∧ dy,

or even

    ∫_S F1(r) dy dz + F2(r) dz dx + F3(r) dx dy.
Example. The integral of a real-valued function f over an n-dimensional parametrized manifold of Rⁿ,

    M : r = γ(u)   (u ∈ U)

(possibly in a relaxed parametrization), i.e.

    ∫_M f(r) dr = ∫_U f(γ(u)) |det(γ′(u))| du

(recall change of variables in an integral), is not the integral of a form field over M, because of the absolute value of the determinant. But the integral

    ∫_U f(γ(u)) det(γ′(u)) du

is, i.e., it is the integral of the n-form field

    Φ(p; r1, ..., rn) = f(p) det(r1, ..., rn)

over M.

And what about the case of a 0-form field Φ of Rⁿ? It simply is a real-valued function Φ(p) of n variables. On the other hand, a 0-dimensional manifold M of Rⁿ consists of separate points (possibly infinitely many), so

    ∫_M Φ = Σ_{p ∈ M} Φ(p).
4.3 Forms and Orientation of Manifolds
Geometrically a surface of R3 can be oriented by giving in each point a normal vector which
varies continuously, and never equals the zero vector, and thus cannot jump or move to the other
side of the surface. Similarly, a curve can be oriented by giving in each point a tangent vector
which varies continuously and never equals the zero vector, and thus cannot turn around.
In general, a k-dimensional manifold M of Rⁿ can be oriented with a similar idea, although it might be difficult to accomplish. The orientation is obtained by taking, if possible, an n-dimensional k-form field

Φ(p; r₁, . . . , rₖ)

such that

1. in each point p₀ of the manifold the k-form Φ(p₀; r₁, . . . , rₖ) is defined in the tangent space T_{p₀}(M), i.e., Φ(p₀; r₁, . . . , rₖ) is defined for all vectors r₁, . . . , rₖ in the tangent space T_{p₀}(M), cf. Section 2.6. Of course, the k-form may well be defined for other vectors as well but they are not needed here.
2. Φ is continuous with respect to the variable p in M and does not change sign. More specifically, for every point p of M there exist vectors

t₁(p), . . . , tₖ(p)

of the tangent space Tₚ(M) such that

Φ(p; t₁(p), . . . , tₖ(p))

is continuous with respect to p in M, and is either positive in M or negative in M.

The sign can then be used for orientation. Note that the vectors t₁(p), . . . , tₖ(p) must then constitute a basis for the tangent space Tₚ(M), since otherwise the form would have the value 0. The question thus is whether or not a basis with the same orientation (the same "handedness") can always be chosen for each tangent space.
If a manifold M is parametrized by

p = γ(u)  (u ∈ U),

then it is possible to use this to orient M. Item 1. remains the same but item 2. is replaced by

2.′ Φ(γ(u); γ′(u)) is continuous with respect to the parameter u, and does not change sign.

This is not a different type of orientation than the general one above; the parametrization is just used to choose the bases for the tangent spaces. As was seen in the proof of Theorem 2.6, locally the parameter u can be given as a continuous—and even continuously differentiable—function u = g(r) of some components r of p. Thus γ⁻¹ is continuous in M, and according to item 2., the orientation is given by the value Φ(p; γ′(γ⁻¹(p))).
Not all manifolds can be oriented. Classical examples are various Möbius' bands. An example is given by the relaxed parametrization

γ₁(u) = (1 + u₁ cos(u₂/2)) cos u₂
γ₂(u) = (1 + u₁ cos(u₂/2)) sin u₂   (U : −1 < u₁ < 1, 0 ≤ u₂ < 2π).
γ₃(u) = u₁ sin(u₂/2)

If the parameter value u₂ = 0 is omitted, the band is "severed" and is a parametrized 2-dimensional manifold of R³. (Proving this takes a bit of tedious calculation.) The severed band can be oriented using the parametrization. On the other hand, if in the relaxed parametrization we take another interval for the parameter u₂, say π ≤ u₂ < 3π, the break would move elsewhere. Since "manifoldness" is a local property, this basically means that the "unsevered" Möbius band is a manifold, too—but not a parametrized one. Looking at the figure on the right (Maple) it is at least geometrically fairly obvious that the band cannot be oriented (the exact proof again being tedious).
Manifolds that can be oriented are called orientable. If the orientation of an orientable manifold M is fixed (using some form field), this is indicated by writing M⃗. The opposite orientation is then often denoted by −M⃗ (sometimes by M⃖).
Orientations of surfaces and curves are actually examples of this method of orientation.
Example. A smooth parametrized surface

S : r = γ(u)  (u ∈ U)

can be oriented remembering that its tangent space in the point γ(u) is the column space of γ′(u) (Corollary 2.10), i.e., the two columns of γ′(u) constitute a basis of the space. Agreeing that the first column is the first basis vector t₁, and the second column the second basis vector t₂, already orients the surface. Since the surface is smooth (a 2-dimensional parametrized manifold of R³), these vectors are always linearly independent, and thus cannot be "interchanged" (see the figure below). Geometrically, the normal vector

∂γ(u)/∂u₁ × ∂γ(u)/∂u₂

always points to the same side of the surface.

But what is the 2-form field needed for the orientation? The field is known to be of the form

Φ(p; r₁, r₂) = F(p) • r₁ × r₂,

where F is a vector field defined in the surface. We simply choose

F(p) = F(γ(u)) = ∂γ(u)/∂u₁ × ∂γ(u)/∂u₂.

Then the value of the 2-form is

‖∂γ(u)/∂u₁ × ∂γ(u)/∂u₂‖²

and it is always positive. Other choices are of course possible, too, but maybe not quite as natural.
Let us try the orientation on the torus. A relaxed parametrization of a torus is

γ₁(u) = (2 + cos u₂) cos u₁
γ₂(u) = (2 + cos u₂) sin u₁
γ₃(u) = sin u₂,

where the parameter domain is U : 0 ≤ u₁, u₂ < 2π. As for the Möbius band, it may be noticed that this torus is a manifold, but not a parametrized one. For the orientation we omit the value u₁ = 0 (although the whole torus is of course orientable, too). See the figure on the right (Maple). Basis vectors of the tangent space are (in this order)

∂γ(u)/∂u₁ = (−(2 + cos u₂) sin u₁, (2 + cos u₂) cos u₁, 0)ᵀ

and

∂γ(u)/∂u₂ = (−sin u₂ cos u₁, −sin u₂ sin u₁, cos u₂)ᵀ.
Which side of the torus is the one where the basis vectors are positively ordered (as in the xy-coordinate system)? Take the basis vectors for the parameter values u₁ = u₂ = π, i.e. in the point (−1, 0, 0) of the x-axis:

∂γ(π, π)/∂u₁ = (0, −1, 0)ᵀ   and   ∂γ(π, π)/∂u₂ = (0, 0, −1)ᵀ.

These basis vectors are ordered as the negative y- and z-axes, thus this side is the "outside". (In the "inside" view the axes are interchanged.)
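This little computation is easy to replicate numerically. The sketch below (an illustration, not part of the original notes) evaluates the tangent basis at u₁ = u₂ = π and their cross product:

```python
import numpy as np

def d1(u1, u2):   # dgamma/du1 for the torus parametrization above
    return np.array([-(2 + np.cos(u2))*np.sin(u1),
                     (2 + np.cos(u2))*np.cos(u1), 0.0])

def d2(u1, u2):   # dgamma/du2
    return np.array([-np.sin(u2)*np.cos(u1),
                     -np.sin(u2)*np.sin(u1), np.cos(u2)])

t1, t2 = d1(np.pi, np.pi), d2(np.pi, np.pi)
print(t1, t2)                # [0. -1. 0.] and [0. 0. -1.]
print(np.cross(t1, t2))      # [1. 0. 0.]
```

At the point (−1, 0, 0) the center circle of the tube passes through (−2, 0, 0), so the normal (1, 0, 0) points away from the tube, out of the torus, in accordance with the basis vectors being positively ordered when viewed from outside.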
Example. A parametrized smooth space curve

C : p = γ(u)  (u ∈ U)

can be oriented similarly. In the point γ(u) the basis vector γ′(u) of the tangent space is chosen, and the curve is oriented. The 1-form field needed is of the form

Φ(p; r) = F(p) • r

and we choose

F(γ(u)) = γ′(u);

the value of the 1-form is then ‖γ′(u)‖² > 0. We could have chosen, say,

F(γ(u)) = 2γ′(u)

(which gives the same orientation) or

F(γ(u)) = −γ′(u)

(which gives the opposite orientation).
If a manifold consists of several disjoint parts, the orientation of each part can be chosen arbitrarily; the continuity condition does not connect the parts in any way. If, for instance, in a 3-dimensional manifold of R³ consisting of two disjoint parts the coordinate system in one of the parts is right-handed, nothing prevents us from taking a left-handed system for the other part (except the general preference for right-handed systems).
For a connected manifold the orientation is coherent all over the manifold. A manifold M of Rⁿ is said to be connected if for each two points p₀ and p₁ of the manifold there is a continuous function f : [0, 1] → M such that

f(0) = p₀   and   f(1) = p₁.

In other words, any two points of the manifold can always be connected by a continuous curve in the manifold.
Theorem 4.3. If the oriented manifold M is connected then an orientation in one point gives
the orientation everywhere in the manifold.
Proof. Suppose M is oriented as above. The function

Φ(p; t₁(p), . . . , tₖ(p))

of the point p of the manifold in item 2. is then in the chosen point p₀ either positive or negative. Any other point p₁ of the manifold can be connected to p₀ using some continuous function f : [0, 1] → M, where p₀ = f(0) and p₁ = f(1). But then the function

Φ(f(s); t₁(f(s)), . . . , tₖ(f(s)))

is a continuous function of s in the interval [0, 1] and does not attain the value 0. Thus

Φ(p₀; t₁(p₀), . . . , tₖ(p₀))   and   Φ(p₁; t₁(p₁), . . . , tₖ(p₁))

have the same sign.
In reparametrization the orientation of a parametrized manifold may be changed to the opposite. This can be prevented by setting a further condition:
Theorem 4.4. If the oriented parametrized k-dimensional manifold

M : r = γ(u)  (u ∈ U)

of Rⁿ is reparametrized by

u = η(v)  (v ∈ V),

where det η′(v) > 0, then the orientation remains the same. If, on the other hand, det η′(v) < 0, then the orientation will change. (It is naturally assumed here that the orientation is defined using the same k-form field.)
Proof. As above, the manifold M is oriented using some k-form field Φ(p; r₁, . . . , rₖ). Consider then the orientation in some point p₀ = γ(u₀) of M, given by the value Φ(p₀; γ′(u₀)). By Theorem 4.2 the k-form Φ(p₀; r₁, . . . , rₖ) can be represented as a linear combination of elementary k-forms. Let us show that these forms will be multiplied by the same positive number in reparametrization.

The elementary k-forms above are of the form

(dx_{j₁} ∧ dx_{j₂} ∧ · · · ∧ dx_{jₖ})(r₁, . . . , rₖ) =
  | x_{j₁,1}  x_{j₁,2}  · · ·  x_{j₁,k} |
  | x_{j₂,1}  x_{j₂,2}  · · ·  x_{j₂,k} |
  |    ⋮         ⋮        ⋱      ⋮     |
  | x_{jₖ,1}  x_{jₖ,2}  · · ·  x_{jₖ,k} |,

where x_{j,i} denotes the j-th component of the vector rᵢ. On the other hand, in the new parametrization a basis for the tangent space is given by the columns of the matrix

γ′(u₀)η′(v₀)

(via the chain rule), where u₀ = η(v₀), and the value of the corresponding new elementary k-form is

(dx_{j₁} ∧ dx_{j₂} ∧ · · · ∧ dx_{jₖ})(γ′(u₀)η′(v₀)) = (dx_{j₁} ∧ dx_{j₂} ∧ · · · ∧ dx_{jₖ})(γ′(u₀)) det(η′(v₀)).

(Remember matrix multiplication and that for square matrices det(AB) = det(A) det(B).) All elementary k-forms appearing here are thus multiplied by the same positive number det(η′(v₀)). So

Φ(p₀; γ′(η(v₀))η′(v₀)) = Φ(p₀; γ′(u₀)) det(η′(v₀)),

and the orientation does not change.

Showing that the orientation does change in case det η′(v) < 0 is analogous.
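The determinant identity used above is easy to test numerically. In the sketch below (an illustration with random matrices, not part of the original notes) G stands for γ′(u₀) and H for η′(v₀); every elementary 2-form of R⁴ is multiplied by det H when the columns of G are replaced by those of GH:

```python
import numpy as np

rng = np.random.default_rng(0)

def elem_form(j, M):
    """Value of dx_{j1} ^ dx_{j2} on the two columns of the n x 2 matrix M."""
    return np.linalg.det(M[list(j), :])

G = rng.standard_normal((4, 2))   # stand-in for gamma'(u0)
H = rng.standard_normal((2, 2))   # stand-in for eta'(v0)

for j in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    lhs = elem_form(j, G @ H)
    rhs = elem_form(j, G) * np.linalg.det(H)
    print(j, np.isclose(lhs, rhs))   # True for every index pair
```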
Besides sketching the geometric picture, orientation is important for the behavior of integrals under reparametrization.
Theorem 4.5. An orientation preserving reparametrization of a parametrized oriented manifold does not change integrals of form fields over the manifold. If, on the other hand, the reparametrization changes the orientation, integrals change sign.
Proof. If the parametrized oriented manifold

M⃗ : r = γ(u)  (u ∈ U)

is reparametrized preserving orientation, then the integration variable in the integral is changed by

u = η(v)  (v ∈ V),

and the integral

∫_{M⃗} Φ = ∫_U Φ(γ(u); γ′(u)) du = ∫_V Φ(γ(η(v)); γ′(η(v))) |det η′(v)| dv

is obtained. Since the reparametrization was orientation preserving, det η′(v) must be positive by the previous theorem. Thus the integral can be written as

∫_{M⃗} Φ = ∫_V Φ(γ(η(v)); γ′(η(v))) det η′(v) dv = ∫_V Φ(γ(η(v)); γ′(η(v)) η′(v)) dv

(cf. the previous proof), and this is the integral of the form field Φ over M given in the new parametrization.

On the other hand, if the reparametrization does change the orientation, then det η′(v) is negative, and the sign is changed.
Change of sign of an integral by change of orientation is denoted as

∫_{−M⃗} Φ = ∫_{M⃖} Φ = −∫_{M⃗} Φ.
4.4 Basic Form Fields of Physical Fields
The basic form fields of a vector field F(p) of Rⁿ are the work form field

Φ_{F–work}(p; r) = F(p) • r

(a 1-form field), and the flux form field

Φ_{F–flux}(p; r₁, . . . , r_{n−1}) = det(F(p), r₁, . . . , r_{n−1})

(an (n−1)-form field).

The basic form fields of a scalar field f(p) are the field itself (as a 0-form field), and the density form field

Φ_{f–density}(p; r₁, . . . , rₙ) = f(p) det(r₁, . . . , rₙ)

(an n-form field, also called the mass form field).
Physically, in a work form field F is typically a force field—e.g. gravitation—and in a flux form field it is a flux—e.g. the flux of a fluid flow. The scalar field of a density form field is a density quantity—e.g. mass density. Integrals of these over manifolds give the corresponding net quantities—the work needed to move a particle in a gravitational field, the net flow of fluid through a surface, the mass of a solid body, etc.
Using the wedge product the familiar operations of fields in R³ can be transferred to form fields:

1. For scalar fields f and g and a vector field F we of course have f ∧ g = fg, and also

f ∧ Φ_{F–work} = Φ_{fF–work},   f ∧ Φ_{F–flux} = Φ_{fF–flux}   and   f ∧ Φ_{g–density} = Φ_{fg–density}.

2. For vector fields F and G

Φ_{F–work} ∧ Φ_{G–work} = Φ_{F×G–flux}   and   Φ_{F–work} ∧ Φ_{G–flux} = Φ_{F•G–density}.
As an example, let us verify the formula

Φ_{F–work} ∧ Φ_{G–flux} = Φ_{F•G–density},

leaving the rest for the reader as an exercise:

(F₁ dx + F₂ dy + F₃ dz) ∧ (G₁ dy ∧ dz + G₂ dz ∧ dx + G₃ dx ∧ dy)
  = F₁G₁ dx ∧ dy ∧ dz + F₂G₂ dy ∧ dz ∧ dx + F₃G₃ dz ∧ dx ∧ dy
  = (F • G) dx ∧ dy ∧ dz.
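The same identity can be checked numerically. For a 1-form α and a 2-form β the wedge product evaluates as (α ∧ β)(r₁, r₂, r₃) = α(r₁)β(r₂, r₃) − α(r₂)β(r₁, r₃) + α(r₃)β(r₁, r₂), and the sketch below (an illustration with randomly chosen constant fields, not from the original notes) compares Φ_{F–work} ∧ Φ_{G–flux} with Φ_{F•G–density}:

```python
import numpy as np

rng = np.random.default_rng(1)
F, G = rng.standard_normal(3), rng.standard_normal(3)   # constant fields
r1, r2, r3 = rng.standard_normal((3, 3))                # three test vectors

alpha = lambda r: F @ r                    # work 1-form of F
beta = lambda a, b: G @ np.cross(a, b)     # flux 2-form of G

# (alpha ^ beta)(r1, r2, r3) by the shuffle formula for a 1-form and a 2-form
wedge = alpha(r1)*beta(r2, r3) - alpha(r2)*beta(r1, r3) + alpha(r3)*beta(r1, r2)

# density 3-form of the scalar field F . G on the same vectors
density = (F @ G) * np.linalg.det(np.column_stack([r1, r2, r3]))

print(np.isclose(wedge, density))          # True
```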
Further convenience is given by the Hodge star operator ∗, which transforms a k-form field Φ in an n-dimensional space to an (n−k)-form field ∗Φ (the Hodge dual) as follows. If

Φ = Σ_{j₁<···<jₖ} a_{j₁,...,jₖ}(p) dx_{j₁} ∧ · · · ∧ dx_{jₖ},

then

∗Φ = Σ_{j₁<···<jₖ} ±a_{j₁,...,jₖ}(p) dx_{j_{k+1}} ∧ · · · ∧ dx_{jₙ},

where in each summand the indices j₁, . . . , jₙ are exactly all the numbers 1, . . . , n and the sign ± is determined by the condition

dx_{j₁} ∧ · · · ∧ dx_{jₙ} = ±dx₁ ∧ · · · ∧ dxₙ.

For a 0-form field (scalar field) f and an n-form field Φ_{f–density} we define ∗f = Φ_{f–density} and ∗Φ_{f–density} = f. In R³ we then see (check!) that ∗Φ_{F–work} = Φ_{F–flux} and ∗Φ_{F–flux} = Φ_{F–work}. All this makes it easy to change types of fields. Note also that for a k-form field Φ of Rⁿ the dual of the dual is ∗∗Φ = (−1)^{k(n−k)} Φ, and in particular in R³ then ∗∗Φ = Φ.
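The sign ± is just the parity of the permutation (j₁, . . . , jₙ), so the index bookkeeping of the Hodge dual is short to program. A minimal sketch (0-based indices; an illustration, not from the original notes):

```python
def perm_sign(seq):
    """Sign of the permutation seq of 0..n-1, by counting inversions."""
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def hodge_dual_indices(j, n):
    """For increasing indices j of an elementary k-form, return the
    complementary increasing indices and the sign determined by
    dx_{j1} ^ ... ^ dx_{jn} = (+-) dx_0 ^ ... ^ dx_{n-1}."""
    rest = tuple(i for i in range(n) if i not in j)
    return rest, perm_sign(j + rest)

# in R^3: *dx = dy^dz, *dy = -(dx^dz) = dz^dx, *dz = dx^dy
for j in [(0,), (1,), (2,)]:
    print(j, hodge_dual_indices(j, 3))
```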
There are physical fields which are neither scalar nor vector fields. For instance, the electric field E is not a vector field (and of course not a scalar field either). Lorentz's law says that the force acting on a charged particle with charge q in the point r is

q(E(r) + v × B(r)),

where v is the velocity of the particle and B is the magnetic flux density. If there is no ambient electric field, the particle does not move, and the coordinate frame is stationary, there is no force (i.e. the force is a zero vector). In a coordinate system moving with a constant velocity (i.e. an inertial frame) the force remains a zero vector (the particle does not accelerate), but now v is not a zero vector; the moving charged particle creates a magnetic field and a compensating electric field. Seen from the moving frame, then, E is not a zero field. No physical vector field can behave in this way; indeed, electric and magnetic fields should be dealt with together in space-time (in R⁴, the xyzt-space).
Even though electro-magnetic fields then cannot be modelled as vector fields, they can be modelled as 2-form fields of R⁴, the so-called Faraday form field and Maxwell form field⁴

E₁ dx ∧ dt + E₂ dy ∧ dt + E₃ dz ∧ dt + B₁ dy ∧ dz + B₂ dz ∧ dx + B₃ dx ∧ dy

and

−c²B₁ dx ∧ dt − c²B₂ dy ∧ dt − c²B₃ dz ∧ dt + E₁ dy ∧ dz + E₂ dz ∧ dx + E₃ dx ∧ dy,

where c is the speed of light in vacuum, often written simply—with only a slight abuse of notation—as

Φ_Faraday = Φ_{E–work} ∧ dt + Φ_{B–flux}   and   Φ_Maxwell = Φ_{E–flux} − c² Φ_{B–work} ∧ dt.
⁴Michael Faraday and James Clerk Maxwell certainly did not consider form fields like these. The names probably originated in the celebrated book MISNER & THORNE & WHEELER. The exact definitions of these forms seem to vary a bit in the literature, the gist remaining the same, of course.
"Stokes theorem was stated by Sir George Stokes as a Cambridge examination question, having been raised by Lord Kelvin in a letter to Stokes in 1850."
(A Dictionary of Science, Oxford University Press, 1999)
Chapter 5
GENERALIZED STOKES’ THEOREM
Many integration theorems in basic courses on mathematics share a common overall form:

∫ₐᵇ (df(x)/dx) dx = f(b) − f(a)   (Fundamental Theorem of Integral Calculus)

∫_{r₁}^{r₂} ∇f • ds = f(r₂) − f(r₁)   (Gradient Theorem)

∫_{A⃗} (∂F₂(r)/∂x − ∂F₁(r)/∂y) dr = ∮_{∂A⃗} F(r) • ds   (Green's Theorem)

∫_{A⃗} ∇ • F(r) dr = ∮_{∂A⃗} F(r) • dS   (Gauß' Theorem)

∫_{S⃗} ∇ × F(r) • dS = ∮_{∂S⃗} F(r) • ds   (Stokes' Theorem)
The common features are the following:

• In the left hand side the integration region is a manifold, possibly in a relaxed parametrization, and oriented.

• In the left hand side the integrand is a derivative of some sort, coupled with the manifold via the tangent space and a suitable form (as a form field).

• In the right hand side the integration region (or summation region) is the oriented "boundary" of the left hand side manifold, possibly in a relaxed parametrization. Note that in the first two theorems an oriented boundary also appears: one of the points has a plus sign and the other one a minus sign (orientation).

• In the right hand side the integrand (or summand) is a field appearing in the left hand side, differentiated in some way.
To get a coherent form for all these theorems, and others, we must define a general derivative for form fields, and the concept of an oriented boundary over which form fields can be integrated. The final result then is the Generalized Stokes Theorem.
5.1 Regions with Boundaries and Their Orientation
In general the boundary¹ of a k-dimensional manifold M of Rⁿ is easily defined. It consists of all points p satisfying the following two conditions:

(i) p is not in M.

(ii) Any open subset containing p also contains a point of M. In particular this is true for p-centered open balls.

The boundary of a manifold may be empty. E.g. the boundary of a sphere in R³ is empty. On the other hand, the boundary of a manifold need not be a (lower-dimensional) manifold. E.g. the boundary of the open square 0 < x, y < 1 of R² is the perimeter of the square, and thus not a manifold. In fact, the boundary of a bounded manifold can have a very complicated structure; it may e.g. not have a Jordan measure. A more serious problem often is the existence of tangent spaces, needed for the orientation.
A better way to start is to define the boundary of a subset within a manifold. This is so, because inside the manifold its smoothness and tangent spaces can be used for the orientation. The boundary of a subset A ⊂ M within the manifold, denoted by ∂_M A, consists of exactly all points of M such that every open set containing the point contains both a point of A and a point of M − A. (Cf. the condition (ii) above.) Note that a point of the boundary ∂_M A may itself be in A.

(Figure: a subset A of a manifold M, with a smooth point and a nonsmooth point of the boundary marked.)
Consider then a point p₀ of the boundary ∂_M A. Since the point p₀ is in the manifold M, for some open subset B containing the point the intersection M ∩ B is a local locus, cf. Theorem 2.4. In other words, there is a continuously differentiable function F_{p₀} : B → R^{n−k} such that F′_{p₀} has full rank and p is in M ∩ B if and only if F_{p₀}(p) = 0. There may be several such functions F_{p₀} to choose from. The point p₀ is called a smooth point of the boundary ∂_M A if, in addition to the function F_{p₀}, another continuously differentiable function g_{p₀} : B → R can be chosen such that

1. g′_{p₀} cannot be given as a linear combination of the rows of F′_{p₀}, and

2. the point p is in the intersection A ∩ B exactly when

F_{p₀}(p) = 0   and   g_{p₀}(p) ≥ 0.

The subset of the boundary ∂_M A consisting of exactly all smooth points is called the smooth boundary of A in the manifold M, and denoted by ∂ˢ_M A.
The definition is easily interpreted even when k = n. Item 1. is then not needed and the only condition is g_{p₀}(p) ≥ 0.
¹Note that this (usually) is not the boundary as a boundary of a set in Rⁿ.
Example. The locus condition

F(x, y, z) = z − xy = 0

determines in R³ a smooth surface (a 2-dimensional manifold) S, one of the so-called saddle surfaces. The condition ‖p‖ ≤ 1 specifies a subset A. The boundary ∂_S A is then the locus given by the conditions

z − xy = 0   and   1 − x² − y² − z² = 0,

in fact a 1-dimensional manifold. See the figure above (Maple). All points of the boundary ∂_S A are smooth points, since g can be chosen as

g(x, y, z) = 1 − x² − y² − z²,

and

F′(x, y, z) = (−y, −x, 1)   and   g′(x, y, z) = (−2x, −2y, −2z)

are locally linearly independent (check!). Thus ∂ˢ_S A = ∂_S A.
Example. The square in R³ given by

N : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, z = 1

is a subset of the plane z = 1 (a 2-dimensional manifold). Its boundary is the perimeter, i.e., the union of the four line segments

0 ≤ x ≤ 1, y = 0, z = 1,
0 ≤ x ≤ 1, y = 1, z = 1,
x = 0, 0 ≤ y ≤ 1, z = 1   and
x = 1, 0 ≤ y ≤ 1, z = 1.

With the exception of the vertices

(0, 0, 1),  (0, 1, 1),  (1, 0, 1)  and  (1, 1, 1),

points of the boundary are smooth; the function g can be taken as

y,  1 − y,  x  and  1 − x,

respectively, and F as

F(x, y, z) = z − 1.
It is to be anticipated in these examples that

Theorem 5.1. The smooth boundary ∂ˢ_M A of a subset A of a k-dimensional manifold M of Rⁿ is a (k−1)-dimensional manifold (or empty).
Proof. We use the (local) notation above, and consider an arbitrary point p₀ of the smooth boundary. By Corollary 2.3, the condition

F_{p₀}(p) = 0   and   g_{p₀}(p) = 0

then determines a (k−1)-dimensional manifold K in some open subset B₁ ⊆ B containing p₀. We need to show now that the intersection K ∩ B₁ equals the intersection ∂ˢ_M A ∩ B₁. Since p₀ was an arbitrary point of the smooth boundary, this shows that ∂ˢ_M A is a local locus and thus a manifold.

Take first a point p₁ in the intersection ∂ˢ_M A ∩ B₁, and show that it is also in the intersection K ∩ B₁. Let us assume the contrary: p₁ is not in K ∩ B₁. Then

g_{p₀}(p₁) ≠ 0.

(Since p₁ is in M, we have F_{p₀}(p₁) = 0.) By continuity then g_{p₀}(p) ≠ 0 also in some (small) open subset B₂ containing p₁. Thus in the set B₂, g_{p₀} has only positive values or only negative values. But then B₂ cannot contain both points of A and points of M − A, and p₁ cannot be a point of the boundary of A in M. This contradiction shows that p₁ ∈ K ∩ B₁.

Proving the inclusion in the other direction, we take a point p₂ in the intersection K ∩ B₁, and show that it is also in the intersection ∂ˢ_M A ∩ B₁. Assume again the contrary: p₂ ∉ ∂ˢ_M A ∩ B₁. Now g_{p₀}(p₂) = 0, and in some (small) open subset B₃ containing p₂ the function g_{p₀} has either only positive values or only negative values; otherwise p₂ would be a point of the smooth boundary (take F_{p₂} = F_{p₀} and g_{p₂} = g_{p₀}). This means that p₂ is a local solution of the constrained optimization problem

g_{p₀}(p) = extremum!   subject to   F_{p₀}(p) = 0

(either a point of local maximum or local minimum, the local extremum value being = 0). Using the method of Lagrange's multipliers, we introduce the Lagrange function

L(p, λ) = g_{p₀}(p) + λ F_{p₀}(p)ᵀ,

where λ is the (row) vector of Lagrange multipliers. In the point (p₂, λ₂) of local extremum its derivative is zero:

∂L(p₂, λ₂)/∂p = g′_{p₀}(p₂) + λ₂ F′_{p₀}(p₂) = 0
∂L(p₂, λ₂)/∂λ = F_{p₀}(p₂) = 0.

This cannot possibly be true since the first equation tells that g′_{p₀}(p₂) is a linear combination of the rows of F′_{p₀}(p₂). This contradiction proves that p₂ ∈ ∂ˢ_M A ∩ B₁.
Finally we can now define what would be a proper region of integration. A subset A of a k-dimensional manifold M of Rⁿ is its region with boundary, if

(1) A is a closed subset of Rⁿ. All points of the boundary ∂_M A are then in A.
(2) The set ∂_M A − ∂ˢ_M A (i.e. the nonsmooth points of the boundary) is a (k−1)-null-set of Rⁿ, see Section 3.1. Nonsmooth points do not affect integrals over the boundary of A. This condition is needed, because it is even possible that all points of the boundary ∂_M A are nonsmooth. E.g., the figure on the right shows the so-called Koch snowflake; its boundary is everywhere nonsmooth and has infinite length.

(3) For each positive number ε (however small) there is an open subset B_ε such that B_ε contains all nonsmooth points of the boundary ∂_M A and the (k−1)-dimensional volume of ∂ˢ_M A ∩ B_ε is < ε. Note that ∂ˢ_M A ∩ B_ε is a (k−1)-dimensional manifold by Theorem 2.2. This strange-looking condition is needed because it is possible that the (k−1)-dimensional volume of the smooth boundary is = ∞, even when restricted to arbitrarily small open subsets containing a smooth point. In some sense the condition prevents integrals from being improper.
For instance, the obviously nonsmooth boundary points of the subset (of R²) in the figure on the right (Maple) are the two vertices, and the origin to which the two spiralling curves

r = 1/φ   and   r = 1/(1 + φ)   (1 ≤ φ < ∞)

(hyperbolic spirals, in polar coordinate parametrization) converge. Now conditions (1) and (2) are satisfied, but not condition (3), since the spirals have infinite length in any open set containing the origin.
An important rôle in integration is played by the orientation of the boundary ∂ˢ_M A and the way it is related to the orientation of the manifold M. Of course, the boundary can be oriented only in its smooth points.

In a smooth point p₀, a vector t of the tangent space T_{p₀}(M) of the manifold M is called an exterior vector of A if

g′_{p₀}(p₀) • t < 0,

and an interior vector of A if

g′_{p₀}(p₀) • t > 0.

These names are apt. The function g_{p₀} increases when we move locally near the point p₀ over the boundary of the region A from outside to inside: outside g_{p₀} has a negative value, inside a positive one, and on the boundary it is = 0. Thus, if g′_{p₀}(p₀) • t < 0, then the direction of the vector t is opposite to the direction of increasing g_{p₀}, that is, away from the region A. Similarly, if g′_{p₀}(p₀) • t > 0, then the direction of the vector t is the direction of increasing g_{p₀}, into the region A.
s
Using exterior vectors we can couple an orientation for the smooth boundary ∂M
A and an
orientation of M. Cf. Section 4.3. Consider then the case of a manifold M oriented by a k-form
field Φ. Thus in its points p vectors
t1 (p) , . . . , tk (p),
of its tangent space Tp (M) can be chosen such that
Φ p; t1 (p), . . . , tk (p)
is continuous with respect to p and 6= 0 in M.
s
Orienting the boundary ∂M
A in its smooth points p then means choosing some k − 1-form
field Φboundary , the smooth boundary here being a k − 1-dimensional manifold, and vectors
s1 (p) , . . . , sk−1 (p)
s
of the tangent space Tp (∂M
A) (depending on p) such that
Φboundary p; s1 (p), . . . , sk−1 (p)
is continuous with respect to p in the smooth boundary and 6= 0. We choose
Φboundary (p; s1 , . . . , sk−1 ) = Φ p; texterior (p), s1 , . . . , sk−1 ,
where texterior (p) is a suitable exterior vector depending on p. This choice couples orientations
s
of the boundary ∂M
A and the manifold M. This oriented boundary of a region with boundary
→
−
−
→
of an oriented manifold M is denoted by ∂ s−→ A.
M
Note that the vectors
s1 (p), . . . , sk−1(p)
are in the tangent space of the manifold M in the point p, so that the form field Φ is available.
s
Indeed, the tangent space of of the smooth boundary ∂M
A in the point p is the null space of the
matrix
!
F′p0 (p)
,
gp′ 0 (p)
cf. Theorem 2.8 and the proof of Theorem 5.1. Thus it is included in the null space of the matrix
F′p0 (p), which again is the tangent space of the manifold M. What then is needed is a basis
texterior (p), s1 (p), . . . , sk−1 (p)
for the tangent space of M in the points p of its smooth boundary having the same ”handidness”
as the basis t1 (p), . . . , tk (p).
Example. A parametrized smooth space curve

C : p = γ(u)  (u ∈ U)

is oriented using a 1-form field

Φ(p; r) = F(p) • r,

choosing

F(γ(u)) = γ′(u).

The resulting 1-form has the value ‖γ′(u)‖² > 0. The smooth boundary of a region with boundary A of C (part of the curve) consists of disjoint points. A "positive" orientation in a smooth point p = γ(u) of the boundary takes place by the form field condition

Φ_boundary(p) = γ′(u) • t_exterior(p) > 0.

In particular, if the smooth boundary of A consists of two points, the initial point and the terminal point, the initial point is "negative" and the terminal point is "positive". See the figure above.
Example. A smooth parametrized surface

S : r = γ(u)  (u ∈ U)

is oriented using a 2-form field

Φ(p; r₁, r₂) = n(p) • r₁ × r₂,

where

n(p) = n(γ(u)) = ∂γ(u)/∂u₁ × ∂γ(u)/∂u₂

is the normal vector in the point p. The value of the 2-form is then

‖∂γ(u)/∂u₁ × ∂γ(u)/∂u₂‖²

and it is always positive. The smooth boundary of a region with boundary A of S consists of smooth space curves, and it is oriented in a smooth point p = γ(u) by the form field condition

Φ_boundary(p; s(p)) = n(p) • t_exterior(p) × s(p) > 0,

where s(p) is the tangent vector of the boundary (curve) in the point p. The vectors

n(p),  t_exterior(p)  and  s(p)

then form a right-handed system, and the orientation is given by the familiar "right hand rule", see the figure below.
Example. The tangent space of a 3-dimensional manifold (or open subset) M of R³ always consists of the whole vector space R³. A region with boundary (or a solid body) A of M can be oriented, say, by the 3-form field condition

Φ(p; r₁, r₂, r₃) = det(r₁, r₂, r₃) > 0.

The boundary ∂ˢ_M A consists of smooth surfaces. The exterior vector in the point p could be the exterior normal n(p) of the surface, and the tangent vectors s₁(p) and s₂(p) could be the spanning vectors of its tangent plane. If the orienting condition is

det(n(p), s₁(p), s₂(p)) = n(p) • s₁(p) × s₂(p) > 0,

then it is all about orienting the boundary (surface) in such a way that the normal vector n(p) is the exterior normal, and that

n(p),  s₁(p)  and  s₂(p)

form a right-handed system.
Example. Similarly, orienting an n-dimensional manifold M of Rⁿ and the smooth boundary of its region with boundary A can be done by the condition

det(n(p), s₁(p), . . . , s_{n−1}(p)) > 0,

where n(p) is a vector (exterior normal) of the normal space and s₁(p), . . . , s_{n−1}(p) form a basis of the tangent space of the smooth boundary ∂ˢ_M A in the point p.
5.2 Exterior Derivatives
We still need a sufficiently general derivative, the so-called exterior derivative. The usual derivative of a univariate function f is defined as the limit of the difference quotient:

f′(x) = lim_{h→0} (f(x + h) − f(x))/h.

In basic courses of calculus the corresponding 1-form field, the so-called differential

(df)(x; r) = f′(x) r,

is also introduced, and it, too, can be defined as a difference quotient:

f′(x) r = lim_{h→0} (f(x + hr) − f(x))/h.
The exterior derivative is an extension of this to k-form fields. Note in particular that the function f is a 0-form field and that the differential df is a 1-form field. Taking an exterior derivative increases the degree of a form by one.

The interval

[x, x + h]   (or [x + h, x])

appearing in the difference quotient is replaced by a (k+1)-dimensional (closed) parallelepiped

P(p; r₁, . . . , r_{k+1}) = { p + Σ_{i=1}^{k+1} uᵢrᵢᵀ | 0 ≤ u₁, . . . , u_{k+1} ≤ 1 }.

For the real line this is a closed interval, in the plane R² a parallelogram, and in the space R³ a "geometric" parallelepiped. Here k + 1 ≤ n, so we could be dealing with a parallelepiped embedded in a higher-dimensional space.

Embedded in the affine subspace

A(p; r₁, . . . , r_{k+1}) = { p + Σ_{i=1}^{k+1} uᵢrᵢᵀ | u₁, . . . , u_{k+1} ∈ R }
of Rⁿ (a (k+1)-dimensional manifold of Rⁿ) the parallelepiped P(p; r₁, . . . , r_{k+1}) is a region with boundary whose boundary consists of faces. There are 2(k + 1) faces and they appear in pairs; the i-th pair is

Tᵢ⁻ = { p + Σ_{i=1}^{k+1} uᵢrᵢᵀ | 0 ≤ u₁, . . . , u_{k+1} ≤ 1 and uᵢ = 0 }

and

Tᵢ⁺ = { p + Σ_{i=1}^{k+1} uᵢrᵢᵀ | 0 ≤ u₁, . . . , u_{k+1} ≤ 1 and uᵢ = 1 }.

In the former face rᵢ is an interior vector and in the latter it is an exterior vector. These faces thus have opposite orientations, and it is agreed that A(p; r₁, . . . , r_{k+1}) is oriented according to the orientations of the faces Tᵢ⁺ by the exterior vectors rᵢ.
The smooth boundary consists of the faces minus their edges. By Theorem 3.3 the edges are k-null-sets, and by Theorem 5.1 the smooth boundary is a k-dimensional manifold. The oriented sum in the difference quotient, where f(x + hr) has a plus sign and f(x) a minus sign, is now replaced by an integral over the oriented (smooth) boundary of the parallelepiped. Alternation of the signs is included, too, since the faces Tᵢ⁻ and Tᵢ⁺ always have opposite orientations.

We thus get the exterior derivative of a k-form field Φ as

(dΦ)(p; r₁, . . . , r_{k+1}) = lim_{h→0} (1/h^{k+1}) ∫_{∂ˢ_{A⃗} P_h} Φ,

where, for brevity, we denoted

A = A(p; r₁, . . . , r_{k+1})   and   P_h = P(p; hr₁, . . . , hr_{k+1}).

Note that the integration region can be given by a relaxed parametrization, so that the integral is of the form in Section 4.2. (As an oriented sum if k = 0.) In addition it is agreed that if k + 1 > n, then the exterior derivative is = 0.
Exterior derivative calculation is tedious starting from the very definition, as often is calculation of the usual univariate derivative, too. To make things simpler, differentiation rules can
be given. Together they form a sufficiently extensive toolbox to make derivative calculations
relatively easy. Proving these rules then can be quite tedious.
Let us give a basic collection of differentiation rules. The vector variables r1 , . . . , rk are
mostly omitted for brevity.
(I) If Φ and Ψ are k-form fields, and c₁ and c₂ are constants, then

d(c₁Φ + c₂Ψ) = c₁(dΦ) + c₂(dΨ).

Taking an exterior derivative thus is a linear operation. This follows directly from the definition by the integral.

(II) If a k-form field Φ is constant (i.e. a constant form), then dΦ is a (k+1)-form field having only the value 0 (i.e. a zero form field).

As is usual, the derivative of a constant is zero. This, too, follows directly from the definition by the integral, since as a consequence of the opposite orientations of the pairs of faces the integral is then always = 0. (The integrand has the same value at the matching points of the faces.)
(III) The exterior derivative of a function f or a 0-form field is

(df)(p; r) = f′(p) r = Σ_{i=1}^{n} (∂f(p)/∂xᵢ) dxᵢ(r).

According to the definition this exterior derivative is the limit

lim_{h→0} (f(p + hrᵀ) − f(p))/h = (d/dt) f(p + trᵀ)|_{t=0} = f′(p) r

(by the chain rule). In particular, if ‖r‖ = 1, this is the usual directional derivative in the direction of r.
(IV) If f is a function, then

d(f(p) dx_{j₁} ∧ · · · ∧ dx_{jₖ}) = (df)(p; ·) ∧ dx_{j₁} ∧ · · · ∧ dx_{jₖ} = Σ_{i=1}^{n} (∂f(p)/∂xᵢ) dxᵢ ∧ dx_{j₁} ∧ · · · ∧ dx_{jₖ}.

This is the key rule since, combined with the previous rules, it makes it possible to calculate the exterior derivative starting from the expansion of the form field as a linear combination of elementary form fields, and in fact it also shows that the exterior derivative is a (k+1)-form field. The proof of this rule (even via Theorem 6.1) is a bit tedious, see e.g. HUBBARD & HUBBARD.
(V) The (k+2)-form field d²Φ = d(dΦ), obtained by taking the exterior derivative of a k-form field Φ twice, is a constant form field having the value 0 (a zero form field).

Form fields are thus always "of the first degree" in a sense: differentiating twice always gives zero. This is a consequence of the previous rules, since if f is a function, then

(d²f)(p; ·, ·) = Σ_{i=1}^{n} d(∂f(p)/∂xᵢ) ∧ dxᵢ = Σ_{j=1}^{n} Σ_{i=1}^{n} (∂²f(p)/∂xⱼ∂xᵢ) dxⱼ ∧ dxᵢ = 0.

(Recall that dxᵢ ∧ dxⱼ = −dxⱼ ∧ dxᵢ and dxᵢ ∧ dxᵢ = 0.)
Using these rules, calculation of the exterior derivative of a k-form field

Φ(p; r₁, . . . , rₖ) = Σ_{1≤j₁<j₂<···<jₖ≤n} a_{j₁,j₂,...,jₖ}(p) (dx_{j₁} ∧ dx_{j₂} ∧ · · · ∧ dx_{jₖ})(r₁, . . . , rₖ)

is basically straightforward and reduces to the calculation of partial derivatives.
Note. As a matter of fact, the whole exterior derivative could be defined via these rules. It is, however, nice to see that it is a generalization of the ordinary univariate derivative. And the definition by the integral does give a "geometric" way of considering the exterior derivative, not just symbolic machinery.
Example. Let us calculate the exterior derivative of the 2-form field

Φ = x₁x₂ dx₂ ∧ dx₄ − x₂² dx₃ ∧ dx₄

of R⁴. (For brevity p = (x₁, x₂, x₃, x₄) as well as r₁ and r₂ are omitted.) Applying the rules we get

dΦ =(I) d(x₁x₂ dx₂ ∧ dx₄) − d(x₂² dx₃ ∧ dx₄)
  =(IV) (∂(x₁x₂)/∂x₁ dx₁ + ∂(x₁x₂)/∂x₂ dx₂ + ∂(x₁x₂)/∂x₃ dx₃ + ∂(x₁x₂)/∂x₄ dx₄) ∧ dx₂ ∧ dx₄
       − (∂(x₂²)/∂x₁ dx₁ + ∂(x₂²)/∂x₂ dx₂ + ∂(x₂²)/∂x₃ dx₃ + ∂(x₂²)/∂x₄ dx₄) ∧ dx₃ ∧ dx₄
  = (x₂ dx₁ + x₁ dx₂) ∧ dx₂ ∧ dx₄ − 2x₂ dx₂ ∧ dx₃ ∧ dx₄
  = x₂ dx₁ ∧ dx₂ ∧ dx₄ + x₁ dx₂ ∧ dx₂ ∧ dx₄ − 2x₂ dx₂ ∧ dx₃ ∧ dx₄
  = x₂ dx₁ ∧ dx₂ ∧ dx₄ − 2x₂ dx₂ ∧ dx₃ ∧ dx₄.
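The result can be cross-checked numerically via rule (IV): every coefficient of dΦ is a partial derivative of a coefficient of Φ. In the sketch below (an illustration, not from the original notes) the partial derivatives are taken by central differences and the elementary 3-forms are evaluated as 3 × 3 determinants; the row order of the determinant automatically carries the sign of dx_l ∧ dx_{j₁} ∧ dx_{j₂}:

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.standard_normal(4)                # a random point of R^4
R = rng.standard_normal((4, 3))           # columns are the vectors r1, r2, r3

def elem3(idx):
    """(dx_i ^ dx_j ^ dx_k)(r1, r2, r3); the row order carries the sign."""
    return np.linalg.det(R[list(idx), :])

# coefficients of Phi = x1 x2 dx2 ^ dx4 - x2**2 dx3 ^ dx4 (0-based indices)
coeffs = {(1, 3): lambda x: x[0]*x[1], (2, 3): lambda x: -x[1]**2}

# dPhi assembled by rule (IV), derivatives by central differences
h, numeric = 1e-6, 0.0
for J, a in coeffs.items():
    for l in range(4):
        e = np.zeros(4); e[l] = h
        da = (a(p + e) - a(p - e)) / (2*h)
        numeric += da * elem3((l,) + J)   # vanishes automatically if l is in J

# the answer computed by hand above: x2 dx1^dx2^dx4 - 2 x2 dx2^dx3^dx4
exact = p[1]*elem3((0, 1, 3)) - 2*p[1]*elem3((1, 2, 3))
print(np.isclose(numeric, exact))         # True
```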
For exterior derivatives of wedge products of form fields we have the famous

Cartan's Magic Formula.² The exterior derivative of the wedge product of a k-form field Φ and an l-form field Ψ is given by

d(Φ ∧ Ψ) = (dΦ) ∧ Ψ + (−1)ᵏ Φ ∧ (dΨ).

Proof. By the definition of the wedge product it suffices to show the rule for elementary form fields

Φ = a(p) dx_{j₁} ∧ · · · ∧ dx_{jₖ}   and   Ψ = b(p) dx_{h₁} ∧ · · · ∧ dx_{h_l}

and their wedge product

Φ ∧ Ψ = a(p)b(p) dx_{j₁} ∧ · · · ∧ dx_{jₖ} ∧ dx_{h₁} ∧ · · · ∧ dx_{h_l}.

By rule (IV)

d(Φ ∧ Ψ) = Σ_{i=1}^{n} (∂(a(p)b(p))/∂xᵢ) dxᵢ ∧ dx_{j₁} ∧ · · · ∧ dx_{jₖ} ∧ dx_{h₁} ∧ · · · ∧ dx_{h_l}
  = Σ_{i=1}^{n} ((∂a(p)/∂xᵢ) b(p) + (∂b(p)/∂xᵢ) a(p)) dxᵢ ∧ dx_{j₁} ∧ · · · ∧ dx_{jₖ} ∧ dx_{h₁} ∧ · · · ∧ dx_{h_l}
  = Σ_{i=1}^{n} (∂a(p)/∂xᵢ) dxᵢ ∧ dx_{j₁} ∧ · · · ∧ dx_{jₖ} ∧ b(p) dx_{h₁} ∧ · · · ∧ dx_{h_l}
    + a(p) dx_{j₁} ∧ · · · ∧ dx_{jₖ} ∧ (−1)ᵏ Σ_{i=1}^{n} (∂b(p)/∂xᵢ) dxᵢ ∧ dx_{h₁} ∧ · · · ∧ dx_{h_l}
  = (dΦ) ∧ Ψ + (−1)ᵏ Φ ∧ (dΨ).

²La "formule magique de Cartan". Élie Cartan was the father of form theory.
5.3 Exterior Derivatives of Physical Form Fields
As noted in Section 4.4, the basic form fields of a vector field F(p) in R³ are the work form field

Φ_{F–work}(p; r) = F(p) • r   (a 1-form field)

and the flux form field

Φ_{F–flux}(p; r₁, r₂) = det(F(p), r₁, r₂) = F(p) • r₁ × r₂

(a 2-form field), and for a scalar field f(p) the basic form field (in addition to the field itself as a 0-form field) is the density form field

Φ_{f–density}(p; r₁, r₂, r₃) = f(p) det(r₁, r₂, r₃)

(a 3-form field). The connection between these basic form fields and exterior derivatives is the following:

(A) df = Φ_{∇f–work}   (df is the work form field of the gradient ∇f.)

(B) dΦ_{F–work} = Φ_{∇×F–flux}   (dΦ_{F–work} is the flux form field of the curl ∇ × F.)

(C) dΦ_{F–flux} = Φ_{∇•F–density}   (dΦ_{F–flux} is the density form field of the divergence ∇ • F.)
All these can be proved by straightforward calculation. (A) follows immediately from rule (III). (B) is shown as follows:

dΦ_{F–work} = d(F₁ dx + F₂ dy + F₃ dz) = (dF₁) ∧ dx + (dF₂) ∧ dy + (dF₃) ∧ dz
  = (∂F₁/∂x dx + ∂F₁/∂y dy + ∂F₁/∂z dz) ∧ dx
    + (∂F₂/∂x dx + ∂F₂/∂y dy + ∂F₂/∂z dz) ∧ dy
    + (∂F₃/∂x dx + ∂F₃/∂y dy + ∂F₃/∂z dz) ∧ dz
  = (∂F₃/∂y − ∂F₂/∂z) dy ∧ dz + (∂F₁/∂z − ∂F₃/∂x) dz ∧ dx + (∂F₂/∂x − ∂F₁/∂y) dx ∧ dy
  = Φ_{∇×F–flux}.

Item (C) is similar (left as an exercise for the reader).
The whole thing can be given as a condensed scheme:

    scalar field ─────────────────────────→ 0-form field
         │ gradient                               │ d
         ↓                                        ↓
    vector field ────(work form field)────→ 1-form field
         │ curl                                   │ d
         ↓                                        ↓
    vector field ────(flux form field)────→ 2-form field
         │ divergence                             │ d
         ↓                                        ↓
    scalar field ───(density form field)──→ 3-form field
According to the scheme, the curl of a gradient corresponds to taking the exterior derivative twice, as does the divergence of a curl. By rule (V) these are zero form fields, and indeed

∇ × ∇f = 0   and   ∇ • (∇ × F) = 0.

On the other hand, the other second order derivatives ∇ • (∇f) = ∇²f = ∆f (the Laplacian), ∇(∇ • F) and ∇ × (∇ × F) (the double curl) do not correspond to second exterior derivatives, and in general do not vanish.
Cartan's Magic Formula gives fairly directly the nabla formulas (i), (iii), (iv) and (v) in Section 1.5 since, as noted in Section 4.4, for scalar fields f and g, and vector fields F and G we have

f ∧ g = fg,   f ∧ Φ_{F–work} = Φ_{fF–work},   f ∧ Φ_{F–flux} = Φ_{fF–flux},
Φ_{F–work} ∧ Φ_{G–work} = Φ_{F×G–flux},   Φ_{F–work} ∧ Φ_{G–flux} = Φ_{F•G–density}.

For instance, the formula

(v) ∇ • (F × G) = (∇ × F) • G − F • (∇ × G)

is obtained as follows:

Φ_{∇•(F×G)–density} = dΦ_{F×G–flux} = d(Φ_{F–work} ∧ Φ_{G–work})
  = (dΦ_{F–work}) ∧ Φ_{G–work} + (−1)¹ Φ_{F–work} ∧ (dΦ_{G–work})
  = Φ_{∇×F–flux} ∧ Φ_{G–work} − Φ_{F–work} ∧ Φ_{∇×G–flux}
  = Φ_{(∇×F)•G–density} − Φ_{F•(∇×G)–density}
  = Φ_{((∇×F)•G − F•(∇×G))–density}.
Note. Formulas (vi) and (vii) in Section 1.5, on the other hand, are of a quite different form,
and do not follow directly from the Magic Formula. They are more closely related to another
rule of derivation given in Theorem 6.1.
Let us then give some physical intuition for the differential operators. Thinking about the definition of the exterior derivative as the limit of the integral in the previous section, it may be noticed that for a very small parallelogram P_h(p; r₁, r₂) (where h is small) embedded in R³ the scaled projection of the curl on the normal of the parallelogram (see the figure on the left below) is

(∇ × F(p)) • (hr₁) × (hr₂) ≅ ∫_{∂ˢ_{A⃗} P_h} Φ_{F–work} = ∫_U F(γ(u)) • γ′(u) du = ∮_{∂ˢ_{A⃗} P_h} F(r) • ds,

where the (relaxed) parametrization of ∂ˢ_{A⃗} P_h is r = γ(u) (u ∈ U).

(Figures: on the left a small parallelogram P_h with sides hr₁, hr₂ and exterior normal hr₁ × hr₂; on the right a small parallelepiped P_h with sides hr₁, hr₂, hr₃.)

The projection thus is approximately the line integral of the vector field around the perimeter of the parallelogram in the direction given by the right hand rule. For this reason the curl is sometimes also called vorticity or vortex density.

Similarly, in a very small (meaning h is small) parallelepiped P_h(p; r₁, r₂, r₃) of R³ the scaled divergence is

(∇ • F(p)) det(hr₁, hr₂, hr₃) ≅ ∫_{∂ˢ_{A⃗} P_h} Φ_{F–flux} = ∮_{∂ˢ_{A⃗} P_h} F(r) • dS.

Locally the divergence thus is the flux through the boundary of a very small parallelepiped from the inside to the outside, see the figure on the right above. This could be the change of mass of a fluid in the parallelepiped. For this reason the divergence is also called source density. It is a typical density since det(hr₁, hr₂, hr₃) is the volume of the parallelepiped.
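This local picture can be tested numerically: integrating F • ds around the perimeter of P_h(p; r₁, r₂) and dividing by h² should reproduce (∇ × F(p)) • r₁ × r₂. A sketch with an arbitrarily chosen field (an illustration, not from the original notes):

```python
import numpy as np

def F(p):
    x, y, z = p
    return np.array([y*z, -x*z, x*y])

def curl_F(p):                    # computed by hand for this F
    x, y, z = p
    return np.array([2*x, 0.0, -2*z])

p = np.array([0.3, -0.2, 0.5])
r1 = np.array([1.0, 0.0, 0.0])
r2 = np.array([0.0, 1.0, 1.0])
h = 1e-3

def edge(a, d, n=200):
    """Line integral of F along the segment from a to a + d (midpoint rule)."""
    t = (np.arange(n) + 0.5) / n
    return sum(F(a + ti*d) @ d for ti in t) / n

# perimeter of the parallelogram, traversed by the right hand rule
circ = (edge(p, h*r1) + edge(p + h*r1, h*r2)
        - edge(p + h*r2, h*r1) - edge(p, h*r2))

print(circ / h**2, curl_F(p) @ np.cross(r1, r2))   # approximately equal
```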
Combining the Hodge star (see Section 4.4) and exterior differentiation gives a handy tool, and a way to get higher derivatives. We see (check!) that in R³

d∗Φ_{F–work} = Φ_{∇•F–density}    and    ∗d∗Φ_{F–work} = ∇ • F,
d∗d∗Φ_{F–work} = Φ_{∇(∇•F)–work}    and    ∗d∗d∗Φ_{F–work} = Φ_{∇(∇•F)–flux},
d∗Φ_{F–flux} = Φ_{∇×F–flux}    and    ∗d∗Φ_{F–flux} = Φ_{∇×F–work},
d∗d∗Φ_{F–flux} = Φ_{∇×(∇×F)–flux}    and    ∗d∗d∗Φ_{F–flux} = Φ_{∇×(∇×F)–work},
d∗Φ_{f–density} = Φ_{∇f–work}    and    ∗d∗Φ_{f–density} = Φ_{∇f–flux},
d∗d∗Φ_{f–density} = Φ_{∆f–density}    and    ∗d∗d∗Φ_{f–density} = ∆f.
Electro-magnetic fields are governed by the famous Maxwell equations:

(M1) ∇ • D = ρ   (Gauß' law)

(M2) ∇ • B = 0   (Gauß' law for magnetism)

(M3) ∇ × E = −∂B/∂t   (Faraday's law)

(M4) ∇ × H = J + ∂D/∂t   (Ampère's law)

The various fields are connected by the so-called matter laws³

D = εE,   B = µH,   J = σE   (Ohm's law),

and

∇ • J = −∂ρ/∂t   (continuity equation).

As noticed in Section 4.4 these really are not physical vector fields, and certain 2-form fields would seem to better fit the bill. Let us compute the exterior derivatives of these form fields.

³We are dealing with the isotropic case only. Here E is the electric field [V/m], D is the electric flux density [As/m²], H is the magnetic field [A/m], B is the magnetic flux density [Vs/m²] (tesla), ε is the permittivity [As/V/m], µ is the permeability [Vs/A/m], ρ is the charge density [As/m³], J is the current density [A/m²], and σ is the conductivity [A/V/m].
First the Faraday 2-form field:

dΦ_Faraday = d(Φ_{E–work} ∧ dt) + dΦ_{B–flux}
  = d(E₁ dx ∧ dt) + d(E₂ dy ∧ dt) + d(E₃ dz ∧ dt) + d(B₁ dy ∧ dz) + d(B₂ dz ∧ dx) + d(B₃ dx ∧ dy)
  = Φ_{∇×E–flux} ∧ dt + Φ_{∇•B–density} + (∂B₁/∂t) dt ∧ dy ∧ dz + (∂B₂/∂t) dt ∧ dz ∧ dx + (∂B₃/∂t) dt ∧ dx ∧ dy
  =(M2) Φ_{∇×E–flux} ∧ dt + Φ_{∂B/∂t–flux} ∧ dt =(M3) 0.

The exterior derivative of the Maxwell 2-form field is obtained similarly:

dΦ_Maxwell = d(−c² Φ_{B–work} ∧ dt) + dΦ_{E–flux}
  = −c² Φ_{∇×B–flux} ∧ dt + Φ_{∇•E–density} + Φ_{∂E/∂t–flux} ∧ dt.

Assuming constant ε and µ (homogeneity), recalling that then

c = 1/√(εµ),

and applying the matter laws and (M1) and (M4), the exterior derivative simplifies (check!) to

dΦ_Maxwell = (1/ε)(Φ_{ρ–density} − Φ_{J–flux} ∧ dt).

This exterior derivative, the so-called current density form field, is often denoted as a Hodge dual ∗J. (What is then J?) Second exterior derivatives being zeros, we then also have

d∗J = 0   (charge conservation law).

All in all, we then note that (in the homogeneous case) Maxwell's equations can be written using the Faraday form field and the Maxwell form field in an utterly simple form:

dΦ_Faraday = 0   and   dΦ_Maxwell = ∗J.

(We did need all of Maxwell's equations in deriving these two equations!)
5.4 Generalized Stokes’ Theorem
We have now collected all pieces for the

Generalized Stokes' Theorem. If M⃗ is a (k+1)-dimensional oriented manifold of Rⁿ, A⃗ its bounded region with boundary, the smooth boundary ∂ˢ_{M⃗} A⃗ correspondingly oriented, and Φ a k-form field, continuously differentiable in some open subset containing A, then⁴

∫_{A⃗} dΦ = ∫_{∂ˢ_{M⃗} A⃗} Φ.

In case the smooth boundary ∂ˢ_{M⃗} A⃗ is empty, the right-hand side integral is = 0.
Proof. A rigorous proof consists of a long and complicated estimation⁵, see e.g. HUBBARD & HUBBARD. The rough idea however is the following. We approximate A by a set of very small (k+1)-dimensional parallelepipeds⁶

Pⱼ = P(pⱼ; r_{j,1}, . . . , r_{j,k+1})   (j = 1, . . . , N).

Parametrized these are

Pⱼ : γⱼ(u) = pⱼ + Σ_{i=1}^{k+1} uᵢ r_{j,i}ᵀ   (0 ≤ u₁, . . . , u_{k+1} ≤ 1),

whence

γⱼ′(u) = (r_{j,1}, · · · , r_{j,k+1}).

The larger the number of parallelepipeds is, the better the approximation:
⁴This is often written in the short form

∫_{A⃗} dΦ = ∫_{∂A⃗} Φ   or even   ∫_A dΦ = ∫_{∂A} Φ,

assuming all conventions about orientations, boundaries, etc.
⁵But cf. the example in Section A2.2.
⁶Why not just a triangulation or similar? A well-known counterexample in R² is Schwarz's lantern.
(figure by Maple). Thus, by the definition of the parametrized integral as a Riemann integral,

∫_A dΦ ≅ Σ_{j=1}^{N} (dΦ)(pⱼ; r_{j,1}, . . . , r_{j,k+1}).

On the other hand, by the definition of the exterior derivative (taking h = 1, since the parallelepipeds are very small),

(dΦ)(pⱼ; r_{j,1}, . . . , r_{j,k+1}) ≅ ∫_{∂ˢ_{M⃗} Pⱼ} Φ.

Exterior normals of faces shared by adjacent parallelepipeds are opposite, i.e. the faces have opposite orientations. Thus integrals over such faces cancel, and only integrals over faces approximating the smooth boundary of A remain. So

Σ_{j=1}^{N} ∫_{∂ˢ_{M⃗} Pⱼ} Φ ≅ ∫_{∂ˢ_{M⃗} A⃗} Φ.

Proving convergence of the various approximations is quite tedious.
Let us take the standard integral theorems of vector analysis as examples.
Example. (Gauß' Theorem or Divergence Theorem) As the manifold take an open subset M of R³ (oriented by the right hand rule), and as the region a correspondingly oriented subset (a solid) K⃗ whose smooth boundary ∂ˢ_{M⃗} K⃗ is oriented by exterior normals. Then

dΦ_{F–flux} = Φ_{∇•F–density},

and so, according to the Generalized Stokes Theorem,

∫_{K⃗} ∇ • F(r) dr = ∫_{K⃗} Φ_{∇•F–density} = ∮_{∂ˢ_{M⃗} K⃗} Φ_{F–flux} = ∮_{∂K⃗} F(r) • dS

(note the brief notation). It is important here that the exterior normal corresponds to the right-handedness of the coordinate system.
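The theorem is easy to test numerically in this concrete setting. The sketch below (field F and resolution chosen arbitrarily; an illustration, not from the original notes) compares the volume integral of ∇ • F over the unit cube with the outward flux through its six faces, both by midpoint Riemann sums:

```python
import numpy as np

def F(p):
    x, y, z = p
    return np.array([x*x, y*z, -z*x])

def div_F(p):                       # computed by hand for this F
    x, y, z = p
    return 2*x + z - x

n = 40
c = (np.arange(n) + 0.5) / n        # midpoints of an n-subdivision of [0, 1]

# volume integral of div F over the unit cube
vol = sum(div_F((x, y, z)) for x in c for y in c for z in c) / n**3

# outward flux through the six faces of the cube
flux = 0.0
for k in range(3):                  # faces with normal +-e_k
    for s, sign in [(1.0, 1.0), (0.0, -1.0)]:
        for a in c:
            for b in c:
                q = np.empty(3)
                q[k], q[(k + 1) % 3], q[(k + 2) % 3] = s, a, b
                flux += sign * F(q)[k]
flux /= n**2

print(vol, flux)                    # both approximately 1.0
```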
Example. (General Divergence Theorem) The Divergence Theorem is valid in any dimension in the form

∫_{K⃗} Φ_{∇•F–density} = ∮_{∂ˢ_{M⃗} K⃗} Φ_{F–flux}

(cf. also the example in Section A2.2). The 1-dimensional version is the usual Fundamental Theorem of Integral Calculus.

In particular the Divergence Theorem is valid in two dimensions. Then

Φ_{F–flux} = −F₂(p) dx + F₁(p) dy

and so

d(−F₂(p) dx + F₁(p) dy) = (∇ • F(p)) dx ∧ dy.
Thus, according to the Generalized Stokes Theorem,

∫_{A⃗} Φ_{∇•F–density} = ∫_{A⃗} ∇ • F(r) dr = ∮_{∂A⃗} (−F₂(r), F₁(r))ᵀ • ds.

Here the manifold is an open subset of R², and the region A is its subset whose smooth boundary is oriented by exterior normals. Note (check!) that then integration around the boundary (curve) of A is in the counterclockwise direction. The right hand side is sometimes written in one of the forms

∮_{∂A⃗} F(r) • dn = ∮_{∂A⃗} −F₂(r) dx + F₁(r) dy = ∮_{∂A⃗} F₁(r) dy − F₂(r) dx.
Example. (Green's Theorem) Applying the 2-dimensional Divergence Theorem above to the vector field

G(r) = (F₂(r), −F₁(r))ᵀ

(i.e., F(r) rotated by −90°) we get Green's Theorem:

∫_{A⃗} (∂F₂/∂x − ∂F₁/∂y) dr = ∫_{A⃗} ∇ • G(r) dr = ∮_{∂A⃗} (−G₂(r), G₁(r))ᵀ • ds = ∮_{∂A⃗} F(r) • ds.

Again integration around the boundary (curve) of A is in the counterclockwise direction (corresponding to the exterior normals).
Example. (Stokes' Theorem) As the manifold take an oriented surface S⃗ of R³, and as the region its correspondingly oriented part A⃗ whose smooth boundary ∂ˢ_{S⃗} A⃗ is oriented by the right-hand rule. Since

dΦ_{F–work} = Φ_{∇×F–flux},

then according to the Generalized Stokes Theorem

∫_{A⃗} ∇ × F(r) • dS = ∫_{A⃗} Φ_{∇×F–flux} = ∮_{∂ˢ_{S⃗} A⃗} Φ_{F–work} = ∮_{∂A⃗} F(r) • ds.

In case A = S is a closed surface having no smooth boundary, the right hand side is = 0.
These formulations of the classical integral theorems are fairly general; relaxed parametrizations and sufficiently continuous integrands are more or less what is needed. Manifolds and their regions with boundaries may consist of several separate parts, as may their smooth boundaries. Note, however, that in Stokes' Theorem the surface S must be orientable, so e.g. the Möbius band does not qualify.
There are also nonorientable closed surfaces, e.g. the so-called Klein bottles, a version of which is depicted below (Maple). This surface intersects itself; in R⁴ there are, however, versions of Klein's bottles which do not. Gauß' Theorem is not applicable to Klein's bottles.
These classical integral theorems are not the only ones; other similar results can be obtained by applying the Generalized Stokes Theorem to form fields of various kinds.
Example. (Vectoral Gauß' Theorem) If in the 3-form field

f(p) r₁ • r₂ × r₃

the vector variable r₁ is fixed as a constant vector a, we get the 2-form field

Φ(p; r₂, r₃) = f(p) a • r₂ × r₃

(a flux form field). Applying the Generalized Stokes Theorem to this, as we did for Gauß' Theorem, we get

∫_{K⃗} Φ_{∇•(fa)–density} = ∫_{K⃗} ∇ • (f(r)a) dr = ∮_{∂K⃗} f(r) a • dS.

Since

∇ • (f(r)a) = ∇f(r) • a

and a was an arbitrary constant vector (choosing a = i, j, k we get the coordinates), this result is often written in the vectoral form

∫_{K⃗} ∇f(r) dr = ∮_{∂K⃗} f(r) dS.
The classical integral theorems, and others, find their use in the manipulation of physical
formulas.
Example. By the Maxwell equation (M4), in the stationary case

∇ × H = J.

If the closed curve C⃗ (loop) is sufficiently smooth and oriented, there is an oriented surface S⃗, having C⃗ as its oriented boundary (via the right hand rule) and satisfying the assumptions of Stokes's Theorem, and

∮_{C⃗} H(r) • ds = ∫_{S⃗} ∇ × H(r) • dS = ∫_{S⃗} J(r) • dS.

The left hand side is the integral of the magnetic field around the loop, and the right hand side tells it is equal to the net current through the loop. This is the so-called Ampère Piercing Law.
Example. Similarly, by the Maxwell equation (M3),

∇ × E = −∂B/∂t.

So, as above, around a loop satisfying sufficient assumptions

∮_{C⃗} E(r, t) • ds = ∫_{S⃗} ∇ × E(r, t) • dS = −∫_{S⃗} (∂B(r, t)/∂t) • dS = −(d/dt) ∫_{S⃗} B(r, t) • dS.

Thus the electromotive force around a loop equals the time derivative of the net magnetic flux through the loop.
Example. As a first approximation—and often as a last one, too—in many fluid dynamics problems the fluid is assumed to be ideal, inviscid and incompressible. Let us consider a stationary flow. The so-called Thomson (Kelvin) circulation law tells us that, with certain assumptions, if the curl of the velocity is a zero vector initially, it will remain so afterwards. Thus then all the time

∇ × v = 0.

If C is a closed curve (loop) and a boundary of a surface satisfying the assumptions of the Stokes Theorem, and v is defined and continuously differentiable, then the integral around the loop is zero:

∮_{C⃗} v(r) • ds = 0.
Especially in fluid flow modelling it is often not possible to assume the existence of such a surface, even a virtual one. The reason is mostly an obstacle in the flow, a solid object, e.g. a wing, inside which there is no fluid and of course no flow either. (In electric fields a similar situation occurs when there is insulating material or a hole inside conducting material.) Wings and other obstacles in the flow can often be assumed to be infinitely long, say in the z-axis direction, which means a loop encircling the obstacle cannot be the boundary of a surface where the flow velocity is defined.

If, however, it is known that the flow velocity satisfies ∇ × v = 0, it can be shown that for any loop encircling the obstacle the line integral

Γ = ∮ v(r) • ds

always has the same value, and this can then be calculated using the best loop for the purpose.
This is important e.g. when the Zhukovsky⁷ lift law is applied. The law tells that the lift per unit length of an airfoil is

F = ρΓv∞

for air of density ρ flowing from infinity with speed v∞. The force is perpendicular to the velocity, as is familiar from airplane wings.

⁷Often Joukowsky or Joukowski or Schukowski etc.!
Independence of the value of the line integral of the loop is seen roughly as follows. If C⃗₁ and C⃗₂ are two loops encircling the same (infinitely long) obstacle, they can be cut and recombined using a double connecting line L into a single closed oriented curve C⃗, as in the figure below. (The connecting line is drawn here as two lines for clarity.) The closed loop C then is a boundary curve of a surface where the velocity v is defined, and since the curl vanishes,

∮_{C⃗} v(r) • ds = 0.

On the other hand, in the line integral over C⃗ the connecting line L is traversed twice in opposite directions. These two integrals over L cancel each other, and so (think about the orientations)

∮_{C⃗} v(r) • ds = ∮_{C⃗₂} v(r) • ds − ∮_{C⃗₁} v(r) • ds = 0.
Example. In an even-dimensional space R²ᵐ the 2-form field

Φ = Σ_{i=1}^{m} dxᵢ ∧ dx_{m+i}

is called the (standard) symplectic form. We see immediately that Φ = dΨ, where

Ψ = Σ_{i=1}^{m} xᵢ dx_{m+i}.

Ψ is called the (standard) symplectic potential⁸ (see the next chapter). Consider then a 2-dimensional oriented manifold M⃗ of R²ᵐ and its bounded region A, where the boundary of A has the (relaxed) parametrization

C : r = (γ₁(u), γ₂(u))  (u ∈ U)

and γ₁ contains the first m coordinates. By the Generalized Stokes Theorem then

∫_{A⃗} Φ = ∫_{A⃗} dΨ = ∮_{C⃗} Ψ = ∫_U γ₁(u) γ₂′(u) du.

In R² this also follows from Green's Theorem (and then the integral equals the area of A). Symplectic forms are important in mechanics where the phase spaces may have a quite high (even) dimension.
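In R² (m = 1) the last formula computes the area of A from its boundary curve alone: the integral of γ₁γ₂′ is ∮ x dy. A quick numerical sketch for an ellipse (an illustration, not from the original notes):

```python
import numpy as np

a, b = 3.0, 2.0                        # ellipse x = a cos u, y = b sin u
n = 100000
u = (np.arange(n) + 0.5) * 2*np.pi / n

gamma1 = a*np.cos(u)                   # first coordinate of the boundary
dgamma2 = b*np.cos(u)                  # derivative of the second coordinate

area = np.sum(gamma1 * dgamma2) * 2*np.pi / n
print(area, np.pi*a*b)                 # both approximately pi*a*b
```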
Note. The foremost problem in results like these is that in our formalism the regions of integration must in principle be embedded in larger manifolds, and that the fields must exist (as
continuous or continuously differentiable) in regions larger than the region of integration. In
many practical situations there are no such extensions, not even theoretical or artificial ones.
⁸And also the Liouville 1-form, the Poincaré 1-form, the canonical 1-form or the tautological 1-form, etc. As they say: "A dear child has many names."
"Why is this choice obvious? We are trying to undo a gradient (a derivative), so it is natural to integrate."
(J.H. HUBBARD & B.B. HUBBARD: Vector Calculus, Linear Algebra, and Differential Forms)
Chapter 6
POTENTIAL
6.1 Exact Form Fields and Potentials
For definite integrals the Generalized Stokes Theorem gives a characterization quite similar to the familiar Fundamental Theorem of Integral Calculus. But what about indefinite integrals? From basic calculus we know that

(d/dx) ∫ f(x) dx = ∫ (df(x)/dx) dx = f(x),

which says the indefinite integral is the inverse of the derivative, that is, the antiderivative. For form fields this would correspond to an inverse exterior derivative. Such an operation would come in handy in many cases, since exterior differentiation increases the degree of the form field by one, and thus makes it more complex.
A form field Φ is said to be exact if it is the exterior derivative of another form field Ψ, i.e. Φ = dΨ. The form field Ψ is then called a potential of Φ. Since always dΓ = 0 whenever Γ is a constant form field (rule (II) in Section 5.2), Ψ + Γ is another potential of Φ. A potential thus is not unique, but then neither is an indefinite integral. And often there are even more potentials of Φ.

Evidently, if Φ is exact, then dΦ = 0, because

dΦ = d²Ψ = 0.

Thus not all form fields are exact. On the other hand, as will be seen shortly, if dΦ = 0, then at least locally Φ is exact.¹
In this chapter, to not clutter the notation too much, we do not consistently use the column
array notation for vectors. Note, however, that mostly the rules of thumb ”point minus point is
a vector” and ”point plus vector is a point” hold.
Example. The Newton form field

Φ(p; r) = −C ((p − p₁) • r)/‖p − p₁‖³

(a 1-form field of R³), where C is a constant and p₁ a fixed point, is exact in any region not containing p₁. (The difference p − p₁ is here interpreted as a vector.) It is the exterior derivative of the scalar field (0-form field)

f(p) = C/‖p − p₁‖   (Newton's potential),

i.e. it is the work form field ∇f(p) • r. On the other hand, we have dΦ = 0, since the exterior derivative is the flux form field of the curl

∇ × (−C (p − p₁)/‖p − p₁‖³) = 0

(check!). Here of course

−C (p − p₁)/‖p − p₁‖³ = ∇f(p)

is the familiar Newton vector field. It will be remembered that according to Newton's law of gravitation Earth attracts a point mass m in p by the force

−GmM (p − p₁)/‖p − p₁‖³,

where p₁ is the center of Earth, M is Earth's mass, and G is the gravitational constant.

¹A form field whose exterior derivative is = 0 is often called closed.
Example. Newton’s vector field can also be connected to the 2-form field
Φ(p; r1 , r2 ) = −C
(p − p1 ) • r1 × r2
.
kp − p1 k3
This form field is exact, too, since it is a flux form field and its exterior derivative is the density
form field of the divergence
∇•
−C(p − p1 )
= 0 (check!).
kp − p1 k3
Here, however, existence of a potential may actually depend on the region, as will be seen.
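The three "check!"-type claims above (F = ∇f, ∇ × F = 0 and ∇ • F = 0 for the Newton vector field) are easy to verify with a computer algebra system. The following Python/SymPy snippet is a sketch of ours, with the simplifying assumptions C = 1 and p1 = 0:

    import sympy as sp

    # Newton's potential and vector field, assuming C = 1 and p1 = 0
    x, y, z = sp.symbols('x y z', real=True)
    r = sp.Matrix([x, y, z])
    norm = sp.sqrt(x**2 + y**2 + z**2)

    f = 1 / norm          # Newton's potential
    F = -r / norm**3      # the Newton vector field

    grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
    curl_F = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                        sp.diff(F[0], z) - sp.diff(F[2], x),
                        sp.diff(F[1], x) - sp.diff(F[0], y)])
    div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

    print(sp.simplify(grad_f - F))   # zero vector: F = grad f
    print(sp.simplify(curl_F))       # zero vector: F is irrotational
    print(sp.simplify(div_F))        # 0: F is solenoidal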
To deal with general potentials one more exterior differentiation rule is needed, in addition
to those in Section 5.2. This could also be used as a (coordinate-free) definition of the exterior
derivative.
Theorem 6.1. The exterior derivative of a k-form field Φ can be written as

   (dΦ)(p; r1, . . . , rk+1) = Σ_{i=1}^{k+1} (−1)^{i−1} Φ′(p; r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1) ri,

where Φ′ is the derivative of Φ with respect to the point variable p and the circumflex above a vector variable means that it is omitted from the list.
Proof. By Theorem 4.2, Φ can be expanded as a combination of elementary form fields as

   Φ(p; r1, . . . , rk) = Σ_{1≤j1<j2<···<jk≤n} aj1,j2,...,jk(p) (dxj1 ∧ dxj2 ∧ · · · ∧ dxjk)(r1, . . . , rk),

and it suffices to prove the formula for each term

   aj1,j2,...,jk(p) dxj1 ∧ dxj2 ∧ · · · ∧ dxjk.

By rule (IV) of Section 5.2,

   d(aj1,j2,...,jk(p) dxj1 ∧ · · · ∧ dxjk) = Σ_{l=1}^{n} (∂aj1,j2,...,jk/∂xl)(p) dxl ∧ dxj1 ∧ · · · ∧ dxjk.

On the other hand, by the definition of an elementary form,

   (dxl ∧ dxj1 ∧ · · · ∧ dxjk)(r1, . . . , rk+1) =
      | xl,1    xl,2    · · ·   xl,k+1  |
      | xj1,1   xj1,2   · · ·   xj1,k+1 |
      |   :       :      . .       :    |
      | xjk,1   xjk,2   · · ·   xjk,k+1 |

When this determinant is expanded along the first row, we get the sum

   Σ_{i=1}^{k+1} (−1)^{i−1} xl,i (dxj1 ∧ · · · ∧ dxjk)(r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1).

Exchanging the summations Σ_{l=1}^{n} and Σ_{i=1}^{k+1} we then get the differentiation formula.
The general result will be stronger than just a local one: it will deal with the so-called star-shaped sets. A subset A of Rn is said to be star-shaped with respect to the point p0 if, whenever a point p is in A, the whole line segment connecting it to p0, i.e. the set

   { p0 + t(p − p0) | 0 ≤ t ≤ 1 },

is in A. Some familiar star-shaped regions (with respect to which points?), and some familiar non-star-shaped regions, are shown in the figures.

The operation corresponding to indefinite integration of a k+1-form field Φ in a star-shaped region with respect to the point p0 is

   (IΦ)(p; r1, . . . , rk) = ∫_0^1 t^k Φ(p0 + t(p − p0); p − p0, r1, . . . , rk) dt.
Poincaré’s Lemma. If dΦ = 0 in a star-shaped region A with respect to the point p0 , then Φ
is exact in the region and Φ = d(IΦ).
Proof. The proof is a bit tricky. For brevity of notation, let us just consider the case p0 = 0; the general case is quite similar. Take the coordinate representation

   p = Σ_{l=1}^{n} pl el,

whence by multilinearity

   Φ(tp; p, r1, . . . , rk) = Σ_{l=1}^{n} pl Φ(tp; el, r1, . . . , rk)

and by differentiating

   (Φ(tp; p, r1, . . . , rk))′ = q + Σ_{l=1}^{n} t pl Φ′(tp; el, r1, . . . , rk) = q + t Φ′(tp; p, r1, . . . , rk),

where

   q = (Φ(tp; e1, r1, . . . , rk), . . . , Φ(tp; en, r1, . . . , rk)).

Let us first calculate d(IΦ) as in Theorem 6.1 using the above derivative and exchanging exterior derivation and integration²:

   (d(IΦ))(p; r1, . . . , rk+1)
   = Σ_{i=1}^{k+1} (−1)^{i−1} (IΦ)′(p; r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1) ri
   = Σ_{i=1}^{k+1} (−1)^{i−1} ∫_0^1 t^k Φ(tp; ri, r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1) dt
     + Σ_{i=1}^{k+1} (−1)^{i−1} ∫_0^1 t^{k+1} Φ′(tp; p, r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1) ri dt.

Note that here

   q ri = Φ(tp; ri, r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1).

The first sum is simplified a lot by antisymmetry when the ri's are moved to their "correct" positions:

   Σ_{i=1}^{k+1} (−1)^{i−1} ∫_0^1 t^k Φ(tp; ri, r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1) dt = (k+1) ∫_0^1 t^k Φ(tp; r1, . . . , rk+1) dt.

Similarly using Theorem 6.1 we can calculate I(dΦ):

   (I(dΦ))(p; r1, . . . , rk+1) = ∫_0^1 t^{k+1} (dΦ)(tp; p, r1, . . . , rk+1) dt
   = ∫_0^1 t^{k+1} Φ′(tp; r1, . . . , rk+1) p dt
     + Σ_{i=1}^{k+1} (−1)^i ∫_0^1 t^{k+1} Φ′(tp; p, r1, . . . , ri−1, r̂i, ri+1, . . . , rk+1) ri dt.

(Note the powers t^{k+1} and the signs (−1)^i. Why these?) Combining these two results we see that

   (d(IΦ))(p; r1, . . . , rk+1) + (I(dΦ))(p; r1, . . . , rk+1)
   = ∫_0^1 (k+1) t^k Φ(tp; r1, . . . , rk+1) dt + ∫_0^1 t^{k+1} Φ′(tp; r1, . . . , rk+1) p dt
   = ∫_0^1 (d/dt)( t^{k+1} Φ(tp; r1, . . . , rk+1) ) dt = Φ(p; r1, . . . , rk+1).

Since we assumed that dΦ = 0 in A, the theorem follows.

² This is allowed in a fairly general situation.
Example. If the n-form field Φ is the density form field of the scalar field f, i.e. of the form

   Φ(p; r1, . . . , rn) = Φf–density(p; r1, . . . , rn) = f(p) det(r1, . . . , rn),

then dΦ = 0 and Φ is exact. By Poincaré's Lemma, in a star-shaped region with respect to some point p0, Φ then has the potential

   (IΦ)(p; r1, . . . , rn−1) = ∫_0^1 t^{n−1} f(p0 + t(p − p0)) dt det(p − p0, r1, . . . , rn−1),

which is the flux form field ΦF–flux(p; r1, . . . , rn−1) of the vector field

   F(p) = ∫_0^1 t^{n−1} f(p0 + t(p − p0)) dt (p − p0).

In a sense this is the one and only analogue of the integral function of a univariate function. For instance, in R3 Gauß' Theorem implies

   ∫_→K f(r) dr = ∫_→K ∇ • F(r) dr = ∮_{→∂K} F(r) • dS

for any (sufficiently regular) region K.

As a concrete example, let us calculate the potential in R3 for f(x, y, z) = x + y + z and p0 = 0. The result is

   F(x, y, z) = ∫_0^1 t² (tx + ty + tz) dt (x, y, z) = ((x + y + z)/4) (x, y, z).
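As a quick sanity check (our addition, not part of the original text), SymPy confirms that the divergence of this F is indeed f:

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    f = x + y + z
    F = (f / 4) * sp.Matrix([x, y, z])

    div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
    print(sp.simplify(div_F - f))    # 0, i.e. the density form field of f is d(Phi_F-flux)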
6.2 Scalar Potential of a Vector Field in R3
By Poincaré's Lemma the 1-form field

   Φ(p; r) = F(p) • r

(the work form field of the vector field F) has a potential in star-shaped regions with respect to some point p0 whenever

   dΦ = Φ∇×F–flux = 0,

i.e. whenever ∇ × F = 0, or the field is irrotational, and one such potential is

   f(p) = ∫_0^1 F(p0 + t(p − p0)) • (p − p0) dt.

The potential (a 0-form field, or a scalar field) f is then called a scalar potential of the vector field F, and F = ∇f.
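The formula is directly computable. As an illustration (a sketch of ours; the test field below is a hypothetical example), take the irrotational field F = (2xy + z³, x², 3xz²) and p0 = 0:

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t', real=True)
    p = sp.Matrix([x, y, z])
    F = sp.Matrix([2*x*y + z**3, x**2, 3*x*z**2])   # irrotational (check!)

    # f(p) = integral over 0..1 of F(tp) . p dt
    F_tp = F.subs([(x, t*x), (y, t*y), (z, t*z)], simultaneous=True)
    f = sp.integrate(F_tp.dot(p), (t, 0, 1))

    print(sp.expand(f))   # x**2*y + x*z**3
    grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
    print(sp.simplify(grad_f - F))   # zero vector: grad f = F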
In engineering applications the vector field F is in general unknown, and the task is to find
it—analytically or numerically—based on given initial data. This would mean finding three
scalar-valued functions (the components of the field). In the irrotational case the unknown
vector field can be given as the gradient of an unknown scalar field: F = ∇f . To find the vector
field it thus suffices to find only one scalar-valued function, the potential f . Irrotationality
usually follows from the statement of the problem (by Thomson’s circulation law, Maxwell’s
equations, etc.).
As noticed, if f is a scalar potential and c is a constant, then f + c is another scalar potential
for the same vector field, because
∇(f + c) = ∇f.
On the other hand, if f and g both are scalar potentials of a vector field, then ∇(f − g) = 0 and f − g is thus constant (in a connected region). A scalar potential is thus unique modulo an additive constant.
There is a connection between existence of a scalar potential and line integrals:
Theorem 6.2. The following three conditions are equivalent for a continuous vector field F
defined in an open subset (manifold) K of R3 .
(i) F has in K a scalar potential f .
(ii) For any closed curve (1-dimensional manifold in relaxed parametrization) C in K

   ∮_C F(r) • ds = 0.

(iii) If the points r0 and r1 are in K and the curve (1-dimensional oriented manifold in relaxed parametrization) →C in K connects them, i.e., in the orientation r0 is a fixed initial point and r1 a variable terminal point, then the value of the integral

   ∫_→C F(r) • ds = h(r1)

depends on the choice of the curve →C only through the terminal point r1, and thus defines a function h(r1) of r1.
The function h in item (iii) is a potential of F in K.
Proof. Let us first establish the implication (i) ⇒ (iii). So, let us assume item (i), and take a relaxed parametrization r = γ(u) (a ≤ u ≤ b) of →C. Using the assumed potential f we get

   ∫_→C F(r) • ds = ∫_→C ∇f(r) • ds = ∫_a^b f′(γ(u)) γ′(u) du
   = ∫_a^b (d/du) f(γ(u)) du = [f(γ(u))]_{u=a}^{b} = f(r1) − f(r0),

which implies item (iii) when we take

   h(r1) = f(r1) − f(r0).

(Note that this h now is a potential.)
Next we establish the implication (iii) ⇒ (i). We only need to show that the function h given by item (iii) really is a potential. For this, let us denote r1 = (x1, y1, z1) and show, as an example, that

   ∂h(r1)/∂x1 = F1(r1),

the other partial derivatives being treated similarly. For this, assume r1 is in the manifold K and take →C to be an oriented curve where the last "piece" is a short line segment

   →C2 : r = r2 ± u(1, 0, 0)   (0 < u ≤ ±(x1 − x2))

parallel to the x-axis, where r2 = (x2, y2, z2) and the sign depends on the direction (i.e., plus if x1 > x2, minus otherwise), see the figure. Clearly ±i is a tangent vector of →C2. The value of the integral does not depend on the path, so this choice is possible. Since K is an open subset, there is the space available, too. Let us denote the initial part of →C by →C1, so →C = →C1 + →C2. If the direction is from left to right (meaning that the sign is plus, the case of a minus sign being similar), then

   h(r1) = ∫_→C F(r) • ds = ∫_→C1 F(r) • ds + ∫_→C2 F(r) • ds
   = ∫_→C1 F(r) • ds + ∫_0^{x1−x2} F(r2 + u(1, 0, 0)) • i du
   = ∫_→C1 F(r) • ds + ∫_0^{x1−x2} F1(r2 + u(1, 0, 0)) du.

Thus, taking the derivative with respect to x1,

   ∂h(r1)/∂x1 = F1(r1).
The implication (ii) ⇒ (iii) is fairly obvious. Assume item (ii) holds and take in item (iii) two curves →C and →C′ of the kind given. Reverse the orientation of →C′ and connect the resulting curve −→C′ to →C, see the figure on the right. For the resulting closed curve →C − →C′,

   0 = ∮_{→C − →C′} F(r) • ds = ∫_→C F(r) • ds − ∫_→C′ F(r) • ds

by item (ii).
Implication (iii) ⇒ (ii) is also fairly obvious. Assume item (iii) and take a closed curve →C, and two points r0 and r1 of C. Thus C is divided into two parts, the oriented curve →C′ connecting r0 and r1, and the oriented curve →C″ connecting r1 and r0. Reversing the orientation of the latter curve we get two curves →C′ and −→C″, as in item (iii), see the figure below. It then follows from item (iii) that

   ∮_→C F(r) • ds = ∫_→C′ F(r) • ds − ∫_{−→C″} F(r) • ds = 0.

A vector field satisfying item (iii) of the theorem is called conservative. For a conservative vector field F we often write just

   ∫_→C F(r) • ds = ∫_{r0}^{r1} F(r) • ds,

the value of the integral depending only on the endpoints.
Existence of a scalar potential does not depend only on the vector field F but also on the
manifold (or solid body) K. By Poincaré’s Lemma the potential does exist in star-shaped manifolds, and that goes fairly far already. But what about more general manifolds? The result
depends on the topology of the manifold, i.e., what can be done to curves and surfaces in the
manifold using continuous transforms.
We say that a manifold K is simply connected if any closed curve C (a 1-dimensional manifold in a relaxed parametrization) in K can be shrunk to a point using continuous transforms without leaving K. More exactly, there is in K a surface with a relaxed parametrization

   S : r = γ(t, u)   (U : a(u) ≤ t ≤ b(u), 0 ≤ u < 1)

such that

• a(u) and b(u) are both continuous in the interval [0, 1),

• the curve Cu : r = γ(t, u) (a(u) ≤ t ≤ b(u)) is closed for all 0 < u < 1,

• C is one of the curves Cu (0 < u < 1), and

• the "curve" C0 : r = γ(t, 0) (a(0) ≤ t ≤ b(0)) is a point (a degenerate curve).

A prototype of such a surface is the segment of a sphere with radius R parametrized in spherical coordinates as

   S : r = γ(t, u) = (R sin u cos t, R sin u sin t, R cos u)   (0 ≤ t ≤ 2π, 0 ≤ u < 1).

Its boundary circle can be shrunk into the centre pole by continuous transforms, see the figure on the right (Maple).
The pertinent result is now
Theorem 6.3. If F is an irrotational continuously differentiable vector field in a simply connected manifold (solid body) of R3 , then it has there a scalar potential.
Proof. The proof is tedious and technical, see e.g. NIKOLSKY & VOLOSOV. The basic idea of course is to apply the (Generalized) Stokes Theorem to the above curve C, a surface S, and a subsurface having C as its boundary. The vector field being irrotational, its curl is a zero vector, so the line integral around C vanishes, and Theorem 6.2 becomes available.
The problems come with the orientation. For instance, even though the Möbius band cannot
be oriented, its boundary is a single closed curve (see the figure below: Maple). But of course
there are other orientable surfaces having this boundary curve.
It may also be noticed that the surface S can be globally extremely complicated, just think
of a case where C is a knot (figure: Maple):
Example. As an example of a manifold which is not simply connected take the inside of a torus, say the one parametrized as

   x = (R1 + R2 cos u2) cos u1
   y = (R1 + R2 cos u2) sin u1    (0 ≤ u1, u2 ≤ 2π).
   z = R2 sin u2

See the figure on the right (Maple). Then e.g. the vector field

   F(r) = (1/(x² + y²)) (−y, x, 0)

does not have a potential even though it is easily verified that it is irrotational (check!). This follows from Theorem 6.2 since for e.g. the centre circle

   →C : γ(u1) = (R1 cos u1, R1 sin u1, 0)   (0 ≤ u1 ≤ 2π)

we have

   ∮_→C F(r) • ds = ∫_0^{2π} (1/R1²) (−R1 sin u1, R1 cos u1, 0) • (−R1 sin u1, R1 cos u1, 0) du1
   = ∫_0^{2π} du1 = 2π ≠ 0.
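Both claims (irrotationality and the nonzero circulation) can be verified mechanically; here is a SymPy sketch of ours:

    import sympy as sp

    x, y, z, u1 = sp.symbols('x y z u1', real=True)
    R1 = sp.symbols('R1', positive=True)
    F = sp.Matrix([-y, x, 0]) / (x**2 + y**2)

    curl_F = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                        sp.diff(F[0], z) - sp.diff(F[2], x),
                        sp.diff(F[1], x) - sp.diff(F[0], y)])
    print(sp.simplify(curl_F))                       # zero vector: irrotational

    gamma = sp.Matrix([R1*sp.cos(u1), R1*sp.sin(u1), 0])     # centre circle
    F_on_C = F.subs({x: gamma[0], y: gamma[1]})
    integrand = sp.simplify(F_on_C.dot(gamma.diff(u1)))      # 1
    print(sp.integrate(integrand, (u1, 0, 2*sp.pi)))         # 2*pi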
On the other hand, after removing the part of the torus in the right xy-plane the potential appears; it is atan (see the example in Section 2.5).
If the manifold is not simply connected, an irrotational vector field may still have a local
scalar potential, but it need not be globally unique modulo an additive constant. Since numerical methods can only find unique solutions, the manifold must be made artificially simply
connected by ”cutting it open” using a suitable surface, as was done in the above example.
Note. The condition in Theorem 6.3 is sufficient but not necessary. In some cases an irrotational
vector field has a unique scalar potential (modulo the additive constant) even when the manifold
is not simply connected. This is the case e.g. for some problems dealing with the electric field
E when the magnetic field is stationary and there are no electromotive forces. For reasons of
energy conservation, the integral around a closed curve C is then
   ∮_C E(r) • ds = 0.

Thus the integral

   ∫_{r0}^{r1} E(r) • ds

is path-independent and defines a (unique) electric potential.
A scalar potential may also be obtained in an approximate sense, cf. Section 6.6.
6.3 Vector Potential of a Vector Field in R3
By Poincaré's Lemma the 2-form field

   Φ(p; r1, r2) = F(p) • r1 × r2

(the flux form field of F) has a potential in star-shaped regions with respect to the point p0, if

   dΦ = Φ∇•F–density = 0,

i.e. ∇ • F = 0, or the vector field is solenoidal, and one such potential is given by

   U(p) • r = ∫_0^1 t F(p0 + t(p − p0)) × (p − p0) • r dt

(the work form field of the vector field U). A vector field U such that ΦU–work is a potential of Φ is called a vector potential of the vector field F, and then F = ∇ × U.
Example. Let us find a vector potential for the vector field

   F(x, y, z) = (x, y, −2z)

in R3. It is easily verified that this is solenoidal. Choosing p0 = 0 and integrating we get (omitting the vector variable r)

   U(x, y, z) = ∫_0^1 t (tx, ty, −2tz) × (x, y, z) dt = ∫_0^1 t² (3yz, −3xz, 0) dt = (yz, −xz, 0).
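The same computation in SymPy (a sketch of ours) both reproduces U and confirms ∇ × U = F:

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t', real=True)
    p = sp.Matrix([x, y, z])
    F = sp.Matrix([x, y, -2*z])
    print(sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z))))  # 0: solenoidal

    F_tp = F.subs([(x, t*x), (y, t*y), (z, t*z)], simultaneous=True)
    U = (t * F_tp.cross(p)).applyfunc(lambda e: sp.integrate(e, (t, 0, 1)))
    print(U.T)                                       # (y*z, -x*z, 0)

    curl_U = sp.Matrix([sp.diff(U[2], y) - sp.diff(U[1], z),
                        sp.diff(U[0], z) - sp.diff(U[2], x),
                        sp.diff(U[1], x) - sp.diff(U[0], y)])
    print(sp.simplify(curl_U - F))                   # zero vector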
Example. As another example let us take a more complicated but sometimes very useful case (cf. Appendix 3). Let us find a vector potential for the Newton vector field

   F(p) = −C (p − p1)/‖p − p1‖³

in the manifold which we get by removing from R3 the ray (the so-called Dirac string)

   p1 + u(p1 − p0)   (u ≥ 0).

The resulting region is star-shaped with respect to the point p0, assuming—as is done—that p0 ≠ p1. Note that the point p1 is then removed but the point p0 is not. For brevity we denote

   a = ‖p0 − p1‖² = (p0 − p1) • (p0 − p1),
   b = 2(p0 − p1) • (p − p0) = 2(p0 − p1) • (p − p1) − 2a,
   c = ‖p − p0‖² = ‖p − p1‖² − 2(p0 − p1) • (p − p1) + a

and

   d = 4ac − b² = 4‖p0 − p1‖²‖p − p0‖² − 4((p0 − p1) • (p − p0))².

Some simple vector manipulation then shows that

   4a + 2b = 4(p0 − p1) • (p − p1),   a + b + c = ‖p − p1‖²

and

   d = 4‖p0 − p1‖²‖p − p1‖² − 4((p0 − p1) • (p − p1))².

Calculate then the vector potential (omitting the vector variable r and the constant −C):

   U(p) = ∫_0^1 t (p0 + t(p − p0) − p1)/‖p0 + t(p − p0) − p1‖³ × (p − p0) dt
   = (p0 − p1) × (p − p1) ∫_0^1 t/(a + bt + ct²)^{3/2} dt
   = (p0 − p1) × (p − p1) [ −2(2a + bt)/(d √(a + bt + ct²)) ]_{t=0}^{1}
   = (p0 − p1) × (p − p1) (4√a √(a + b + c) − (4a + 2b))/(d √(a + b + c))
   = (p0 − p1) × (p − p1) (‖p0 − p1‖‖p − p1‖ − (p0 − p1) • (p − p1)) / (‖p − p1‖ (‖p0 − p1‖²‖p − p1‖² − ((p0 − p1) • (p − p1))²))
   = (p0 − p1) × (p − p1) / (‖p0 − p1‖‖p − p1‖² + ((p0 − p1) • (p − p1)) ‖p − p1‖).

This result is in many ways the best possible: it is scarcely possible to remove anything less than the ray from R3 to get the vector potential. (See the example in Section 3 of Appendix 3.)
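A numerical spot check of this somewhat laborious derivation is reassuring. Below is a small NumPy sketch of ours: since the constant −C was omitted above, the derived U should satisfy ∇ × U = (p − p1)/‖p − p1‖³, so that −C U is then a vector potential of the Newton field.

    import numpy as np

    p0 = np.array([0.0, 0.0, 0.0])
    p1 = np.array([1.0, 0.0, 0.0])

    def U(p):
        w, v = p0 - p1, p - p1
        n1, n2 = np.linalg.norm(w), np.linalg.norm(v)
        return np.cross(w, v) / (n1 * n2**2 + np.dot(w, v) * n2)

    def curl(f, p, h=1e-6):
        # finite difference Jacobian J[i, j] = df_i/dx_j
        J = np.array([(f(p + h*e) - f(p - h*e)) / (2*h) for e in np.eye(3)]).T
        return np.array([J[2,1] - J[1,2], J[0,2] - J[2,0], J[1,0] - J[0,1]])

    p = np.array([0.3, -0.7, 0.5])                        # not on the Dirac string
    expected = (p - p1) / np.linalg.norm(p - p1)**3
    print(np.allclose(curl(U, p), expected, atol=1e-5))   # True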
An unknown solenoidal vector field can then be given as the curl of another (also so-far unknown) vector field, the vector potential: F = ∇ × U. Solenoidality usually follows from the statement of the problem. For form fields, often a work form field is somehow "simpler" than a flux form field.
If U is a vector potential of the vector field F and ∇f is a gradient field, then U + ∇f is also a vector potential of F, because

   ∇ × (U + ∇f) = ∇ × U + ∇ × ∇f = ∇ × U.

On the other hand, if both U and V are vector potentials of the same vector field, then ∇ × (U − V) = 0 and U − V is irrotational. A vector potential is thus unique modulo an additive irrotational vector field. In case the manifold allows existence of a scalar potential—e.g. being simply connected—a vector potential is unique modulo an additive gradient field. But if scalar potentials do not exist, then this might not be true. For instance, in the torus of the example in the previous section the zero vector field 0 surely has a vector potential, one such being of course 0, but

   F(r) = (1/(x² + y²)) (−y, x, 0)

is also a vector potential of 0, and it is not a gradient field.
There is a connection between existence of vector potentials and surface integrals given by
the Generalized Stokes Theorem, once we remember that a surface integral is the integral of
the flux form field of the curl of the vector potential, and that the flux form field is the exterior
derivative of the work form field of the vector potential:
Theorem 6.4. If a continuously differentiable vector field F has a vector potential U in the open subset K of R3, and →S is a closed oriented surface in K (a 2-dimensional oriented manifold in extended parametrization having an empty smooth boundary), then

   ∮_→S F(r) • dS = ∮_→S ∇ × U(r) • dS = 0.
As was the case for the scalar potential, existence of a vector potential does not depend only
on the field but also on the manifold K. In star-shaped manifolds solenoidal vector fields have a
vector potential by Poincaré’s Lemma. In other cases the situation is more complicated. In any
case, the necessary condition of the theorem above must hold true.
Example. A typical example of a case where a solenoidal vector field does not have a vector potential is the Newton vector field

   F(r) = (r − r0)/‖r − r0‖³

in a "punctured" manifold K, where the point r0 is removed but which otherwise completely "surrounds" the point r0. As was noted before, this vector field is solenoidal. Integrating over a small r0-centered sphere

   S : r = γ(θ, φ) = r0 + δ (sin θ cos φ, sin θ sin φ, cos θ)   (0 ≤ θ ≤ π, 0 ≤ φ < 2π)

of radius δ (oriented by the exterior normal) we get

   ∮_→S F(r) • dS = ∫_0^{2π} ∫_0^π (1/δ²) (sin θ cos φ, sin θ sin φ, cos θ) • (sin θ cos φ, sin θ sin φ, cos θ) δ² sin θ dθ dφ
   = ∫_0^{2π} ∫_0^π sin θ dθ dφ = 4π ≠ 0.
The conclusion of Theorem 6.4 is thus not valid, so no vector potential exists.
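The flux computation is also a short exercise for a computer algebra system; a SymPy sketch of ours, with r0 = 0 and δ = 1:

    import sympy as sp

    theta, phi = sp.symbols('theta phi', real=True)
    n = sp.Matrix([sp.sin(theta)*sp.cos(phi),
                   sp.sin(theta)*sp.sin(phi),
                   sp.cos(theta)])     # exterior unit normal of the unit sphere
    F = n                              # on the unit sphere (r - r0)/||r - r0||^3 = n

    flux = sp.integrate(sp.integrate(sp.simplify(F.dot(n)) * sp.sin(theta),
                                     (theta, 0, sp.pi)), (phi, 0, 2*sp.pi))
    print(flux)                        # 4*pi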
Gauß’ Theorem, too, has its consequences for vector potentials. If the closed oriented sur→
−
face S in Theorem 6.4 encloses a solid L which is included in the manifold K, then, in line
with the theorem, according to Gauß’ Theorem
I
Z
F(r) • dS = ∇ • F(r) dr = 0,
−
→
S
L
because in order to have a vector potential the vector field F must be solenoidal.
But what if not all of L is contained in the manifold K, but K has ”cavities” which are
included in L? Taking a sufficiently small surface S we may assume that L contains only one
such cavity. Let us denote by N the part of the manifold K included in L, and the boundary
→
−
→
−
of the cavity by T . Then T is a closed surface, oriented by the exterior normal directed to the
→ −
−
→
inside of the cavity. When we now apply Gauß’ Theorem to N and its boundary S + T (where
→
−
S is oriented using an exterior normal) and also Theorem 6.4 we get
Z
I
I
I
I
0 = ∇ • F(r) dr =
F(r) • dS = F(r) • dS + F(r) • dS = F(r) • dS.
N
− −
→
→
S +T
−
→
S
→
−
T
−
→
T
Thus even through the boundary of each cavity inside K the flux of the vector field must be = 0,
in order for the field to have a vector potential in K.
Ideally this necessary condition for existence of a vector potential should also be sufficient
for solenoidal vector fields. The situation is however complicated. In the literature there are
some sufficient conditions of this classical type—in addition to the case of star-shaped manifolds—e.g. the following:
• The vector field F is continuously differentiable and solenoidal. Each manifold L enclosed by a closed surface inside K ∪ ∂K (manifold + boundary) is included in K (no cavities) (APOSTOL).

• A magnetic field B in vacuum where there are moving charges (FEYNMAN, R.P. et al.: The Feynman Lectures on Physics. Volume II. Addison–Wesley (1998)).

• The vector field F is differentiable and its flux through any closed piecewise smooth surface inside the manifold K vanishes. K can be deformed by continuous transforms to a star-shaped manifold possibly with finitely many ball-shaped cavities (TON, T.-C.: On the Potential of a Solenoidal Vector Field. Journal of Mathematical Analysis and Applications 151 (1990), 557–580).
As a more physical treatment, the classic reference MISNER, C.W. & WHEELER, J.A.: Classical Physics as Geometry: Gravitation, Electromagnetism, Unquantified Charge, and Mass as Properties of Curved Empty Space. Annals of Physics 2 (1957), 525–603, gives a relativity-theoretic abstract characterization for existence of a vector potential in some fairly general cases.

Advances have been made using manifold theory, too. A classic reference is WEYL, H.: The Method of Orthogonal Projection in Potential Theory. Duke Mathematical Journal 7 (1940), 411–444. More modern ideas are based on the fundamental work of Georges de Rham. See DE RHAM, G.H.: Differentiable Manifolds. Springer–Verlag (1984). It could be said that in this way we are already very close to the above mentioned ideal characterization for existence of a vector potential.
6.4 Helmholtz’s Decomposition
As was noted, a vector potential of F(r), if it exists, is only unique modulo an additive irrotational vector field. This gives a possibility for the vector potential to satisfy extra conditions.
Consider first the case where continuously differentiable irrotational vector fields have a
scalar potential in the manifold K. Now, what if we want for F a solenoidal vector potential U1, when we already know some vector potential U? Denoting ∇ • U = g, the required vector
potential is of the form U1 = U + ∇φ and
0 = ∇ • U1 (r) = ∇ • U(r) + ∆φ(r) = g(r) + ∆φ(r).
(The case g = 0 is of course clear anyway, and can be omitted.) Thus for the needed scalar field
φ we get the equation
∆φ(r) = −g(r).
This is a partial differential equation, a so-called Poisson equation. The solution of the equation is not unique; in fact, any function φ + h, where ∆h = 0, is also a solution. The partial differential equation

   ∆h(r) = 0

in turn is called Laplace's equation, and its solutions are called harmonic functions³, see also Chapter 7 and Appendix 1. So any φ (if it exists) is unique modulo an additive harmonic function. To fix a φ, some extra conditions—so-called boundary conditions—for the Poisson equation need to be set, an important topic not dealt with here any further, however.
All in all, since Poisson’s equations have solutions in very general situations, if there is a
vector potential, there also is a solenoidal vector potential.⁴
A similar idea can be used to get an irrotational vector field U2 with a given divergence f .
Then U2 = ∇ψ and
∇ • U2 (r) = ∆ψ(r) = f (r).
So again we have a Poisson equation whose solution ψ is unique up to an additive harmonic
function. And finally, if we want a vector field U with a given curl ∇ × U = F and a given
divergence ∇ • U = f , then it is simply U = U1 + U2 . To fix U uniquely, extra (boundary)
conditions are needed.
From these we get the celebrated Helmholtz’s decomposition. In a fairly general situation a
vector field U can be expressed as a sum of a solenoidal vector field and an irrotational vector
field. One way to get the decomposition would be the following:
³ Harmonic functions have a central rôle in the investigation and modelling of scalar fields.
⁴ And as a consequence, if F has a vector potential, then it also has a double vector potential V such that F = ∇ × (∇ × V), and a triple vector potential W such that F = ∇ × (∇ × (∇ × W)), and so on.
1. Take F = ∇ × U and find the solenoidal vector potential U1 above.
2. Write the vector field U as
U = U1 + (U − U1 ).
Then U1 is solenoidal and (U − U1 ) is irrotational.
Another way would be:
1. Take f = ∇ • U and find the irrotational vector field U2 above.
2. Write the vector field U as
   U = (U − U2) + U2.

Then U − U2 is solenoidal and U2 is irrotational.

The field U can thus be written as the sum of a gradient and a curl.
It should be emphasized, of course, that all this works with the assumption that in the manifold K scalar and vector potentials exist, and that the Poisson equations have solutions.
Note. The double-curl expansion rule in Chapter 1 (in a bit different form)
∆F = ∇(∇ • F) + ∇ × ∇ × (−F)
gives one such Helmholtz’s decomposition.
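The Note can be checked symbolically on any concrete field. A SymPy sketch of ours, with an arbitrarily chosen test field:

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    V = (x, y, z)
    F = sp.Matrix([x**2*y, y*z**2, sp.sin(x)*z])     # an arbitrary test field

    def grad(f): return sp.Matrix([sp.diff(f, v) for v in V])
    def div(G):  return sum(sp.diff(G[i], V[i]) for i in range(3))
    def curl(G):
        return sp.Matrix([sp.diff(G[2], y) - sp.diff(G[1], z),
                          sp.diff(G[0], z) - sp.diff(G[2], x),
                          sp.diff(G[1], x) - sp.diff(G[0], y)])

    lap_F = sp.Matrix([div(grad(F[i])) for i in range(3)])   # componentwise Laplacian
    # Laplacian F = grad(div F) + curl(curl(-F)): gradient part plus curl part
    print(sp.simplify(lap_F - (grad(div(F)) + curl(curl(-F)))))   # zero vector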
For form fields there is a corresponding decomposition, the so-called Hodge decomposition,
see e.g. ABRAHAM & MARSDEN & RATIU.
6.5 Four-Potential
A four-potential is the potential of a 4-dimensional 2-form field Φ. By Poincaré’s Lemma it
exists in a star-shaped region if dΦ = 0.
It was noted in Section 5.3 that dΦFaraday = 0. Thus in a star-shaped region the Faraday
form field has a potential A, the so-called electromagnetic four-potential. This potential is a
1-form field, (traditionally) written as
A = A1 dx + A2 dy + A3 dz − φ dt.
Let us compute the exterior derivative and find the coefficients A1, A2, A3 and φ. By rule (IV) in Section 5.2,

   dA = (∂A1/∂x dx + ∂A1/∂y dy + ∂A1/∂z dz + ∂A1/∂t dt) ∧ dx
      + (∂A2/∂x dx + ∂A2/∂y dy + ∂A2/∂z dz + ∂A2/∂t dt) ∧ dy
      + (∂A3/∂x dx + ∂A3/∂y dy + ∂A3/∂z dz + ∂A3/∂t dt) ∧ dz
      − (∂φ/∂x dx + ∂φ/∂y dy + ∂φ/∂z dz + ∂φ/∂t dt) ∧ dt,

and this must be

   E1 dx ∧ dt + E2 dy ∧ dt + E3 dz ∧ dt + B1 dy ∧ dz + B2 dz ∧ dx + B3 dx ∧ dy.
Expanding and comparing we see that

   E = −∂A/∂t − ∇φ   and   B = ∇ × A,   where A = (A1, A2, A3).

Thus we get A by first finding a vector potential A for the magnetic flux density B (by Maxwell's equation (M2) B is solenoidal), and then a scalar potential φ for the vector field

   −∂A/∂t − E.
This is possible since by Maxwell's equations (and some weak extra conditions)

   ∇ × (−∂A/∂t − E) = −∂(∇ × A)/∂t − ∇ × E = −∂B/∂t + ∂B/∂t = 0.

The four-potential A is not unique: we can add a gradient field ∇ψ to A, and replace φ by φ − ∂ψ/∂t, without the exterior derivative changing. Often A and φ are chosen to satisfy the so-called Lorenz gauge condition

   ∇ • A + (1/c²) ∂φ/∂t = 0.

In a sense this separates A and φ. If originally

   ∇ • A + (1/c²) ∂φ/∂t = g ≠ 0,

then Lorenz's gauge condition will be satisfied if ψ is a solution of the partial differential equation

   ∆ψ(r, t) − (1/c²) ∂²ψ(r, t)/∂t² = −g(r, t)

(check!). This equation is a so-called wave equation and it has a solution in some very general situations. Lorenz's gauge condition can then be assumed quite generally.
6.6 Dipole Approximations and Dipole Potentials
A dipole field is a vector field generated by two potentials of opposite signs. A typical situation is a pair of electric charges of equal magnitude but opposite sign, separated by a small distance, and their electrostatic field.

Let us first consider this electric dipole. The electric charges +q and −q are close to each other in the points r′ + h and r′, see the figure on the right. The electrostatic field is observed in the point r. For the dipole approximation to work the point r must be far from the charges, that is, ‖r − r′‖ ≫ ‖h‖.

Electrophysics tells us that the Coulomb potential of these charges in the point r is

   φ(r) = (q/(4πε)) (1/‖r − r′ − h‖ − 1/‖r − r′‖).
On the other hand, it was noticed in Section 5.3 that the exterior derivative of the function (or 0-form field)

   f(r′) = (q/(4πε)) 1/‖r − r′‖

is the work form field of the gradient of f, i.e.

   (df)(r′; h) = ∇′f(r′) • h ≈ f(r′ + h) − f(r′) = φ(r),

where the derivation is with respect to r′ (the primed nabla). The approximation follows immediately from the definition of the exterior derivative. Thus in the point of observation r

   φ(r) ≈ (q/(4πε)) ∇′(1/‖r − r′‖) • h = (q/(4πε)) ((r − r′)/‖r − r′‖³) • h.

Here

   qh = p′

is called the electric dipole moment. With this notation we get the usual dipole approximation of the potential as

   φ(r) ≈ (1/(4πε)) (p′ • (r − r′))/‖r − r′‖³,

where the connection between r′ and p′ is indicated by the primes.
But what about the electric field in r? In electrophysics negative potentials are used, so E(r) = −∇φ(r), and

   E(r) ≈ −(1/(4πε)) ∇((p′ • (r − r′))/‖r − r′‖³).

The gradient is taken with respect to the variable r. The vector p′ does not depend on r. Applying the derivation rule (i) in Section 1.5 we get

   ∇((p′ • (r − r′))/‖r − r′‖³) = (p′ • (r − r′)) ∇(1/‖r − r′‖³) + (1/‖r − r′‖³) ∇(p′ • (r − r′))
   = −3 ((p′ • (r − r′))/‖r − r′‖⁵) (r − r′) + p′/‖r − r′‖³.

Thus the dipole approximation of the electric field is

   E(r) ≈ (1/(4πε‖r − r′‖⁵)) (−‖r − r′‖² p′ + 3 (p′ • (r − r′)) (r − r′)).
Except for the dipoles themselves, such an approximation can be useful for other fields closely
resembling dipoles. The dipole moment is then obtained by physical considerations.
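The gradient computation above is easily confirmed with SymPy (our sketch, with r′ = 0 and a symbolic constant vector p′):

    import sympy as sp

    x, y, z, p1, p2, p3 = sp.symbols('x y z p1 p2 p3', real=True)
    r = sp.Matrix([x, y, z])
    p = sp.Matrix([p1, p2, p3])          # the dipole moment p' (constant)
    n = sp.sqrt(r.dot(r))

    grad = sp.Matrix([sp.diff(p.dot(r) / n**3, v) for v in (x, y, z)])
    claimed = -3 * p.dot(r) * r / n**5 + p / n**3
    print(sp.simplify(grad - claimed))   # zero vector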
A magnetic dipole is very similar to the electric dipole above. We approach it using a current
loop remembering the definition of curl via exterior derivation in Section 5.3.
So, let us consider a small parallelogram-shaped current loop →∂P with current I. A normal vector for the plane of the parallelogram is given by r1 × r2 = An, where A is the area of the parallelogram and n is the corresponding unit normal vector, correctly oriented with respect to the direction of the current. The point of observation r is far away compared with the size of the loop. We use the vertex r0 to characterize the position of the loop, see the figure below. Points of the loop are denoted by the variable r′. The point of observation r being "far away" then means that ‖r − r′‖ ≫ ‖r′ − r0‖.
According to the Biot–Savart law in electromagnetics the magnetic field in the point r is given by

   H(r) = (I/(4π)) ∮_→∂P ((r′ − r)/‖r − r′‖³) × ds′.

Thus, for a constant vector a,

   a • H(r) = (I/(4π)) ∮_→∂P ((a × (r′ − r))/‖r − r′‖³) • ds′.

This is the integral of the vector field

   (I/(4π)) (a × (r′ − r))/‖r − r′‖³

around the loop and—as noted in Section 5.3—it is approximately the flux form field of the curl in the point r0, i.e.

   a • H(r) ≈ (I/(4π)) (∇′ × ((a × (r′ − r))/‖r − r′‖³))|_{r′=r0} • r1 × r2
   = (AI/(4π)) n • (∇′ × ((a × (r′ − r))/‖r − r′‖³))|_{r′=r0},
where ∇′ is with respect to r′. Using the derivation rule (vi) in Section 1.5⁵ and remembering that the Newton vector field is solenoidal, the curl will be seen to be

   −a/‖r − r′‖³ + 3 ((a • (r − r′))/‖r − r′‖⁵) (r − r′)

(check!). Thus

   a • H(r) ≈ (IA/(4π)) (−(a • n)/‖r − r0‖³ + 3 (((r − r0) • n)/‖r − r0‖⁵) (a • (r − r0))).

The constant vector a was arbitrary, so

   H(r) ≈ (IA/(4π)) (−n/‖r − r0‖³ + 3 (((r − r0) • n)/‖r − r0‖⁵) (r − r0)).

Adopting the corresponding magnetic dipole moment

   m′ = IAn

of the loop, we get the dipole approximation of the magnetic field in a far away point r as

   H(r) ≈ (1/(4π‖r − r0‖⁵)) (−‖r − r0‖² m′ + 3 ((r − r0) • m′) (r − r0)).
This expression is of exactly the same form as it was for the electric dipole field! So, reverse argumentation as above shows that the magnetic field has here an approximative scalar potential φ, i.e.

   H(r) ≈ −∇φ(r),

where

   φ(r) = (1/(4π)) (m′ • (r − r0))/‖r − r0‖³.

⁵ Or using Theorem 6.1.
Using the derivation rule (iii) in Section 1.5 we may check that the dipole approximation field is solenoidal. Thus it also has a vector potential (in appropriate regions). We could find it more or less as we found a vector potential for the Newton vector field in the example in Section 6.3. Looking at the curl expression above it should become clear, however, that the vector potential is

   A(r) = (1/(4π)) (m′ × (r − r0))/‖r − r0‖³.
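Both checks for the magnetic dipole (solenoidality of the dipole field and ∇ × A = H) can be done with SymPy; a sketch of ours with r0 = 0 and a symbolic m′:

    import sympy as sp

    x, y, z, m1, m2, m3 = sp.symbols('x y z m1 m2 m3', real=True)
    r = sp.Matrix([x, y, z])
    m = sp.Matrix([m1, m2, m3])
    n = sp.sqrt(r.dot(r))

    H = (-n**2 * m + 3 * m.dot(r) * r) / (4 * sp.pi * n**5)
    A = m.cross(r) / (4 * sp.pi * n**3)

    div_H = sum(sp.diff(H[i], v) for i, v in enumerate((x, y, z)))
    curl_A = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
                        sp.diff(A[0], z) - sp.diff(A[2], x),
                        sp.diff(A[1], x) - sp.diff(A[0], y)])

    print(sp.simplify(div_H))        # 0: the dipole field is solenoidal
    print(sp.simplify(curl_A - H))   # zero vector: curl A = H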
Note. Dipole approximation is just a part of the more general multipole expansion of a vector
field. Its first term is the so-called monopole approximation, a Newton vector field. The second
term is the dipole approximation of a combination of two Newton vector fields, as above. The
third term is the quadrupole approximation of the combination of four Newton vector fields,
with pairwise opposite signs. The fourth term is the octupole approximation, and so on.
”The miracle of the appropriateness of the
language of mathematics for the formulation
of the laws of physics is a wonderful gift which
we neither understand nor deserve.”
(EUGENE (JENŐ) WIGNER: The Unreasonable
Effectiveness of Mathematics in the Natural Sciences.
Communications on Pure and Applied
Mathematics 13 No. 1 (1960))
Chapter 7
PARTIAL DIFFERENTIAL EQUATIONS
7.1 Standard Forms
Numerical solvers for partial differential equations (PDEs) are especially suitable for equations of the form

   ∇ • (k(r)∇u(r)) = F(r)   (elliptic PDE)

in the stationary case, or of the form

   ∇ • (k(r)∇u(r, t)) = f(r) ∂u/∂t + g(r) ∂²u/∂t² + F(r, t),

where k(r) > 0, in the nonstationary case, with the proper boundary and initial conditions (which are not dealt with here). The coefficient functions k, f and g may also depend on u, and the function g possibly also on the gradient ∇u. The function F is a so-called forcing function representing exterior forces etc. In the nonstationary case the different basic types are

• f > 0 and g = 0: parabolic PDE, heat equation, diffusion equation,

• f < 0 and g = 0: reverse heat equation, Black–Scholes equation,

• f = 0 and g > 0: hyperbolic PDE, wave equation,

• f > 0 and g > 0: telegraph equation, lossy wave equation, hyperbolic heat equation.
The order of a PDE is the order of the highest partial derivative appearing in the equation.
The above PDEs are all of second order. There are, of course, many important kinds of PDEs
of first order, e.g. advection equations, and also of order higher than two, e.g. the biharmonic
equation (elasticity), Korteweg–de Vries equation (shallow water waves), and Dym’s equation
(solitons). Theories and numerical solution methods for these other orders are very different
from the ones for the above PDEs.
7.2 Examples
Let us take examples of field modelling problems leading to second order PDEs of the above
types. For electric and magnetic fields (M1)–(M4) refer to the Maxwell equations in Section
5.3.
Example. (Electrostatic field) Since in the stationary case the electric field E is irrotational (M3), we have ∇ × E = 0 and the field has a scalar potential: E = −∇V. (In electrophysics the negative potential Φ = −V is used.)

On the other hand (M1),

   ∇ • D = ∇ • (εE) = ρ   (charge density).

Here ε is the permittivity. If the charge density—and of course the permittivity—is known,

   ∇ • (ε(r)∇V(r)) = −ρ(r).

This is a PDE of a standard form.
Example. (Stationary electric current) By Ohm's law, the current density is J = σE, where σ is the conductivity. According to the Kirchhoff law electricity is not accumulated anywhere, i.e., the net current through a closed surface is zero. For the boundary (a closed surface) →S of the solid K,

   ∮_→S J(r) • dS = 0 = ∫_K ∇ • J(r) dr

(by Gauß' Theorem). In these problems the field usually is assumed continuously differentiable, thus since K is arbitrary we have further ∇ • J = 0.

The field E being irrotational (M3) it has a scalar potential, i.e. E = −∇V, and so

   ∇ • J(r) = −∇ • (σ(r)∇V(r)) = 0,

and again we have a standard form PDE.
Example. (Magnetic field with a scalar potential) In a region with no conductive material the current density is zero, and (M4) ∇ × H = J = 0. There is thus a scalar potential, i.e. H = ∇Φ.

On the other hand (M2),

   ∇ • B = ∇ • (µH) = 0,

where µ is the permeability. We have again a PDE of the same type since now

   ∇ • (µ(r)∇Φ(r)) = 0.
Example. (Stationary incompressible irrotational fluid flow) Here the flow is known to be irrotational by nature, that is ∇ × v = 0. The velocity thus has a scalar potential: v = ∇φ. In an incompressible flow fluid is not accumulated in any body K bounded by the closed surface →S. What comes in also goes out, and so fluid leaves the body at zero net rate:

   ∮_→S v(r) • dS = ∫_K ∇ • v(r) dr = 0.

Since the body K is arbitrary, then as above ∇ • v = 0, and we have the Laplace equation

   ∇ • ∇φ(r) = ∆φ(r) = 0.
Example. (Stationary heat conduction) The heat flow is v = −k∇T, where T is the temperature and k is the thermal conductivity. Here the model is an empirical one, valid for isotropic material (one where local heat flow is the same in all directions).

If the temperature is stationary, heat is not accumulated anywhere, and as above with the fluid flow or with the Kirchhoff law, ∇ • v = 0. Thus we have

   ∇ • (k(r)∇T(r)) = 0.
Example. (Nonstationary heat conduction) As above, the heat flow is v = −k∇T. The net heat flow out of a body K with boundary surface →S is

   ∮_→S v(r, t) • dS,

i.e. heat is accumulated there at the rate (power)

   dE/dt = −∮_→S v(r, t) • dS = ∫_K ∇ • (k(r)∇T(r, t)) dr

(again by Gauß' Theorem). Energy conservation forces this to be the same as the power needed to raise the temperature of the body.

The power per volume unit needed to raise the temperature at the rate ∂T/∂t is

   C(r)ρ(r) ∂T(r, t)/∂t,

where C(r) is the thermal capacity and ρ(r) the mass density in the point r. For the whole body the power is thus

   dE/dt = ∫_K C(r)ρ(r) ∂T(r, t)/∂t dr.

Comparing the powers (and again remembering that K was arbitrary) we deduce that the integrands must be the same, and we get the heat equation

   ∇ • (k(r)∇T(r, t)) = C(r)ρ(r) ∂T(r, t)/∂t.
It is interesting how mathematical modelling in different areas leads to precisely the same
types of PDEs. Some PDEs contain second time derivatives originating from Newton’s second
law. This would be the case e.g. for nonstationary fluid flow. Our example, however, is an
acoustic wave equation.
Example. (Small amplitude acoustic plane wave) A plane wave is a planar wave front. We set the coordinates in such a way that the wave front "proceeds" in the direction of the positive x-axis, and is thus parallel to the yz-plane. We then need only the x-coordinate and time t; within the front there is no change. This would correspond to a sound source far away in the direction of the negative x-axis.

Let us denote the position of an air molecule at time t, initially in the point x, by x + u(x, t). Thus u(x, t) is the deviation from the initial position, and u(x, 0) = 0. Initially the left edge of a layer of thickness dx is in the point x, and at time t it is in the point x + u(x, t). The thickness of the layer is then

   dx + (∂u(x, t)/∂x) dx,

see the figure below.
The mass of the layer per area unit remains the same, i.e.

   ρ0 dx = ρ(x, t) (1 + ∂u(x, t)/∂x) dx,

where ρ0 is air density at time t = 0 and ρ(x, t) at time t. We see thus that

   ρ(x, t)/ρ0 = (1 + ∂u(x, t)/∂x)^{−1}.

In an adiabatic process, where there is no heat loss nor heat gain, it is known that

   p(x, t)/p0 = (ρ(x, t)/ρ0)^γ,

where p0 is the air pressure at time t = 0 and p(x, t) at time t, and γ is the so-called adiabatic constant; for air γ = 1.40. Differentiating we get

   ∂p(x, t)/∂x = p0 ∂/∂x (1 + ∂u(x, t)/∂x)^{−γ} = −γ p0 (1 + ∂u(x, t)/∂x)^{−γ−1} ∂²u(x, t)/∂x²
   = −(γ p0/ρ0) ρ(x, t) (1 + ∂u(x, t)/∂x)^{−γ} ∂²u(x, t)/∂x².
For small amplitudes

   γ p0/ρ0 = c²

is constant, the speed of sound squared. This follows from the ideal gas law, since for small amplitudes temperature is constant and density is inversely proportional to volume. For air, at 1 atm and at sea level, this gives c = 340.3 m/s.
A layer is moved by the force dictated by the pressure difference (per area unit) between the edges, and this must be the same as the force given by Newton's second law:

   (∂p(x, t)/∂x) dx = −c² ρ(x, t) (1 + ∂u(x, t)/∂x)^{−γ} (∂²u(x, t)/∂x²) dx
   = −ρ(x, t) (1 + ∂u(x, t)/∂x) (∂²u(x, t)/∂t²) dx.

Note the sign: molecules are moving against the pressure difference. Thus finally we get the PDE

   ∂²u(x, t)/∂x² = (1/c²) (1 + ∂u(x, t)/∂x)^{γ+1} ∂²u(x, t)/∂t²,

and it is of the type indicated. The coefficient function g depends here also on ∂u/∂x.

For small amplitudes and acoustic frequencies ∂u(x, t)/∂x is also small, so at least approximatively

   ∂²u(x, t)/∂x² = (1/c²) ∂²u(x, t)/∂t².

This last PDE is a so-called wave equation. Its general solution is of the form

   u(x, t) = f(x − ct) + g(x + ct),

where f and g are arbitrary twice continuously differentiable functions.
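That f(x − ct) + g(x + ct) solves the wave equation is a one-line check in SymPy (our sketch):

    import sympy as sp

    x, t, c = sp.symbols('x t c', real=True)
    f, g = sp.Function('f'), sp.Function('g')
    u = f(x - c*t) + g(x + c*t)

    # u_xx - (1/c^2) u_tt should vanish identically
    print(sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2))   # 0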
Including the so-far omitted y and z coordinates, we have of course

   ∂u(r, t)/∂y = ∂u(r, t)/∂z = 0.

Thus

   ∂²u(r, t)/∂y² = ∂²u(r, t)/∂z² = 0   and   ∂²u(r, t)/∂x² = ∆u(r, t),

and the wave equation can be written as

   ∆u(r, t) = (1/c²) ∂²u(r, t)/∂t².
The Laplacian is coordinate-free, so this equation holds true for any direction of the planar
wave front.
A general wave equation in R3 has solutions other than the plane waves, e.g. the so-called (r0-centered) spherical waves

   u(r, t) = (1/‖r − r0‖) f(‖r − r0‖ − ct) + (1/‖r − r0‖) g(‖r − r0‖ + ct),

where f and g are arbitrary twice continuously differentiable functions (check, and see also Section A2.4). These may be interpreted as expanding/contracting spherical wave fronts with a point source in r0. Strangely such "spreading sharp signals" are only possible in odd dimensions¹—though in another form in higher dimensions—and thus in particular not in R2.
Often the units are chosen so that c = 1, and the notation

   □u = ∆u − ∂²u/∂t²   (the so-called d'Alembertian)

is adopted.² The wave equation is then simply

   □u = 0   or   □u = F(r, t),

the latter in case a forcing function is required.
It is again interesting that the same wave equation is obtained for small amplitude vibrations in general, for vibrating strings and membranes, for longitudinal, transverse and torsional
vibrations of rods, for electromagnetic waves (as a consequence of Maxwell’s equations), for
oscillating electric circuits and mechanical systems, for vibrating columns of air and all kinds
of acoustic pressure waves, etc.
¹ As a consequence of the Huygens Principle.
² Sometimes with the signs reversed, or denoted by □².
hi i need some help here
it’s due next monday
how to use green’s first identity to show if a function
is harmonic on D(a surface), then the line intergral of
”normal derivative of” is equal to zero.
i totally have no idea,
what does harmonic mean?
and what is normal derivative?
thanks!
(A desperate request in Math Help Forum)
Appendix 1
PARTIAL INTEGRATION AND
GREEN’S IDENTITIES
A1.1 Partial Integration
There are two common integration techniques well drilled in basic courses of calculus, the change of variable method (essentially just our reparametrization) and partial integration.³ The familiar univariate partial integration

   ∫_a^b u′(x)v(x) dx = [u(x)v(x)]_a^b − ∫_a^b u(x)v′(x) dx

can be generalized⁴ using the Generalized Stokes Theorem and Cartan's magic formula as

   ∫_→A dΦ ∧ Ψ = ∫_{→∂s_M A} Φ ∧ Ψ + (−1)^{k+1} ∫_→A Φ ∧ dΨ,

where Φ is a k-form field.

³ There is actually a third one, probably not as well drilled as the other two: inverse integration ∫ f⁻¹(x) dx = x f⁻¹(x) − F(f⁻¹(x)), where F(y) = ∫ f(y) dy.
⁴ There is a generalization for indefinite integrals (potentials) as well but it does not seem to be that useful.

As noted in Section 5.3, for physical scalar fields f and g and vector fields F and G the basic wedge products are

      ∧      |   g   |  Φg–density  |  ΦG–work    |  ΦG–flux
   ----------+-------+--------------+-------------+---------------
      f      |  fg   | Φfg–density  |  ΦfG–work   |  ΦfG–flux
   ΦF–work   |   —   |      —       |  ΦF×G–flux  |  ΦF•G–density

and the exterior derivatives are

   df = Φ∇f–work,   dΦF–work = Φ∇×F–flux   and   dΦF–flux = Φ∇•F–density.

These can be combined into four partial integration formulas (the remaining two are uninteresting: 0 = 0 ± 0):
1. ∫_→A Φg∇f–work = ∫_{→∂s_M A} fg − ∫_→A Φf∇g–work

   I.e., for an oriented curve →C with end vertices r1 (initial) and r2 (terminal),

      ∫_→C g(r)∇f(r) • ds = f(r2)g(r2) − f(r1)g(r1) − ∫_→C f(r)∇g(r) • ds.

2. ∫_→A Φ∇f×G–flux = ∫_{→∂s_M A} ΦfG–work − ∫_→A Φf∇×G–flux

   I.e., for an oriented surface →S,

      ∫_→S (∇f(r) × G(r)) • dS = ∮_{→∂S} f(r)G(r) • ds − ∫_→S f(r)(∇ × G(r)) • dS.

3. ∫_→A ΦG•∇f–density = ∫_{→∂s_M A} ΦfG–flux − ∫_→A Φf∇•G–density

   I.e., for an oriented solid body →K,

      ∫_→K G(r) • ∇f(r) dr = ∮_{→∂K} f(r)G(r) • dS − ∫_→K f(r)∇ • G(r) dr.

4. ∫_→A ΦG•∇×F–density = ∫_{→∂s_M A} ΦF×G–flux + ∫_→A ΦF•∇×G–density

   I.e., for an oriented solid body →K,

      ∫_→K G(r) • (∇ × F(r)) dr = ∮_{→∂K} (F(r) × G(r)) • dS + ∫_→K F(r) • (∇ × G(r)) dr.
Of these formula 3. is the most commonly used.
When vector fields are computed numerically, say by the finite element method (FEM), integrals of the type

   ∫_K v(r) ∇ • (k(r)∇u(r)) dr

are ubiquitous. The usual assumptions then are that the functions k and v are continuously differentiable and that u is twice continuously differentiable. Partial integration formula 3. is now applicable when we choose f = v and G = k∇u (and denote by →S the boundary surface of K oriented by exterior normals):

   ∫_K v(r) ∇ • (k(r)∇u(r)) dr = −∫_K k(r)∇u(r) • ∇v(r) dr + ∮_→S v(r)k(r)∇u(r) • dS
   = −∫_K k(r)∇u(r) • ∇v(r) dr + ∫_S v(r)k(r) (∂u(r)/∂n) dS,

where

   ∂u(r)/∂n = ∇u(r) • n

is the normal derivative of the function u, i.e., its directional derivative in the direction of the exterior unit normal n.
Partial integration is significant in two ways when PDEs are solved using FEM. First, known boundary conditions render the right hand side surface integral known. Hence boundary conditions are explicitly included in the solution. Second, the integral

   ∫_K k(r)∇u(r) • ∇v(r) dr

only requires continuity of the function k and the partial derivatives of the functions u and v, if not even that!⁵ Using partial integration makes it possible to have much less stringent continuity requirements.
A1.2 Green’s Identities
Let us assume that the functions u and v in the previous section are twice continuously differentiable. Taking k(r) = 1 we get Green's first identity

   ∫_K v(r)∆u(r) dr = −∫_K ∇u(r) • ∇v(r) dr + ∫_S v(r) (∂u(r)/∂n) dS.

Exchanging the functions u and v we similarly get

   ∫_K u(r)∆v(r) dr = −∫_K ∇v(r) • ∇u(r) dr + ∫_S u(r) (∂v(r)/∂n) dS,

and subtracting these equalities on both sides we get Green's second identity

   ∫_K (v(r)∆u(r) − u(r)∆v(r)) dr = ∫_S (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS.
To get the third identity we need to first derive a property of harmonic functions. Consider
a scalar field u which is twice continuously differentiable in K and harmonic, i.e. ∆u = 0 (cf.
Section 6.4). Take a point r0 in the interior of the solid body K, and take v to be the Newton
potential
   v(r) = 1/‖r − r0‖.
Since v has a singularity (the point r0 ) in K, we specify another solid body K1 by removing
from K a small r0 -centered ball K2 of radius δ. The corresponding sphere S2 is oriented using
its exterior normal, that is the normal pointing into the body K1 .
⁵ In FEM the functions u and v would be so-called element functions, and the approximate solution is formulated as a linear combination of the element functions ui.
The body K1 has two boundary surfaces, the inner one and the outer one. In the body K1 both functions u and v are harmonic (it will be remembered that Newton's vector field is solenoidal). We now apply Green's second identity to the body K1:

   0 = ∫_S (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS − ∫_{S2} (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS.
The sign of the latter term is because of the orientation of S2 .
We continue by fixing K to be ‖r − r0‖ < R, i.e., an open r0-centered ball of radius R. The outer boundary surface S then is an r0-centered sphere of radius R. In this outer boundary surface the function v is the constant 1/R. The normal derivative in this surface in turn is

   ∂v(r)/∂n = ∇(1/‖r − r0‖) • n = ((r0 − r)/‖r − r0‖³) • ((r − r0)/‖r − r0‖) = −1/‖r − r0‖² = −1/R².
For any harmonic function

   ∫_S (∂u(r)/∂n) dS = ∮_→S ∇u(r) • dS = ∫_K ∆u(r) dr = 0

(by Gauß' Theorem), so that

   ∫_S (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS = (1/R²) ∫_S u(r) dS
   = 4π × (mean value of u in the sphere S).
The inner boundary surface is a sphere of radius δ, and the above is valid for it, too (remembering the orientation). Thus

   ∫_{S2} (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS = (1/δ²) ∫_{S2} u(r) dS
   = 4π × (mean value of u in the inner sphere).
The function u is continuous, so the limit of the mean value is u(r0), when δ → 0+.
Returning now to the earlier equality

   0 = ∫_S (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS − ∫_{S2} (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS,

the following remarkable property of harmonic functions follows:
Mean Value Theorem for Harmonic Functions. The value of a harmonic function in the
centre of a sphere is the mean value of the function in the sphere.
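A quick Monte Carlo illustration of the Mean Value Theorem (our sketch, with an arbitrarily chosen harmonic function):

    import numpy as np

    rng = np.random.default_rng(0)

    def u(p):
        x, y, z = p
        return x**2 - z**2 + 3*x*y*z   # harmonic: Laplacian = 2 - 2 + 0 = 0

    centre, R = np.array([0.5, -0.2, 1.0]), 0.7
    v = rng.normal(size=(200000, 3))               # uniform directions
    pts = centre + R * v / np.linalg.norm(v, axis=1, keepdims=True)
    print(u(pts.T).mean(), u(centre))              # nearly equal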
And finally, returning to our original body K and its boundary surface S,

   ∫_S (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS = ∫_{S2} (v(r) (∂u(r)/∂n) − u(r) (∂v(r)/∂n)) dS = 4π u(r0),

and we get Green's third identity

   (1/(4π)) ∫_S ((1/‖r − r0‖) (∂u(r)/∂n) − u(r) (∂/∂n)(1/‖r − r0‖)) dS = u(r0).
”There was a little girl,
Who had a little curl
Right in the middle of her forehead.
When she was good,
She was very good indeed,
But when she was bad she was horrid.”
(HENRY WADSWORTH LONGFELLOW: There Was a Little Girl)
Appendix 2
PULLBACKS AND CURVILINEAR
COORDINATES
A2.1 Local Coordinates
Parametrization of an n-dimensional manifold (open subset) of Rn as
   M : r = γ(u) = (γ1(u), . . . , γn(u))   (u ∈ U)
constitutes a so-called curvilinear coordinate system. To get the axes (curves) in the point γ(u1 )
just fix values of the parameters to u = u1 except for one.
The tangent space Tr (M) of M in the point r = γ(u) is the whole Rn . On the other hand it
is the column space of the derivative matrix γ ′ (u), see Section 2.6. Thus the columns of γ ′ (u)
give a basis for Rn which depends on the point r, and will change (in general) with the point.
This basis gives a so-called local coordinate system.
In what follows we only deal with the case where the local coordinate system is orthogonal, i.e., the columns of γ′(u) are mutually orthogonal and positively oriented. Let us denote by Q(u) the matrix which we get by normalizing the columns of γ′(u), i.e. dividing each column by its length. Then Q(u) is an orthogonal matrix, i.e. Q(u)⁻¹ = Q(u)ᵀ, and (for the positive orientation) det Q(u) = 1. We denote

   Λ(u) = diag( ‖∂γ(u)/∂u1‖, ‖∂γ(u)/∂u2‖, ‖∂γ(u)/∂u3‖ ).

Thus the local coordinate transform matrix is

   Q(u) = γ′(u)Λ(u)⁻¹.
Example. Parametrization of a 3-dimensional manifold of R3 using cylindrical coordinates has
the form
r = γ(r, φ, z) = (r cos φ, r sin φ, z) ((r, φ, z) ∈ U),
where U is a parameter domain. Thus

   γ′(r, φ, z) =
      [ cos φ   −r sin φ   0 ]
      [ sin φ    r cos φ   0 ] ,
      [   0         0      1 ]

   Λ(r, φ, z) = diag(1, r, 1)   and   Q(r, φ, z) =
      [ cos φ   −sin φ   0 ]
      [ sin φ    cos φ   0 ] .
      [   0        0     1 ]

Apparently Q(r, φ, z) is orthogonal and det Q(r, φ, z) = 1. The local coordinate vectors are then

   er = (cos φ, sin φ, 0),   eφ = (−sin φ, cos φ, 0)   and   ez = (0, 0, 1).
Example. Parametrization of a 3-dimensional manifold of R3 using spherical coordinates has
the form
   r = γ(ρ, θ, φ) = (ρ sin θ cos φ, ρ sin θ sin φ, ρ cos θ)   ((ρ, θ, φ) ∈ U),

where U is the parameter domain. Thus

   γ′(ρ, θ, φ) =
      [ sin θ cos φ   ρ cos θ cos φ   −ρ sin θ sin φ ]
      [ sin θ sin φ   ρ cos θ sin φ    ρ sin θ cos φ ] ,
      [    cos θ         −ρ sin θ            0       ]

   Λ(ρ, θ, φ) = diag(1, ρ, ρ sin θ)   and   Q(ρ, θ, φ) =
      [ sin θ cos φ   cos θ cos φ   −sin φ ]
      [ sin θ sin φ   cos θ sin φ    cos φ ] .
      [    cos θ         −sin θ        0   ]

Here, too, Q(ρ, θ, φ) is orthogonal and det Q(ρ, θ, φ) = 1 (check!). The local coordinate vectors (see the figure on the right) are

   eρ = (sin θ cos φ, sin θ sin φ, cos θ),   eθ = (cos θ cos φ, cos θ sin φ, −sin θ)   and   eφ = (−sin φ, cos φ, 0).
In addition to representing a vector field F as a function of the curvilinear coordinates, i.e. in the form F(γ(u)), often the field should be given in the local coordinates in the point γ(u) (or in the tangent space in the point). Columns of the matrix Q(u) are the new coordinate vectors in the old coordinates. The required representation is thus (cf. Section 1.3)

   G(u) = Q(u)ᵀ F(γ(u)).

For a scalar field f the representation is simply

   g(u) = f(γ(u)).

The gradient and the Laplacian of a scalar field f in curvilinear local coordinates are thus

   Q(u)ᵀ ∇f(γ(u))   and   ∆f(γ(u)),

and the curl and the divergence of a vector field F are

   Q(u)ᵀ ∇ × F(γ(u))   and   ∇ • F(γ(u)).
A2.2 Pullbacks
The correct tool for dealing with curvilinear coordinates is the so-called pullback. If Φ is a
k-form field of Rn and δ : Rm → Rn is a continuously differentiable function, then the k-form
field
   (δ*Φ)(u; r1, . . . , rk) = Φ(δ(u); δ′(u)r1, . . . , δ′(u)rk)
of Rm is the so-called pullback form field of Φ with respect to the function δ. The notation δ ∗ Φ
is conventional. Note in particular that the δ ′ (u)ri ’s are n-dimensional vectors, as they should
be. Pullbacks make it easy to define new form fields, more or less as Theorem 2.5 was used
to define manifolds. However, to make the pullback of a continuously differentiable form field
continuously differentiable, the function δ should be twice continuously differentiable.
Pullbacks are clearly left-distributive over addition, i.e.
δ ∗ (Φ + Ψ) = δ ∗ Φ + δ ∗ Ψ,
but they have other nice properties:
Theorem A2.1. Pullbacks are left-distributive over wedge product and they commute with exterior differentiation, i.e.
δ ∗ (Φ ∧ Ψ) = δ ∗ Φ ∧ δ ∗ Ψ
and d(δ ∗ Φ) = δ ∗ (dΦ).
Proof. By Theorem 4.2, a form field Φ can be expanded as a combination of elementary form fields as

   Φ(p; r1, . . . , rk) = Σ_{1≤j1<j2<···<jk≤n} aj1,j2,...,jk(p) (dxj1 ∧ dxj2 ∧ · · · ∧ dxjk)(r1, . . . , rk).

It is seen then that it suffices to prove the properties for form fields of the form

   a(p) dx1 ∧ dx2 ∧ · · · ∧ dxk.

Left distributivity over the wedge product of such form fields follows fairly directly from the determinant definition of elementary forms (we leave the details to the reader but cf. also the proof of Theorem 4.4).

We use induction on the degree k to prove commutativity of pullbacks and exterior differentiation. If k = 0 then (δ*a)(u) is the composite function a(δ(u)). Using the chain rule and the definition of pullbacks we get

   d(δ*a)(u; r) = a′(δ(u)) δ′(u)r = (δ*(da))(u; r),

so the commutativity holds true for k = 0.

Assume then that the theorem holds for degrees k ≤ m (induction hypothesis) and consider the case of degree k = m + 1. We may write the m+1-form field as Φ ∧ dxm+1, where Φ is an m-form field. Using Cartan's Magic Formula we see first that

   d(Φ ∧ dxm+1) = dΦ ∧ dxm+1 + (−1)^m Φ ∧ ddxm+1 = dΦ ∧ dxm+1 + 0 = dΦ ∧ dxm+1.

By the induction hypothesis, d(δ*Φ) = δ*(dΦ). Again applying the Magic Formula we get

   d(δ*(Φ ∧ dxm+1)) = d(δ*Φ ∧ δ*dxm+1) = d(δ*Φ) ∧ δ*dxm+1 + (−1)^m δ*Φ ∧ d(δ*dxm+1)
   = δ*(dΦ) ∧ δ*dxm+1 + (−1)^m δ*Φ ∧ dd(δ*xm+1)
   = δ*(dΦ) ∧ δ*dxm+1 + 0 = δ*(dΦ ∧ dxm+1) = δ*(d(Φ ∧ dxm+1)).
As an example of the many uses of pullbacks we take a reduction of the Generalized Stokes
Theorem.
Example. For the notation we refer to the Generalized Stokes Theorem in Section 5.4. Assume that the manifold M has the (relaxed) parametrization p = γ(u) (u ∈ U), and further that the bounded region with boundary A has the (relaxed) parametrization p = γ(u) (u ∈ V).

Let us first consider orientations. The manifold M is oriented by a k-form field Ψ via the sign of Ψ(p; t1, . . . , tk), where t1, . . . , tk are in the tangent space T_p(M), as explained in Section 4.3. Writing t_i = γ′(u)v_i (i = 1, . . . , k) we get the corresponding orientation for U (and V) by the pullback k-form field

  Ψ(γ(u); γ′(u)v1, . . . , γ′(u)vk) = (γ*Ψ)(u; v1, . . . , vk).

The smooth boundary ∂^s_M A is defined locally, near the point p0 = γ(u0), by a continuously differentiable function g_{p0} via the sign of g_{p0}(p): + (p inside A), − (p outside A), 0 (p in the boundary), see Section 5.1. For the orientation of the smooth boundary, an exterior vector t_exterior(p), satisfying g′_{p0}(p)t_exterior(p) < 0, is chosen as t1, and the (k − 1)-form field

  Ψ(p; t_exterior(p), t2, . . . , tk)

is used. Returning to the parameter domain, for orienting ∂^s_U V the function g_{p0}(p) is replaced by h_{u0}(u) = g_{p0}(γ(u)). Writing t_exterior(p) = γ′(u)v_exterior(u) we see first that

  h′_{u0}(u)v_exterior(u) = g′_{p0}(γ(u)) γ′(u)v_exterior(u) = g′_{p0}(p)t_exterior(p) < 0,

i.e. v_exterior(u) will indeed be an exterior vector, and second that ∂^s_U V is oriented by the pullback (k − 1)-form field

  Ψ(γ(u); γ′(u)v_exterior(u), γ′(u)v2, . . . , γ′(u)vk) = (γ*Ψ)(u; v_exterior(u), v2, . . . , vk).

Thus all four orientations, that of M, of ∂^s_M A, of U, and of ∂^s_U V, agree.
Second we consider the integrals in the theorem. For any k-form field Ξ defined on M we have

  ∫_{A⃗} Ξ = ∫_V Ξ(γ(u); γ′(u)) du = ∫_V (γ*Ξ)(u; I_k) du = ∫_{V⃗} γ*Ξ

(we used here the identity parametrization for U, for which the derivative is the k × k identity matrix I_k). Assuming γ is twice continuously differentiable and applying this to dΦ, we thus get, by Theorem A2.1,

  ∫_{A⃗} dΦ = ∫_{V⃗} γ*(dΦ) = ∫_{V⃗} d(γ*Φ).
Let then the smooth boundary ∂^s_U V have the (relaxed) parametrization

  u = ε(s) (s ∈ S),

whence ∂^s_M A has the (relaxed) parametrization p = γ(ε(s)) (s ∈ S). So

  ∫_{∂A⃗} Φ = ∫_S Φ(γ(ε(s)); γ′(ε(s))ε′(s)) ds = ∫_S (γ*Φ)(ε(s); ε′(s)) ds = ∫_{∂V⃗} γ*Φ.
This means that we can prove the Generalized Stokes Theorem by first taking the pullback to the parameter domain and then proving the theorem there, which is very much easier (the theorem becomes essentially the Generalized Divergence Theorem). This, however, works only under the annoying assumption that γ is twice continuously differentiable.
A2.3 Transforming Derivatives of Fields
Let us then return to the curvilinear coordinates and consider the basic physical form fields and their derivatives, assuming, as required, that γ is twice continuously differentiable. We first consider the gradient. By Theorem A2.1,

  d(γ*f)(u; r) = (γ*(df))(u; r) = (γ*Φ_{∇f–work})(u; r) = ∇f(γ(u)) • γ′(u)r = γ′(u)^T ∇f(γ(u)) • r.

On the other hand,

  (γ*f)(u) = f(γ(u))  and  d(γ*f)(u; r) = ∇_u f(γ(u)) • r,

where ∇_u operates on u, and so

  γ′(u)^T ∇f(γ(u)) = ∇_u f(γ(u)).

Note that here γ only needs to be once continuously differentiable. Normalizing γ′(u) we then get the gradient in the local coordinates:

  Q(u)^T ∇f(γ(u)) = Λ(u)^{−1} ∇_u g(u).
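A quick symbolic check of this formula in cylindrical coordinates (a sketch using SymPy; the test field f is our own choice):

  import sympy as sp

  r, phi, z = sp.symbols('r phi z', positive=True)
  X, Y, Z = sp.symbols('X Y Z')
  gamma = {X: r*sp.cos(phi), Y: r*sp.sin(phi), Z: z}

  f = X**2*Y + sp.sin(Z)                     # an arbitrary test scalar field
  g = f.subs(gamma)                          # g(u) = f(gamma(u))

  grad_f = sp.Matrix([f.diff(v) for v in (X, Y, Z)]).subs(gamma)
  Q = sp.Matrix([[sp.cos(phi), -sp.sin(phi), 0],
                 [sp.sin(phi),  sp.cos(phi), 0],
                 [0,            0,           1]])
  Lam = sp.diag(1, r, 1)                     # Lambda(u) for cylindrical coordinates

  lhs = Q.T * grad_f                         # Q(u)^T grad f(gamma(u))
  rhs = Lam.inv() * sp.Matrix([g.diff(v) for v in (r, phi, z)])
  print(sp.simplify(lhs - rhs))              # Matrix([[0], [0], [0]])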
Let us take next the curl. By Theorem A2.1,

  d(γ*Φ_{F–work})(u; r1, r2) = (γ*(dΦ_{F–work}))(u; r1, r2) = (γ*Φ_{∇×F–flux})(u; r1, r2)
                             = det(∇×F(γ(u)), γ′(u)r1, γ′(u)r2)
                             = det(γ′(u)) det(γ′(u)^{−1} ∇×F(γ(u)), r1, r2)
                             = det(γ′(u)) (γ′(u)^{−1} ∇×F(γ(u))) • r1 × r2.

On the other hand,

  (γ*Φ_{F–work})(u; r) = F(γ(u)) • γ′(u)r = γ′(u)^T F(γ(u)) • r = Φ_{γ′(u)^T F(γ(u))–work}(u; r),

and

  d(γ*Φ_{F–work})(u; r1, r2) = Φ_{∇_u×(γ′(u)^T F(γ(u)))–flux}(u; r1, r2) = ∇_u × (γ′(u)^T F(γ(u))) • r1 × r2,

and comparing with the above,

  γ′(u)^{−1} ∇×F(γ(u)) = (1/det γ′(u)) ∇_u × (γ′(u)^T F(γ(u))).

Normalizing γ′(u) we get the desired curl given in the local coordinates:

  Q(u)^T ∇×F(γ(u)) = (Λ(u)/det γ′(u)) ∇_u × (Λ(u)G(u)).
The divergence is treated similarly. By Theorem A2.1,

  d(γ*Φ_{F–flux})(u; r1, r2, r3) = (γ*(dΦ_{F–flux}))(u; r1, r2, r3) = (γ*Φ_{∇•F–density})(u; r1, r2, r3)
                                 = ∇•F(γ(u)) det(γ′(u)r1, γ′(u)r2, γ′(u)r3)
                                 = det(γ′(u)) ∇•F(γ(u)) det(r1, r2, r3).

On the other hand,

  (γ*Φ_{F–flux})(u; r1, r2) = det(F(γ(u)), γ′(u)r1, γ′(u)r2)
                            = det(γ′(u)) det(γ′(u)^{−1} F(γ(u)), r1, r2)
                            = det(γ′(u)) (γ′(u)^{−1} F(γ(u))) • r1 × r2
                            = Φ_{det(γ′(u))(γ′(u)^{−1}F(γ(u)))–flux}(u; r1, r2)

and

  d(γ*Φ_{F–flux})(u; r1, r2, r3) = Φ_{∇_u•(det(γ′(u))(γ′(u)^{−1}F(γ(u))))–density}(u; r1, r2, r3)
                                 = ∇_u • (det(γ′(u)) γ′(u)^{−1} F(γ(u))) det(r1, r2, r3),

and thus

  ∇•F(γ(u)) = (1/det γ′(u)) ∇_u • (det(γ′(u)) γ′(u)^{−1} F(γ(u))).

The divergence in the local coordinates is then obtained by normalizing γ′(u):

  ∇•F(γ(u)) = (1/det γ′(u)) ∇_u • (det(γ′(u)) Λ(u)^{−1} G(u)).
Combining the representations of the gradient and the divergence we also get the Laplacian in the local coordinates:

  ∆f(γ(u)) = (1/det γ′(u)) ∇_u • (det(γ′(u)) Λ(u)^{−2} ∇_u g(u)).

Note that this expression contains both second order and first order partial derivatives.
A2.4 Derivatives in Cylindrical and Spherical Coordinates
Application of the above formulas is basically easy but somewhat tedious. A symbolic computation program, e.g. Maple, comes in handy.

We collect here the results for cylindrical and spherical coordinates (clearly the corresponding parametrizations are twice continuously differentiable). First the cylindrical coordinates, where the local basis vectors are e_r, e_φ and e_z. Writing

  G = G_r e_r + G_φ e_φ + G_z e_z

we have
  ∇f = ∂g/∂r e_r + (1/r) ∂g/∂φ e_φ + ∂g/∂z e_z ,

  ∇•F = (1/r) G_r + ∂G_r/∂r + (1/r) ∂G_φ/∂φ + ∂G_z/∂z ,

  ∇×F = ( (1/r) ∂G_z/∂φ − ∂G_φ/∂z ) e_r + ( ∂G_r/∂z − ∂G_z/∂r ) e_φ + (1/r) ( G_φ + r ∂G_φ/∂r − ∂G_r/∂φ ) e_z ,

  ∆f = (1/r) ∂g/∂r + ∂²g/∂r² + (1/r²) ∂²g/∂φ² + ∂²g/∂z² .

These representations of the derivative operations are very handy for axially symmetric fields.
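These formulas are easy to sanity-check symbolically. For instance, the cylindrical Laplacian can be compared with the Cartesian one (a sketch using SymPy; the test function is our own choice):

  import sympy as sp

  r, phi, z = sp.symbols('r phi z', positive=True)
  X, Y, Z = sp.symbols('X Y Z')
  gamma = {X: r*sp.cos(phi), Y: r*sp.sin(phi), Z: z}

  f = X**3 + X*Y*Z                           # an arbitrary test scalar field
  g = f.subs(gamma)                          # g = f in cylindrical coordinates

  lap_cartesian = sum(f.diff(v, 2) for v in (X, Y, Z)).subs(gamma)
  lap_cylindrical = (g.diff(r)/r + g.diff(r, 2)
                     + g.diff(phi, 2)/r**2 + g.diff(z, 2))
  print(sp.simplify(lap_cartesian - lap_cylindrical))   # 0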
The spherical coordinates in turn are very handy for radially symmetric fields. The local basis vectors are e_ρ, e_θ and e_φ. Writing

  G = G_ρ e_ρ + G_θ e_θ + G_φ e_φ

we have
  ∇f = ∂g/∂ρ e_ρ + (1/ρ) ∂g/∂θ e_θ + (1/(ρ sin θ)) ∂g/∂φ e_φ ,

  ∇•F = (2/ρ) G_ρ + ∂G_ρ/∂ρ + (1/(ρ tan θ)) G_θ + (1/ρ) ∂G_θ/∂θ + (1/(ρ sin θ)) ∂G_φ/∂φ ,

  ∇×F = (1/(ρ sin θ)) ( cos θ G_φ + sin θ ∂G_φ/∂θ − ∂G_θ/∂φ ) e_ρ
      + (1/ρ) ( (1/sin θ) ∂G_ρ/∂φ − G_φ − ρ ∂G_φ/∂ρ ) e_θ
      + (1/ρ) ( G_θ + ρ ∂G_θ/∂ρ − ∂G_ρ/∂θ ) e_φ ,

  ∆f = (2/ρ) ∂g/∂ρ + ∂²g/∂ρ² + (1/(ρ² tan θ)) ∂g/∂θ + (1/ρ²) ∂²g/∂θ² + (1/(ρ² sin²θ)) ∂²g/∂φ² .
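For instance (a remark of ours, not in the original notes), for a radially symmetric field g = g(ρ) the Laplacian collapses to

  ∆f = (2/ρ) g′(ρ) + g″(ρ) = (1/ρ) (ρ g(ρ))″ ,

from which one immediately reads off the radially symmetric harmonic functions g(ρ) = a + b/ρ.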
”For Angling may be said to be so like the Mathematics that it can never be fully learned...”
(IZAAK WALTON: The Compleat Angler)
Appendix 3
ANGLE
A3.1 Angle Form Fields and Angle Potentials
The flux form field (an (n − 1)-form field)

  Φ_{F–flux}(p; r1, . . . , r_{n−1}) = ‖p‖^{−n} det(p^T, r1, . . . , r_{n−1})

of the vector field F(p) = ‖p‖^{−n} p^T in Rⁿ is called the angle form field, denoted by Φ_{n–angle}. Its integral over an (n − 1)-dimensional oriented manifold (or region with boundary) M⃗ of Rⁿ is called the angle spanned by M⃗ from the origin. (And it is assumed that the origin itself is not in M⃗.)

Since Φ_{n–angle} is the flux form field of the vector field F(p) = ‖p‖^{−n} p^T, its exterior derivative is the density form field of the divergence ∇•F, and since this divergence is zero (check!), Poincaré's Lemma says that the angle form field Φ_{n–angle} has a potential Ψ_{n–angle} (an (n − 2)-form field) in star-shaped regions, a so-called angle potential.
By the (Generalized) Stokes Theorem, the angle spanned by the (n − 1)-dimensional oriented manifold M⃗ is then

  ∫_{M⃗} Φ_{n–angle} = ∫_{M⃗} dΨ_{n–angle} = ∫_{∂M⃗} Ψ_{n–angle}.

The angle is thus determined by the boundary of the manifold, and can be computed by integration over it.
The case n = 1 is included here, although it is very simple and subject to interpretation. Then

  Φ_{1–angle}(p) = p/|p| = signum(p),

and an oriented 0-dimensional manifold is a finite set M⃗ = {p1, . . . , pm} of oriented points. The orientation of a point p_i is denoted by ω(p_i). When chosen as the sign of p_i it contributes positively to the angle, negatively otherwise. The (net) angle spanned by M⃗ is then

  ∫_{M⃗} Φ_{1–angle} = Σ_{i=1}^{m} ω(p_i) signum(p_i).

Note that if the orientation of each point p_i is its sign, then the angle is m.
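For instance (an illustration of ours): for M⃗ = {−2, 1, 3} with orientations ω(−2) = +1, ω(1) = −1 and ω(3) = +1 the net angle is (+1)(−1) + (−1)(+1) + (+1)(+1) = −1.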
A3.2 Planar Angles
The origins of the names in the previous section become apparent when we take a look at the familiar case n = 2, the planar angle. Then

  Φ_{2–angle}(p; r1) = ‖p‖^{−2} det(p^T, r1) = (x y1 − x1 y)/(x² + y²),

when we denote p = (x, y) and r1 = (x1, y1)^T, i.e.

  Φ_{2–angle} = (−y/(x² + y²)) dx + (x/(x² + y²)) dy.

This is the exterior derivative of the function (0-form field) atan in the star-shaped region which we get by removing from R² the positive x-axis and the origin, cf. the example in Section 2.5. Even though atan itself is not continuous on the positive x-axis—and not even defined at the origin—its derivative atan′ is continuous everywhere except at the origin.
Let us integrate Φ_{2–angle} over the oriented curve

  C⃗ : r = γ(u) (u ∈ U)

(a relaxed parametrization). We assume in addition that the curve does not contain the origin. Then

  ∫_{C⃗} Φ_{2–angle} = ∫_U atan′(γ(u)) γ′(u) du = ∫_U d/du atan(γ(u)) du.
This is the net angle that a point moving along the curve C scans as seen from the origin, counted as positive in the positive direction of rotation and negative in the negative direction (see the figure on the right). Note that the discontinuity of atan does not matter, because when moving over the positive x-axis the value of atan changes by 2π, but its derivative is continuous. The star-shaped region indicated above could be replaced by any region we get by removing from R² a ray starting from the origin, making the corresponding changes in the definition of atan.

The angle potential in R² is a scalar potential, essentially atan or any function obtained from it by adding a constant. The boundary is formed of separate points, and the net angle spanned is thus in principle always obtained by addition/subtraction.
Example. Let us compute the angle spanned by the curve

  C⃗ : r = γ(u) = (u cos(cos u), u sin(cos u)) (1 ≤ u ≤ 5),

see the figure on the right (Maple). Then

  atan′(γ(u)) γ′(u) = −(u sin(cos u))(cos(cos u) + u sin(cos u) sin u)/u² + (u cos(cos u))(sin(cos u) − u cos(cos u) sin u)/u² = −sin u

and the net angle is

  ∫₁⁵ (−sin u) du = cos 5 − cos 1 ≈ −0.26 rad,

as it should be.
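The result is easy to confirm by integrating Φ_{2–angle} numerically along the curve (a sketch, NumPy assumed):

  import numpy as np

  u = np.linspace(1.0, 5.0, 200001)
  x, y = u*np.cos(np.cos(u)), u*np.sin(np.cos(u))
  # derivatives of the components of gamma, by the chain rule
  dx = np.cos(np.cos(u)) + u*np.sin(np.cos(u))*np.sin(u)
  dy = np.sin(np.cos(u)) - u*np.cos(np.cos(u))*np.sin(u)
  integrand = (-y*dx + x*dy)/(x**2 + y**2)   # atan'(gamma(u)) gamma'(u)
  angle = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(u))
  print(angle)                               # ~ -0.2566 = cos 5 - cos 1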
Note. For those familiar with complex analysis this brings to mind the complex logarithm and
its derivative.
A3.3 Solid Angles
An angle potential in R³ is the work form field of a vector potential of the Newton vector field

  F(p) = p^T/‖p‖³,

and Φ_{3–angle} is the flux form field of F. The angle here is the so-called solid angle.
Geometrically the solid angle spanned by the surface S⃗ seen from the origin is the area of the part of the origin-centered unit sphere P⃗ that exactly covers S⃗ (see the figure on the right). The general situation is however more complex. We must allow the possibility that some part of the sphere is seen ”twice” or even more times. We also count as positive those points p of the surface where the normal n points away from the origin, i.e. p • n > 0, and as negative those points where the normal points towards the origin, i.e. p • n < 0. The whole space obviously spans the solid angle 4π (the area of a unit sphere). The unit of solid angle is the steradian (sr).
Mathematically the solid angle seen from the origin spanned by the oriented 2-dimensional manifold

  S⃗ : r = γ(u) (u ∈ U)

(an oriented surface in a relaxed parametrization) of R³ is

  Ω(S⃗) = ∫_U δ(u) • ( ∂δ(u)/∂u1 × ∂δ(u)/∂u2 ) du,

where

  δ(u) = γ(u)/‖γ(u)‖.

Here we again assume that S⃗ does not contain the origin. Note that the values of δ(u) are on the unit sphere x² + y² + z² = 1.
On the other hand,

  P⃗ : r = δ(u) (u ∈ U)

is not (necessarily) a parametrized manifold, even in a relaxed sense. A part of the unit sphere may appear several times, possibly with opposite orientations, depending on how many times a ray starting from the origin intersects the surface S⃗ and in which direction. Generally, however, P⃗ can be considered as an oriented trajectory manifold in a relaxed parametrization.

The unit normal of the origin-centered unit sphere at the point δ(u) is either δ(u)^T (the exterior normal) or −δ(u)^T (the interior normal). Thus

  ∂δ(u)/∂u1 × ∂δ(u)/∂u2 = ± ‖ ∂δ(u)/∂u1 × ∂δ(u)/∂u2 ‖ δ(u)^T,

where the sign is dictated by the particular orientation, and further

  δ(u) • ( ∂δ(u)/∂u1 × ∂δ(u)/∂u2 ) = ± ‖ ∂δ(u)/∂u1 × ∂δ(u)/∂u2 ‖.

The thus defined Ω(S⃗) is then exactly the solid angle corresponding to the geometrical idea. That indeed

  Ω(S⃗) = ∫_{S⃗} Φ_{3–angle}

follows as a special case of the general result in the next section (but can be verified quite easily separately, too).
Example. Using the result in the example in Section 6.3 we get a vector potential U for the vector field

  F(p) = p/‖p‖³

in the star-shaped region that we get by removing from the space one of the rays −u p0 (u ≥ 0), where p0 ≠ 0, as

  U(p) = (p0 × p) / ( ‖p0‖‖p‖² + (p0 • p)‖p‖ )

(we chose p1 = 0).⁶ The corresponding angle potential is the 1-form field

  Ψ_{3–angle}(p; r) = U(p) • r = (p0 × p • r) / ( ‖p0‖‖p‖² + (p0 • p)‖p‖ ).

The solid angle spanned by the oriented surface S⃗ is thus obtained by Stokes' Theorem by integrating over the correspondingly oriented boundary curve C⃗ of S⃗:

  Ω(S⃗) = ∮_{C⃗} U(p) • ds = ∮_{C⃗} (p0 × p) / ( ‖p0‖‖p‖² + (p0 • p)‖p‖ ) • ds.

We can choose the ray (the Dirac string) removed from the space to be the one pointing opposite to the direction of sight, i.e. p0 is in the direction of sight.

⁶ For simplicity we again use the same notation for points and vectors.
As a concrete example, let us compute the solid angle spanned by the ellipse

  C⃗ : p = γ(u) = (cos u, 2 sin u, 2) (0 ≤ u ≤ 2π)

—or any surface having it as a boundary curve—from the origin. We choose p0 = (0, 0, 1); see the figure on the right (Maple). Then

  p0 × γ(u) = (−2 sin u, cos u, 0)^T  and  γ′(u) = (−sin u, 2 cos u, 0)^T

and the solid angle is

  Ω = ∫₀^{2π} 2 du / ( cos²u + 4 sin²u + 4 + 2 √(cos²u + 4 sin²u + 4) )
    = ∫₀^{2π} 2 du / ( 3 sin²u + 5 + 2 √(3 sin²u + 5) ) ≈ 1.1 sr.

(The integral here is a so-called elliptic integral and cannot be expressed in terms of elementary functions. Numerical calculation is of course possible and easy.)

The part of the space not spanned by the ellipse then determines the solid angle

  4π − Ω ≈ 11.5 sr.

Note how the ray removed from the space—here the negative z-axis—deftly chooses which one of these two conjugate solid angles, Ω or 4π − Ω, is computed!
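Both numbers are easy to confirm numerically (a sketch, NumPy assumed):

  import numpy as np

  u = np.linspace(0.0, 2*np.pi, 400001)
  s = 3*np.sin(u)**2 + 5                    # ||gamma(u)||^2
  integrand = 2.0/(s + 2.0*np.sqrt(s))
  Omega = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(u))
  print(Omega)                              # ~ 1.1 sr
  print(4*np.pi - Omega)                    # ~ 11.5 sr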
A3.4 Angles in Rn
Mathematically the angle ”seen” from the origin spanned by the oriented (n − 1)-dimensional manifold

  M⃗ : r = γ(u) (u ∈ U)

(an oriented hypersurface in a relaxed parametrization) of Rⁿ is

  Ω(M⃗) = ± ∫_U √( det( δ′(u)^T δ′(u) ) ) du,

where

  δ(u) = γ(u)/‖γ(u)‖

and the sign is chosen by the particular local orientation. Here we again assume that M⃗ does not contain the origin. Thus Ω(M⃗) is the volume of the part P of the unit hypersphere ‖p‖ = 1 of Rⁿ exactly covering M as ”seen” from the origin. Here, too,

  P⃗ : r = δ(u) (u ∈ U)

is not (necessarily) a parametrized manifold, even in a relaxed sense, but it may be considered as an oriented trajectory manifold in a relaxed parametrization.
Writing first

  δ(u) = γ(u) / (γ(u) • γ(u))^{1/2}

it is straightforward to verify (check!) that

  δ′(u) = (1/‖γ(u)‖) γ′(u) − (1/‖γ(u)‖³) γ(u)^T γ(u) γ′(u)

and further (check this, too!) that

  δ′(u)^T δ′(u) = (1/‖γ(u)‖²) γ′(u)^T γ′(u) − (1/‖γ(u)‖⁴) γ′(u)^T γ(u)^T γ(u) γ′(u).

Thus we have the formula

  Ω(M⃗) = ± ∫_U (1/‖γ(u)‖^{n−1}) √( det( γ′(u)^T γ′(u) − (1/‖γ(u)‖²) γ′(u)^T γ(u)^T γ(u) γ′(u) ) ) du.
The Gauß elimination formula for block matrices is (check!)

  [ A  B ]   [ A  O ] [ I  O            ] [ I  A^{−1}B ]
  [ C  D ] = [ C  I ] [ O  D − CA^{−1}B ] [ O  I       ]

for zero matrices O and identity matrices I of appropriate sizes, provided that A is a nonsingular square matrix. Taking determinants we immediately have the so-called Schur identity⁷

  det [ A  B ] = det(A) det(D − CA^{−1}B),
      [ C  D ]
which we will apply to the matrix

  ( γ(u)^T  γ′(u) )^T ( γ(u)^T  γ′(u) ) = [ ‖γ(u)‖²          γ(u)γ′(u)     ]
                                          [ γ′(u)^T γ(u)^T   γ′(u)^T γ′(u) ],

taking A to be the 1 × 1 matrix ‖γ(u)‖². The determinant of this matrix is then

  det(γ(u), γ′(u))² = ‖γ(u)‖² det( γ′(u)^T γ′(u) − (1/‖γ(u)‖²) γ′(u)^T γ(u)^T γ(u) γ′(u) ).
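Before moving on, the block factorization and the Schur identity above are easy to test numerically (a sketch, NumPy assumed; the random blocks are our own test data):

  import numpy as np

  rng = np.random.default_rng(1)
  A = rng.normal(size=(2, 2))               # almost surely nonsingular
  B = rng.normal(size=(2, 3))
  C = rng.normal(size=(3, 2))
  D = rng.normal(size=(3, 3))
  M = np.block([[A, B], [C, D]])
  schur = D - C @ np.linalg.inv(A) @ B      # D - C A^{-1} B
  print(np.isclose(np.linalg.det(M),
                   np.linalg.det(A)*np.linalg.det(schur)))   # True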
Thus indeed

  Ω(M⃗) = ∫_U (1/‖γ(u)‖ⁿ) det(γ(u), γ′(u)) du = ∫_{M⃗} Φ_{n–angle}.

Note that we did not indicate the sign in the integrand here. The sign choice is now automatic, since the sign of the determinant tells whether the normal points away from the origin (+) or towards it (−). We leave it as an exercise for the reader to verify the not so difficult fact that det(γ(u), γ′(u)) is a dot product of γ(u) and a nonzero normal vector. The determinant also takes care of situations where the normal and γ(u) are perpendicular and the contribution to the angle, and to the determinant, is zero. Indeed, large parts of the hypersurface M may well be parts of conical hypersurfaces having their apex at the origin.
The full n-dimensional angle is the angle spanned by an origin-centered hypersphere, i.e., it is the volume of the unit hypersphere. A somewhat difficult induction proof shows that it is

  2π^{n/2} / Γ(n/2),

where Γ is the gamma function. Since Γ(1/2) = √π, Γ(1) = 1 and Γ(3/2) = √π/2, the formula gives the correct values 2π for n = 2 and 4π for n = 3, but also the value 2 of the full 1-dimensional angle.
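A quick numerical sanity check of the formula (a sketch in Python):

  import math

  for n in (1, 2, 3, 4):
      full_angle = 2*math.pi**(n/2)/math.gamma(n/2)
      print(n, full_angle)   # 2.0, then 2*pi, 4*pi and 2*pi**2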
⁷ Here D − CA^{−1}B is the so-called Schur complement of D. Schur complements are very useful in many other contexts, too; see e.g. ZHANG, F. (Ed.): The Schur Complement and Its Applications. Springer (2005).
Index
active variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
addition of vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
advection equation . . . . . . . . . . . . . . . . . . . . . . . . . 102
affine approximation . . . . . . . . . . . . . . . . . . . . . . . . 30
affine subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Ampère’s law . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75,81
angle form field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
angle potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,118
atan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25,92,119
anticommutativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
antisymmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
atlas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
ball . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
biharmonic equation . . . . . . . . . . . . . . . . . . . . . . . . 102
bilinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Biot–Savart law . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Black–Scholes equation . . . . . . . . . . . . . . . . . . . . 102
boundary . . . . . . . . . . . . . . . . . . . . . . . . . . 1,43,63,115
Cartan’s Magic Formula . . . . . . . . . . . . . . . . . 72,107
chain rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
charge conservation law . . . . . . . . . . . . . . . . . . . . . 76
chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
closed form field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
closed set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
closure of a set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
connected manifold . . . . . . . . . . . . . . . . . . . . . . . . . 58
conservative vector field . . . . . . . . . . . . . . . . . . . . . 90
continuity law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
coordinate function . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
coordinate point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
coordinate transform . . . . . . . . . . . . . . . . . . . . . . . . 7,9
coordinate vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
coordinate-freeness . . . . . . . . . . . . . . . . . . . . . . . 12,22
cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
cross product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4,7
curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10,74,115
current density form field . . . . . . . . . . . . . . . . . . . . 76
curvilinear coordinate system . . . . . . . . . . . . . . . 111
cyclical symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
cylindrical coordinates . . . . . . . . . . . . . . . . . . 111,117
d’Alembertian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
density form field . . . . . . . . . . . . . . . . . . . . . . . . 59,73
derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9,70
difference of vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 3
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
differential form . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
diffusion equation . . . . . . . . . . . . . . . . . . . . . . . . . . 103
dipole approximation . . . . . . . . . . . . . . . . . . . . . . . . 98
Dirac’s string . . . . . . . . . . . . . . . . . . . . . . . . . . . 93,121
direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
directional derivative . . . . . . . . . . . . . . . . . . . . . . . . 10
distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1,4
Divergence Theorem . . . . . . . . . . . . . . . . . . . . . . . . 78
divergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 10,74,116
division by scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
dot product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4,6
double-curl expansion . . . . . . . . . . . . . . . . . . . . 11,97
Dym’s equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
dynamical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
electric dipole moment . . . . . . . . . . . . . . . . . . . . . . 99
elementary form . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
elliptic PDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
exact form field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
exception set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
explicit representation . . . . . . . . . . . . . . . . . . . . . . . 21
extended parameter domain . . . . . . . . . . . . . . . . . . 43
exterior derivative . . . . . . . . . . . . . . . . . . . . . 69,70,83
exterior vector . . . . . . . . . . . . . . . . . . . . . . . . . . 66,114
face . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Faraday’s form field . . . . . . . . . . . . . . . . . . . . . . 61,76
Faraday’s law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
FEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
fiber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
fiber bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
finite element method . . . . . . . . . . . . . . . . . . . . . . 110
flux form field . . . . . . . . . . . . . . . . . . . . . . . . . . . 59,73
flux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
forcing function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
form field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
four-potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Fundamental Theorem of Integral Calculus 62,83
Gauß’ law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Gauß’ Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 62,78
General Divergence Theorem . . . . . . . . . . . . . 78,115
Generalized Stokes’ Theorem . . . . . . . . . 62,77,114
geometric vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9,73,115
Gradient Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Gramian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37,40,41
graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Green’s first identity . . . . . . . . . . . . . . . . . . . . . . . . 109
Green’s second identity . . . . . . . . . . . . . . . . . . . . . 109
Green’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 62,79
Green’s third identity . . . . . . . . . . . . . . . . . . . . . . . 110
harmonic function . . . . . . . . . . . . . . . . . . . . . . . 97,109
heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . 103,104
Helmholtz’s decomposition . . . . . . . . . . . . . . . . . . 96
Hodge’s decomposition . . . . . . . . . . . . . . . . . . . . . . 97
Hodge’s dual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60,75
Hodge’s star . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60,75
hyperbolic heat equation . . . . . . . . . . . . . . . . . . . . 102
hyperbolic PDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Implicit Function Theorem . . . . . . . . . . . . . . . . . . . 19
implicit representation . . . . . . . . . . . . . . . . . . . . . . . 21
inner cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
integral. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
interior of a set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
interior vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66
inverse image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
irrotational . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88,103
Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Jordan measurable . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Jordan’s inner measure . . . . . . . . . . . . . . . . . . . . . . 38
Jordan’s measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Jordan’s outer measure . . . . . . . . . . . . . . . . . . . . . . 38
k-form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
k-form field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Klein’s bottle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
k-null set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Koch’s snowflake . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Korteweg–de Vries equation . . . . . . . . . . . . . . . . 102
Lagrange’s formulas . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Lambert’s W function . . . . . . . . . . . . . . . . . . . . . . . 22
Laplace’s equation . . . . . . . . . . . . . . . . . . . . . . 96,103
Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10,117
length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
level manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
local coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Lorenz’s gauge condition . . . . . . . . . . . . . . . . . . . . 98
lossy wave equation . . . . . . . . . . . . . . . . . . . . . . . . 102
magnetic dipole moment . . . . . . . . . . . . . . . . . . . . 100
manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17,24,29
mass form field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Maxwell’s equations . . . . . . . . . . . . . . . . . . . . . 75,102
Maxwell’s form field . . . . . . . . . . . . . . . . . . . . . 61,76
Mean Value Theorem for Harmonic Functions 110
Möbius’ band . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55,90
monopole approximation . . . . . . . . . . . . . . . . . . . 101
multilinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
multiplication by scalar . . . . . . . . . . . . . . . . . . . . . . . 3
multipole expansion . . . . . . . . . . . . . . . . . . . . . . . . 101
nabla rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
n-cube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Newton’s form field . . . . . . . . . . . . . . . . . . . . . . . . . 83
Newton’s potential . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Newton’s vector field . . . . . . . . . . . . . . . . . . . . . . . . 93
nonstationary . . . . . . . . . . . . . . . . . . . . . . . 13,103,105
normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
normal bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
normal derivative . . . . . . . . . . . . . . . . . . . . . . . . . . 109
normal space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34,37
normal vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
null set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
octupole approximation . . . . . . . . . . . . . . . . . . . . . 101
Ohm’s law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
open ball . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
open set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
opposite vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
orientable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . 54,67,114
outer cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
parabolic PDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . 39,41,42
parameter domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
parametrization . . . . . . . . . . . . . . . . . . . . . . . 23,41,43
parametrized manifold . . . . . . . . . . . . . . . . . 23,42,43
partial differential equation . . . . . . . . . . . . . . . . . 103
partial integration . . . . . . . . . . . . . . . . . . . . . . . . . . 107
passive variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
PDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
planar angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
plane wave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Poincaré’s Lemma . . . . . . . . . . . . . . . . . . . . . 85,87,92
point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1,5
point of action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Poisson’s equation . . . . . . . . . . . . . . . . . . . . . . . . . . 96
polar space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
pullback form field . . . . . . . . . . . . . . . . . . . . . . . . . 113
quadrupole approximation . . . . . . . . . . . . . . . . . . 101
region with boundary . . . . . . . . . . . . . . . . . . . . . . . . 65
relaxed parametrization . . . . . . . . . . . . . . . . . . . . . . 43
reparametrization . . . . . . . . . . . . . . . . . . . 26,42,45,59
reverse heat equation . . . . . . . . . . . . . . . . . . . . . . . 102
Riemann’s integral . . . . . . . . . . . . . . . . . . . . . . . . . . 39
scalar field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
scalar potential . . . . . . . . . . . . . . . . . . . . . . . . . . 87,104
scalar product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
scalar triple product . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Schur’s identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
simply connected . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
smooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
smooth boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
smooth point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
solenoidal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
solid angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
source density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
spherical coordinates . . . . . . . . . . . . . . . . . . . 112,117
spherical wave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
standard forms of PDEs. . . . . . . . . . . . . . . . . . . . . 102
star-shaped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
stationary. . . . . . . . . . . . . . . . . . . . . . . . . . .13,103,104
steradian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Stokes’ Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 62,79
symplectic form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
symplectic potential . . . . . . . . . . . . . . . . . . . . . . . . . 82
tangent bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
tangent space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30,37
tangent vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
telegraph equation . . . . . . . . . . . . . . . . . . . . . . . . . 103
Thomson’s circulation law . . . . . . . . . . . . . . . . . . . 81
torus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56,91
trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
trajectory manifold . . . . . . . . . . . . . . . . . . . . . . . 29,43
triangle inequality . . . . . . . . . . . . . . . . . . . . . . . . . . 1,4
triple vector product . . . . . . . . . . . . . . . . . . . . . . . . . . 5
unit vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2,6
vector field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8,36
vector potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
vector product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Vectoral Gauß’ Theorem . . . . . . . . . . . . . . . . . . . . . 80
volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38,39,41
vortex density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
vorticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
wave equation . . . . . . . . . . . . . . . . . . . . . . . . . 102,106
wedge product . . . . . . . . . . . . . . . . . . . . . . . . . . . 49,51
work form field . . . . . . . . . . . . . . . . . . . . . . . . . . 59,73
zero vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Zhukovsky’s lift law . . . . . . . . . . . . . . . . . . . . . . . . . 82