Appendix A

Proof of Basic Rules of Differentiation

D I

    f(x) = constant if and only if f′(x) = 0 for all x.

The “only if” part is obvious, since, if f(x) = constant, then

    f′(x) = lim_{δx→0} [f(x + δx) − f(x)]/δx = 0 for all x.

The proof of the “if” part will be omitted, since it involves the mean value theorem, which you will learn about in MATH 1035 Analysis.

D II (sum rule)

    d(u + v)/dx = du/dx + dv/dx.

    LHS = lim_{δx→0} [u(x + δx) + v(x + δx) − u(x) − v(x)]/δx
        = lim_{δx→0} [u(x + δx) − u(x)]/δx + lim_{δx→0} [v(x + δx) − v(x)]/δx = RHS.

D III (product rule)

    d(uv)/dx = u dv/dx + v du/dx.

Note that

    [u(x + δx)v(x + δx) − u(x)v(x)]/δx
        = u(x + δx) · [v(x + δx) − v(x)]/δx + v(x) · [u(x + δx) − u(x)]/δx.

Now just let δx → 0, observing that

    u(x + δx) = u(x) + δx · [u(x + δx) − u(x)]/δx

tends to u(x) as δx → 0.

D IV (quotient rule)

    d(u/v)/dx = [v du/dx − u dv/dx]/v².

Note that

    u(x + δx)/v(x + δx) − u(x)/v(x)
        = [v(x){u(x + δx) − u(x)} − u(x){v(x + δx) − v(x)}] / [v(x)v(x + δx)].

Now divide through by δx and let δx → 0, observing that v(x + δx) → v(x) as δx → 0.

D V (chain rule)

    d[f{g(x)}]/dx = f′{g(x)} g′(x).

The left hand side is the limit as δx → 0 of

    [f{g(x + δx)} − f{g(x)}]/δx
        = ([f{g(x + δx)} − f{g(x)}]/[g(x + δx) − g(x)]) · ([g(x + δx) − g(x)]/δx)
        = ([f{g(x) + δg} − f{g(x)}]/δg) · ([g(x + δx) − g(x)]/δx),

where the x-dependent quantity δg = g(x + δx) − g(x) tends to 0 as δx → 0. The above limit is therefore equal to f′{g(x)} g′(x), as asserted.
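[NB The limit arguments above can be illustrated, though of course not proved, numerically: a central-difference quotient stands in for each derivative. The helper `diff` and the particular test functions below are arbitrary choices for this informal check.]

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Arbitrary smooth test functions (any would do)
u, v = math.sin, math.exp
x0 = 0.7

# D III (product rule): (uv)' = u v' + v u'
lhs_product = diff(lambda t: u(t) * v(t), x0)
rhs_product = u(x0) * diff(v, x0) + v(x0) * diff(u, x0)

# D V (chain rule): d/dx f(g(x)) = f'(g(x)) g'(x)
f, g = math.cos, math.tanh
lhs_chain = diff(lambda t: f(g(t)), x0)
rhs_chain = diff(f, g(x0)) * diff(g, x0)

assert abs(lhs_product - rhs_product) < 1e-6
assert abs(lhs_chain - rhs_chain) < 1e-6
```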
Appendix B

List of Formulae for Derivatives of Elementary Functions

[These formulae hold for all real values of x, except where explicitly indicated to the contrary.]

    d(x^α)/dx = α x^(α−1)   (for all x if α is an integer ≥ 0; for x ≠ 0 if α is a negative integer; for x > 0 otherwise),   (B.1)

    d(e^x)/dx = e^x,   (B.2)

    d(ln x)/dx = 1/x ≡ x^(−1)   (for x > 0),   (B.3)

    d(sin x)/dx = cos x,   (B.4)

    d(cos x)/dx = − sin x,   (B.5)

    d(tan x)/dx = sec²x   (for x − π/2 not an integer multiple of π),   (B.6)

    d(cot x)/dx = −cosec²x   (for x not an integer multiple of π),   (B.7)

    d(sec x)/dx = sec x tan x   (for x − π/2 not an integer multiple of π),   (B.8)

    d(cosec x)/dx = −cosec x cot x   (for x not an integer multiple of π),   (B.9)

    d(sin⁻¹x)/dx = 1/√(1 − x²) = −d(cos⁻¹x)/dx   (for −1 < x < 1),   (B.10)

    d(tan⁻¹x)/dx = 1/(1 + x²),   (B.11)

    d(sinh x)/dx = cosh x,   (B.12)

    d(cosh x)/dx = sinh x,   (B.13)

    d(tanh x)/dx = sech²x,   (B.14)

    d(coth x)/dx = −cosech²x   (for x ≠ 0),   (B.15)

    d(sech x)/dx = −sech x tanh x,   (B.16)

    d(cosech x)/dx = −cosech x coth x   (for x ≠ 0),   (B.17)

    d(sinh⁻¹x)/dx = 1/√(1 + x²),   (B.18)

    d(cosh⁻¹x)/dx = 1/√(x² − 1)   (for x > 1),   (B.19)

    d(tanh⁻¹x)/dx = 1/(1 − x²)   (for −1 < x < 1).   (B.20)
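[NB A few entries of the table can be spot-checked numerically; the central-difference helper `diff`, the sample point and the tolerance are arbitrary choices for this informal check.]

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.4   # a point at which every formula below is defined
checks = [
    (math.tan,   lambda x: 1 / math.cos(x) ** 2),      # (B.6)
    (math.asin,  lambda x: 1 / math.sqrt(1 - x * x)),  # (B.10)
    (math.atan,  lambda x: 1 / (1 + x * x)),           # (B.11)
    (math.tanh,  lambda x: 1 / math.cosh(x) ** 2),     # (B.14)
    (math.asinh, lambda x: 1 / math.sqrt(1 + x * x)),  # (B.18)
]
for func, derivative in checks:
    assert abs(diff(func, x0) - derivative(x0)) < 1e-6
```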
Appendix C

Basic Properties of the Trigonometric Functions

C.1 Fundamental Identities for the Sine and Cosine

We recall the special values:

    sin 0 = 0,   (C.1)
    cos(π/2) = 0,   (C.2)
    cos 0 = 1,   (C.3)
    sin(π/2) = 1.   (C.4)

The trigonometric functions possess a high degree of symmetry and accordingly obey an almost bewildering variety of identities. [NB Recall that an identity is an equation which holds for all permitted values of its variables.] Of fundamental importance are the addition and subtraction formulae:

    sin(x + y) = sin x cos y + cos x sin y,   (C.5)
    sin(x − y) = sin x cos y − cos x sin y,   (C.6)
    cos(x + y) = cos x cos y − sin x sin y,   (C.7)
    cos(x − y) = cos x cos y + sin x sin y.   (C.8)

Take special note here of the signs in equations (C.7) and (C.8). The truth of the addition formulae (C.5) and (C.7) for 0 < x < x + y < π/2 and of the subtraction formulae (C.6) and (C.8) for 0 < y < x < π/2 is immediately apparent geometrically from the two diagrams which follow, but note that (C.5)–(C.8) hold for all real values of the variables x and y.

[Diagram for proof of the addition formulae for 0 < x < x + y < π/2]

[Diagram for proof of the subtraction formulae for 0 < y < x < π/2]

From (C.1)–(C.8) we can deduce many more formulae for the sine and cosine. Taking y = x in (C.8) and using (C.3) gives the fundamental trigonometric identity

    cos²x + sin²x = 1,   (C.9)

while taking y = x in (C.5) and (C.7) gives the double angle formulae

    sin 2x = 2 sin x cos x,   (C.10)
    cos 2x = cos²x − sin²x.   (C.11)

Adding and subtracting (C.9) and (C.11) back to front (and subsequently dividing through by 2) gives

    cos²x = ½(1 + cos 2x),   (C.12)
    sin²x = ½(1 − cos 2x).   (C.13)

Adding (C.5) and (C.6) and adding and subtracting (C.7) and (C.8) yields

    sin x cos y = ½[sin(x + y) + sin(x − y)],   (C.14)
    cos x cos y = ½[cos(x − y) + cos(x + y)],   (C.15)
    sin x sin y = ½[cos(x − y) − cos(x + y)].   (C.16)

[Note the sign in (C.16). By (C.3), it is clear that (C.12) and (C.13) are the special cases of (C.15) and (C.16) respectively with y = x. Formulae (C.12)–(C.16) can prove very useful in integration.]

Putting x = 0 in (C.6) and (C.8), using (C.1) and (C.3) and then replacing y by x (which is legitimate, since x and y are just labels standing for arbitrary real numbers) gives

    sin(−x) = − sin x,   (C.17)
    cos(−x) = cos x,   (C.18)

or sin is an odd function (i.e. reverses sign when its variable does), while cos is an even function (i.e. remains unchanged in value when its variable reverses sign).

Putting x = π/2 in (C.10) and (C.11) and using (C.2) and (C.4) gives

    sin π = 0,   (C.19)
    cos π = −1.   (C.20)

Putting y = π in (C.5) and (C.7) then gives

    sin(x + π) = − sin x,   (C.21)
    cos(x + π) = − cos x,   (C.22)

i.e., whenever we add π to x, we reverse the signs of both sin x and cos x (as is evident from the graphs of these two functions). For any integer n (positive, negative or zero), this allows us to generalize (C.1)–(C.4) to

    sin nπ = cos(n + ½)π = 0,   (C.23)
    cos nπ = sin(n + ½)π = {1 if n is even; −1 if n is odd} = (−1)ⁿ.   (C.24)

Putting x = π/2 in (C.6) and (C.8), using (C.2) and (C.4) and then replacing y by x gives the simple (and equivalent!) relations

    sin(π/2 − x) = cos x,   (C.25)
    cos(π/2 − x) = sin x   (C.26)

between the sine and cosine functions. Replacing x by −x in (C.21) and (C.22) and using (C.17) and (C.18) gives

    sin(π − x) = sin x,   (C.27)
    cos(π − x) = − cos x.   (C.28)

C.2 Some Identities involving Other Trigonometric Functions

Further identities may now be obtained involving the four extra trigonometric functions

    tan x = sin x / cos x,   cot x = cos x / sin x = 1/tan x = tan(π/2 − x) [by (C.25) and (C.26)],
    sec x = 1/cos x,   cosec x = 1/sin x,

of which tan x and sec x are undefined when x is an odd integer multiple of π/2 [since this makes cos x = 0, by (C.23)], and cot x and cosec x are undefined when x is an integer multiple of π [since this makes sin x = 0, by (C.23)]. We note here some of the more important identities involving these functions. From (C.17) and (C.18) we deduce

    tan(−x) = − tan x,   (C.29)
    cot(−x) = − cot x,   (C.30)
    sec(−x) = sec x,   (C.31)
    cosec(−x) = −cosec x,   (C.32)

i.e. tan, cot and cosec are odd functions, while sec is even. Dividing (C.9) by cos²x or sin²x gives the useful formulae

    1 + tan²x = sec²x,   (C.33)
    1 + cot²x = cosec²x.   (C.34)

Dividing (C.21) by (C.22) gives

    tan(x + π) = tan x,   (C.35)

or, as we say, tan is a periodic function with period π. For this reason, its graph, depicted below, consists of a pattern which continually repeats itself after a horizontal distance π. Note the vertical asymptotes occurring wherever x is equal to an odd integer multiple of π/2.

[The graph of tan x in the range −3π < x < 3π]

Writing x + 2π as (x + π) + π and using (C.21) and (C.22), we get

    sin(x + 2π) = sin x,   (C.36)
    cos(x + 2π) = cos x,   (C.37)

i.e. sin and cos are periodic with period 2π. Geometrically, this reflects the fact that 2π radians = 360° is a complete revolution, so that x and x + 2π are essentially the same angle. This periodicity of sin and cos is immediately evident from the form of their graphs.

C.3 Trigonometric Functions of some Special Angles

Putting x = π/4 in (C.25) and (C.10) and using (C.4) gives sin(π/4) = cos(π/4), 1 = 2 sin²(π/4), so

    sin(π/4) = cos(π/4) = 1/√2,   tan(π/4) = 1.   (C.38)

[NB Since π/4 is an acute angle, i.e. 0 < π/4 < π/2, we know that its sine and cosine must both be positive.] Putting x = π/3 in (C.25) and (C.26) gives sin(π/6) = cos(π/3), cos(π/6) = sin(π/3). Putting x = π/6 in (C.10) then gives sin(π/3) = 2 sin(π/6) cos(π/6) = 2 sin(π/6) sin(π/3), whence use of (C.9) shows that

    sin(π/6) = cos(π/3) = 1/2,   cos(π/6) = sin(π/3) = √3/2,   tan(π/6) = 1/√3,   tan(π/3) = √3.   (C.39)

C.4 Evaluation of a Certain Limit

[Diagram: triangles OAB and OAC and the sector OAB of the circle with centre O and radius 1]

In the above diagram, the areas of triangles OAB and OAC are (sin x)/2 and (tan x)/2 respectively (each being equal to half the base times the vertical height). On the other hand, the area of the sector OAB of the circle centre O radius 1 is a fraction x/(2π) of the total area π of that circle, i.e. it is equal to x/2. Comparing these areas and doubling through, we see immediately that

    sin x < x < tan x   for 0 < x < π/2.

Dividing through by the positive quantity sin x gives

    1 < x/sin x < sec x   for 0 < x < π/2,

and “turning this upside down” yields

    cos x < (sin x)/x < 1   for 0 < x < π/2, and hence also for −π/2 < x < 0,

since cos x and sin x are respectively even and odd functions of x. Letting x tend to zero (from either side), since cos x → cos 0 = 1, it follows that

    lim_{x→0} (sin x)/x = 1.
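[NB The squeeze above is easy to observe numerically; the sample points and tolerances below are arbitrary choices for this informal check.]

```python
import math

# cos x < (sin x)/x < 1 squeezes (sin x)/x towards 1 as x -> 0
for x in (0.5, 0.1, 0.01, 0.001):
    ratio = math.sin(x) / x
    assert math.cos(x) < ratio < 1

# At very small x the ratio equals 1 to double precision
assert abs(math.sin(1e-8) / 1e-8 - 1) < 1e-12
```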
Appendix D

Properties of the Exponential and Logarithmic Functions

D.1 The Exponential Function

We recall the definition of exp:

    d[exp(x)]/dx = exp(x)   (for all real x),   (D.1)
    exp(0) = 1,   (D.2)

and its Taylor series:

    exp(x) = 1 + x + x²/2! + x³/3! + · · · = Σ_{n=0}^∞ xⁿ/n!.   (D.3)

Now let c be any real constant. Then, by (D.1) and the chain rule, the function

    y_c(x) = exp(x + c)

satisfies

    dy_c/dx = d[exp(x + c)]/d(x + c) · d(x + c)/dx = exp(x + c) = y_c.

It must therefore satisfy

    y_c(x) = y_c(0)[1 + x + x²/2! + x³/3! + · · ·],

i.e., using (D.3), we must have y_c(x) = y_c(0) exp(x), i.e.

    exp(x + c) = exp(c) exp(x).

Since c is arbitrary, we may take it as a variable and call it y, when we see that exp obeys the identity

    exp(x + y) = exp(x) exp(y)   (for all real x and y).   (D.4)

Note that exp(x + y) is equal to the product and NOT the SUM of exp(x) and exp(y).

Taking y = −x in (D.4) and using (D.2) gives

    exp(x) exp(−x) = 1   (for all real x).   (D.5)

Now it is clear from (D.3) that exp(x) > 0 for x ≥ 0, and it follows from (D.5) that

    exp(x) > 0 for all real x,   (D.6)

which allows (D.5) to be rewritten in the form

    exp(−x) = 1/exp(x)   (for all real x).   (D.7)

Note that exp(−x) ≠ − exp(x).

It is clear from (D.3) that

    exp(x) → +∞ as x → +∞,   (D.8)

and so from (D.7) it follows that

    exp(x) → 0 as x → −∞.   (D.9)

D.2 The Logarithmic Function

The logarithmic function is the unique inverse of the exponential function:

    y = exp(x) if and only if x = ln y.   (D.10)

From the properties of the exponential function, we deduce that

    ln 1 = 0,   (D.11)
    ln x → +∞ as x → +∞,   (D.12)
    ln x → −∞ as x → 0.   (D.13)

Replacing x and y in (D.4) by ln x and ln y (where the new x and y are both positive) and using (D.10) gives exp(ln x + ln y) = exp(ln x) exp(ln y) = xy, and hence

    ln(xy) = ln x + ln y   (for x > 0, y > 0).   (D.14)

Note that ln x + ln y ≠ ln(x + y).

Putting y = 1/x in (D.14) and using (D.11) immediately gives

    ln(1/x) = − ln x   (for x > 0).   (D.15)

D.3 Definition of an Arbitrary Real Power of a Positive Real Number

Applying (D.4) successively, it is easily seen that

    exp(x₁ + x₂ + · · · + xₙ) = exp(x₁) exp(x₂) . . . exp(xₙ),   (D.16)

where n is any positive integer and x₁, x₂, . . . , xₙ any real numbers. Taking x₁ = x₂ = · · · = xₙ = ln x in (D.16), where x > 0, and using (D.10), we get

    exp(n ln x) = [exp(ln x)]ⁿ = xⁿ,

and for any real number α we define the power x^α of a positive real number x by

    x^α = exp(α ln x)   (x > 0, α real),   (D.17)

and note that

    ln(x^α) = α ln x   (x > 0, α real).   (D.18)

Given three real numbers x, α and β, with x > 0, replacing x and y in (D.4) by α ln x and β ln x respectively shows that the definition (D.17) obeys the power law

    x^(α+β) = x^α x^β   (x > 0, α, β real).   (D.19)

It also follows from (D.17) and (D.18) that

    (x^α)^β = x^(αβ)   (x > 0, α, β real).   (D.20)

The definition (D.17) is therefore consistent. Note that, by (D.17), (D.2) and (D.19),

    x⁰ = 1 for any x > 0,   (D.21)
    x^(−α) = 1/x^α for any x > 0 and any real α.   (D.22)

[We remark that taking α = β = 1/2 in (D.19) shows that x^(1/2) is the usual positive square root √x of the positive real number x. Similarly, for any positive integer n, x^(1/n) is its positive n’th root.]

Since both ln and exp are increasing functions, we can see from (D.17) that, if α > 0, then x^α is a strictly increasing function of the positive real variable x, while if α < 0 it is strictly decreasing. In view of (D.21), we conclude that

    if α > 0, then x^α > 1 for x > 1, x^α < 1 for 0 < x < 1,   (D.23)
    if α < 0, then x^α < 1 for x > 1, x^α > 1 for 0 < x < 1.   (D.24)

Moreover, it follows from (D.8), (D.9), (D.12), (D.13) and (D.17) that

    if α > 0, then x^α → +∞ as x → +∞, x^α → 0 as x → 0,   (D.25)
    if α < 0, then x^α → 0 as x → +∞, x^α → +∞ as x → 0.   (D.26)

Note also, by virtue of (D.17), (D.11) and (D.2), that

    1^α = 1 for all real α.   (D.27)

We sketch on the next page the graphs of x^α against (positive) x for various ranges of values of the constant α.

[The graph of x^α for various values of α]

D.4 Asymptotic Properties of the Exponential and Logarithmic Functions

Let α be any real number, n any integer strictly greater than α. By (D.3) and (D.19), we deduce that, for x > 0,

    x^(−α) exp x > x^(−α) (xⁿ/n!) = x^(n−α)/n!,

where it follows by (D.25) (with α replaced by n − α) that x^(n−α) → +∞ as x → +∞. Consequently

    x^(−α) exp x → +∞ as x → +∞, for any real α.   (D.28)

Taking the reciprocal of the above expression and using (D.22) and (D.7), we deduce that x^α exp(−x) → 0 as x → +∞, and hence (by reversing the sign of x) that

    |x|^α exp x → 0 as x → −∞, for any real α.   (D.29)

Thus the exponential is a very “dynamic” function, tending to +∞ as x → +∞ faster than any positive power x^α of x, and to 0 as x → −∞ faster than any negative power |x|^(−α) of |x|. Conversely, its inverse, the logarithm, is a very “lethargic” function, satisfying

    for any α > 0,   x^(−α) ln x → 0 as x → +∞,   x^α ln x → 0 as x → 0,   (D.30)

i.e. |ln x| tends to +∞ as x → +∞ more slowly than any positive power x^α of x, and as x → 0 more slowly than any negative power x^(−α) of x. For the second limit in (D.30), take y = α ln x, and use (D.13), (D.17) and (D.29) (with α = 1), to get

    lim_{x→0} x^α ln x = lim_{y→−∞} (exp y)(y/α) = (1/α) lim_{y→−∞} y exp y = 0.

To get the first limit from the second limit, put x = 1/y = y^(−1) and use (D.20) and (D.15), obtaining

    lim_{x→+∞} x^(−α) ln x = lim_{y→0} (y^(−1))^(−α) ln(y^(−1)) = − lim_{y→0} y^α ln y = 0.

D.5 The Number e, and the Exponential Function as a Power

We define the positive real number e (whose value is approximately 2.718) by

    e = exp 1,   (D.31)

from which it is immediately apparent [by (D.10)] that

    ln e = 1.   (D.32)

Putting x = e in (D.17), using (D.32) and replacing α by x, we see that

    exp x = e^x for all real x.   (D.33)

Thus, the definition (D.17) enables us to write exp x in the conveniently condensed conventional form e^x.

Appendix E

Taylor Series

E.1 Analytic Functions and Power Series

Definition  A function f(x) is analytic if, for every x₀ in its domain of definition, there exists a δ > 0 such that, for |x − x₀| < δ, f(x) can be expressed as the sum of a power series in powers of x − x₀, i.e.

    f(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)² + a₃(x − x₀)³ + · · · ≡ Σ_{n=0}^∞ aₙ(x − x₀)ⁿ,   (E.1)

where the aₙ are constants, and Σ_{n=0}^∞ indicates that the expression to its right should be summed over all values n ≥ 0 of the (dummy) integer variable n.

Remarks  For n = 0, (x − x₀)ⁿ = (x − x₀)⁰ is interpreted to mean the constant number 1, irrespective of the value of x. Note that it isn’t really possible to add infinitely many numbers together. What (E.1) means is that, for large N, the (polynomial) finite sum

    f_N(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)² + a₃(x − x₀)³ + · · · + a_N(x − x₀)^N = Σ_{n=0}^N aₙ(x − x₀)ⁿ   (E.2)

is a good approximation to f(x), and, more precisely, for given x (within a distance δ of x₀), the approximation can be made as close as we please by taking N sufficiently large. In other words, for each x, f_N(x) tends to the limit f(x) as N → ∞. In this situation one says that the power series on the RHS of (E.1) converges to f(x).

Definitions  In general δ in the above has a maximum possible value (depending on x₀) called the radius of convergence R, although it may happen that δ can be as large as we please, i.e. the series (E.1) converges to f(x) for all x, in which case we formally write R = ∞ and call f an entire function.

Remark  It can be shown (cf MATH 1035 Analysis) that

    R = lim_{n→∞} |aₙ/aₙ₊₁|,   provided this limit exists.   (E.3)
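[NB The definition (D.17) and the power laws (D.19)–(D.21) can be spot-checked numerically; the helper `power` and the sample values below are arbitrary choices for this informal check.]

```python
import math

def power(x, alpha):
    """x to the power alpha for x > 0, via the definition (D.17): exp(alpha ln x)."""
    return math.exp(alpha * math.log(x))

x, a, b = 2.5, 0.7, -1.3
assert math.isclose(power(x, a), x ** a)                         # agrees with the built-in power
assert math.isclose(power(x, a + b), power(x, a) * power(x, b))  # power law (D.19)
assert math.isclose(power(power(x, a), b), power(x, a * b))      # (D.20)
assert power(x, 0) == 1.0                                        # (D.21)
```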
E.2 Formula for the Coefficients: Taylor Series

Consider the derivatives of the function (x − x₀)ⁿ, where n is an integer ≥ 0. Using the standard formulae for the derivative of a power and the chain rule, we have

    d[(x − x₀)ⁿ]/dx = n(x − x₀)^(n−1),
    d²[(x − x₀)ⁿ]/dx² = n(n − 1)(x − x₀)^(n−2),
    d³[(x − x₀)ⁿ]/dx³ = n(n − 1)(n − 2)(x − x₀)^(n−3), etc,

and in general

    d^m[(x − x₀)ⁿ]/dx^m = n(n − 1)(n − 2) . . . (n − m + 1)(x − x₀)^(n−m)   (E.4)

for any integer m ≥ 0 (as can be proved properly by induction, cf MATH 1201 Mathematical Structures). Putting m = n in (E.4) gives

    dⁿ[(x − x₀)ⁿ]/dxⁿ = n(n − 1)(n − 2) . . . 1 · (x − x₀)⁰ = n! (= constant),   (E.5)

from which we deduce (via D I) that

    d^m[(x − x₀)ⁿ]/dx^m = 0 for m > n.   (E.6)

For the m’th derivative of the whole series (E.1), term-by-term differentiation (which may be shown to be permissible) and use of (E.4) and (E.6) yield

    d^m f(x)/dx^m ≡ f^(m)(x) = Σ_{n=0}^∞ aₙ d^m[(x − x₀)ⁿ]/dx^m
        = Σ_{n=m}^∞ aₙ n(n − 1) . . . (n − m + 1)(x − x₀)^(n−m)   [the terms with n < m vanish, by (E.6)]
        = aₘ m! + Σ_{n=m+1}^∞ aₙ n(n − 1) . . . (n − m + 1)(x − x₀)^(n−m),   (E.7)

where every term of the last sum vanishes at x = x₀. Putting x = x₀ in (E.7) now gives f^(m)(x₀) = aₘ m!, i.e.

    aₘ = f^(m)(x₀)/m!   for any integer m ≥ 0.   (E.8)

Substituting (E.8) back into (E.1) now gives

    f(x) = Σ_{n=0}^∞ [f^(n)(x₀)/n!](x − x₀)ⁿ = f(x₀) + Σ_{n=1}^∞ [f^(n)(x₀)/n!](x − x₀)ⁿ
         = f(x₀) + (x − x₀)f′(x₀) + (x − x₀)²f″(x₀)/2! + (x − x₀)³f‴(x₀)/3! + · · · .   (E.9)

Definition  (E.9) is called the Taylor series or Taylor expansion for the function f(x) about x₀, converging for |x − x₀| < R (where R depends in general on x₀).

Definition  The truncated Taylor series

    Σ_{n=0}^N [f^(n)(x₀)/n!](x − x₀)ⁿ = f(x₀) + (x − x₀)f′(x₀) + (x − x₀)²f″(x₀)/2 + · · · + (x − x₀)^N f^(N)(x₀)/N!   (E.10)

is sometimes called the N’th Taylor polynomial of f(x) at x₀. It is the polynomial of degree ≤ N that best approximates f in the immediate neighbourhood of x₀. In particular, the first Taylor polynomial

    f(x₀) + (x − x₀)f′(x₀)   (E.11)

is the best linear approximation to f near x₀. Note that (E.10) is just the explicit form of f_N(x) defined by (E.2), and that it therefore tends (for x sufficiently close to x₀) to the limit f(x) as N → ∞.

Definition  The Taylor series for f(x) about x = 0 (assuming f is defined there) is called its Maclaurin series.

E.3 Examples of Maclaurin Series

E.3.1 The General Power of 1+x: Binomial Series

Suppose f(x) = (1 + x)^α, where α is an arbitrary real number, a function which is only defined for x > −1, unless α is an integer. We know that

    f′(x) = α(1 + x)^(α−1),  f″(x) = α(α − 1)(1 + x)^(α−2),  . . . ,  f^(n)(x) = α(α − 1) . . . (α − n + 1)(1 + x)^(α−n).

(E.8) with x₀ = 0 now gives

    aₙ = f^(n)(0)/n! = α(α − 1) . . . (α − n + 1)/n!,   (E.12)

so the required Maclaurin expansion is

    (1 + x)^α = Σ_{n=0}^∞ [α(α − 1) . . . (α − n + 1)/n!] xⁿ = 1 + αx + [α(α − 1)/2!]x² + [α(α − 1)(α − 2)/3!]x³ + · · · ,   (E.13)

the so-called binomial series. In the general case where α is not an integer ≥ 0, we see from (E.12) that

    |aₙ/aₙ₊₁| = |(n + 1)/(α − n)| = (1 + 1/n)/|1 − α/n| → 1 as n → ∞

(the denominator α − n is never zero), so that the radius of convergence is R = 1.
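[NB The binomial series (E.13) can be summed numerically inside its radius of convergence; the helper `binomial_series`, the running-coefficient trick and the sample values are arbitrary choices for this informal check.]

```python
def binomial_series(alpha, x, N):
    """Partial sum of (E.13): sum over n <= N of [a(a-1)...(a-n+1)/n!] x^n."""
    total, coeff = 1.0, 1.0
    for n in range(1, N + 1):
        coeff *= (alpha - n + 1) / n   # updates a(a-1)...(a-n+1)/n! incrementally
        total += coeff * x ** n
    return total

alpha, x = 0.5, 0.3   # |x| < R = 1, so the partial sums converge
assert abs(binomial_series(alpha, x, 40) - (1 + x) ** alpha) < 1e-12
```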
E.3.2 Special cases

(a) α = integer N ≥ 0.

In this case f(x) = (1 + x)^N is a polynomial of degree N. By (E.12),

    aₙ = N(N − 1) . . . (N − n + 1)/n!,   (E.14)

so a_(N+1) = 0 and, more generally, aₙ = 0 for all n > N, as expected for a polynomial of degree N. For 0 ≤ n ≤ N, multiplying top and bottom of (E.14) by (N − n)! gives

    aₙ = N!/[n!(N − n)!] = the binomial coefficient (N over n).   (E.15)

Consequently (E.13) reduces in this case to the binomial theorem

    (1 + x)^N = Σ_{n=0}^N (N over n) xⁿ = 1 + Nx + [N(N − 1)/2!]x² + [N(N − 1)(N − 2)/3!]x³ + · · · + x^N,   (E.16)

a purely algebraic result, valid for all real x (so that R = ∞ in this case).

(b) α = −1.

In this case (E.12) gives

    aₙ = (−1)(−2)(−3) . . . (−n)/n! = (−1)ⁿ,

and (E.13) reduces to

    (1 + x)^(−1) = Σ_{n=0}^∞ (−1)ⁿxⁿ = 1 − x + x² − x³ + · · ·   (for |x| < R = 1).   (E.17)

Reversing the sign of x in (E.17) yields

    (1 − x)^(−1) = Σ_{n=0}^∞ xⁿ = 1 + x + x² + x³ + · · ·   (for |x| < 1),   (E.18)

the geometric series, giving the sum of an infinite geometric progression. Note that the series (E.18) obviously diverges when x = 1, since its RHS is then infinite. On the other hand it converges whenever |x| < 1, and this provides plenty of examples of familiar situations where infinitely many terms add up to a finite answer. For example, taking x = 1/10, the LHS of (E.18) becomes

    (1 − 1/10)^(−1) = (9/10)^(−1) = 10/9 = 1 1/9,

while its RHS becomes

    1 + 1/10 + 1/100 + 1/1000 + · · · = 1 + 0.1 + 0.01 + 0.001 + · · · = 1.111 . . . ,

the expression for 1 1/9 as a recurring decimal.

E.3.3 The Logarithm of 1+x

Take f(x) = ln(1 + x), only defined for x > −1. Then, differentiating, we have

    f′(x) = (1 + x)^(−1),  f″(x) = −(1 + x)^(−2),  f‴(x) = 2(1 + x)^(−3),  f⁗(x) = −(3 × 2)(1 + x)^(−4),  . . .

and in general

    f^(n)(x) = (−1)^(n−1)(n − 1)!(1 + x)^(−n) for n ≥ 1.

Putting x = 0, we see from (E.8) that a₀ = f(0) = ln 1 = 0, while, for n ≥ 1,

    aₙ = f^(n)(0)/n! = (−1)^(n−1)(n − 1)!/n! = (−1)^(n−1)/n,

so the required Maclaurin expansion (with no “n = 0” term) is

    ln(1 + x) = Σ_{n=1}^∞ [(−1)^(n−1)/n] xⁿ = x − x²/2 + x³/3 − x⁴/4 + · · · .   (E.19)

Since

    |aₙ/aₙ₊₁| = (n + 1)/n = 1 + 1/n → 1 as n → ∞,

the radius of convergence of (E.19) is R = 1, as with (E.13). Note that the expansion (E.19) can be obtained from the expansion (E.17) by integrating term by term.

E.3.4 The Exponential Function and cosh and sinh

Since f(x) = e^x satisfies f^(n)(x) = f(x) = e^x for all integers n ≥ 0, it follows from (E.8) and the fact that exp 0 = 1 that aₙ = 1/n!, so the Maclaurin expansion is

    e^x = Σ_{n=0}^∞ xⁿ/n! = 1 + x + x²/2! + x³/3! + · · · ,   (E.20)

and this converges for all x (i.e. R = ∞, i.e. the exponential is an entire function), since

    |aₙ/aₙ₊₁| = n + 1 → ∞ as n → ∞.

Reversing the sign of x in (E.20) yields

    e^(−x) = Σ_{n=0}^∞ (−1)ⁿxⁿ/n! = 1 − x + x²/2! − x³/3! + · · · .   (E.21)

Adding and subtracting (E.20) and (E.21) (and dividing by 2) then gives the Maclaurin expansions

    cosh x = (e^x + e^(−x))/2 = 1 + x²/2! + x⁴/4! + · · · = Σ_{k=0}^∞ x^(2k)/(2k)!,   (E.22)

    sinh x = (e^x − e^(−x))/2 = x + x³/3! + x⁵/5! + · · · = Σ_{k=0}^∞ x^(2k+1)/(2k + 1)!,   (E.23)

also valid for all x. Note that the Maclaurin expansion of the even function cosh x contains only even powers of x, while that of the odd function sinh x contains only odd powers of x. The dummy integer variable k appearing in the right hand members of (E.22) and (E.23) runs through all non-negative integer values. As it does so, 2k runs through all the even integers ≥ 0, while 2k + 1 runs through all the odd positive integers.
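[NB The expansions (E.18)–(E.20) are easy to sum numerically; the helpers `ln1p_series` and `exp_series` and the sample values below are arbitrary choices for this informal check.]

```python
import math

def ln1p_series(x, N):
    """Partial sum of (E.19): x - x^2/2 + x^3/3 - ..., up to the x^N term."""
    return sum((-1) ** (n - 1) * x ** n / n for n in range(1, N + 1))

def exp_series(x, N):
    """Partial sum of (E.20): 1 + x + x^2/2! + ..., up to the x^N term."""
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

assert abs(ln1p_series(0.5, 60) - math.log(1.5)) < 1e-12        # |x| < R = 1
assert abs(exp_series(3.0, 40) - math.exp(3.0)) < 1e-9          # converges for every x
assert abs(sum(0.1 ** n for n in range(30)) - 10 / 9) < 1e-12   # geometric series (E.18) at x = 1/10
```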
E.3.5 The Cosine and Sine

Because d(cos x)/dx = − sin x and d(sin x)/dx = cos x, it is clear that differentiating cos x or sin x twice reverses its sign, and hence that f(x) = cos x satisfies

    f^(2k)(x) = (−1)^k cos x,   f^(2k+1)(x) = (−1)^(k+1) sin x,   for all integers k ≥ 0.   (E.24)

It follows from (E.8) that, for all integers k ≥ 0,

    a_(2k) = f^(2k)(0)/(2k)! = (−1)^k/(2k)!,   a_(2k+1) = f^(2k+1)(0)/(2k + 1)! = 0,

whence the Maclaurin expansion

    cos x = Σ_{k=0}^∞ (−1)^k x^(2k)/(2k)! = 1 − x²/2! + x⁴/4! − · · ·   (E.25)

of the even function cos x, which converges for all x [as can be seen indirectly from (E.3) by viewing (E.25) as a power series in the variable x²]. In an exactly similar manner [or by term-by-term differentiation of (E.25) and direct differentiation of cos x], it may be shown that the odd function sin x has the Maclaurin expansion

    sin x = Σ_{k=0}^∞ (−1)^k x^(2k+1)/(2k + 1)! = x − x³/3! + x⁵/5! − · · · ,   (E.26)

also valid for all x.

Note the striking similarity between the expansions (E.22) and (E.25) and between the expansions (E.23) and (E.26). One can in fact (cf MATH 1201 Mathematical Structures) extend the definitions of cosh x, sinh x, cos x and sin x from real x to complex x, and the expansions (E.22), (E.23), (E.25) and (E.26) are then valid for all complex numbers x. Remembering that the complex number i = √(−1) satisfies i² = −1, comparison of (E.22) and (E.23) with (E.25) and (E.26) reveals that the trigonometric and hyperbolic functions are related by

    cos x = cosh ix,   cosh x = cos ix,   i sin x = sinh ix,   i sinh x = sin ix,

for all complex x.

E.4 Example of a Smooth Function which is not Analytic (included for the sake of curiosity)

Although analytic functions are always smooth, in the sense that they possess continuous derivatives of all orders, the converse is false, as the following example shows. Define f : R → R by

    f(x) = e^(−1/x²) if x ≠ 0,   f(x) = 0 if x = 0.

This is obviously smooth except possibly at x = 0, where it is continuous since exp x → 0 as x → −∞. It is easy to prove by induction (cf MATH 1201 Mathematical Structures) that, for any positive integer n, its n’th derivative is given for x ≠ 0 by an expression of the form

    x^(−n−2) e^(−1/x²) × (polynomial of degree n − 1 in 1/x²).

It is then easy, again by induction, together with the strongly decaying property of the exponential function, to show that all derivatives of f vanish at x = 0. It follows from (E.8) that the Maclaurin expansion of f is identically zero and therefore not equal to f(x). Thus f(x) is not analytic as a function on the whole of R, though it is analytic on both (−∞, 0) and (0, ∞). The point x = 0 is a singularity of f(x). Despite its apparently innocuous behaviour there, f(x) has no series expansion in powers of x valid in an interval containing 0.

E.5 Example of a Taylor Series not about x = 0

Consider the Taylor series for ln x about x = x₀, for any x₀ > 0. This can easily be tackled directly from first principles, but may more quickly be derived from the Maclaurin expansion. Using (D.14), we see that

    ln x = ln[x₀ + (x − x₀)] = ln{x₀[1 + (x − x₀)/x₀]} = ln x₀ + ln[1 + (x − x₀)/x₀]

for any x > 0. Provided |x − x₀| < x₀, it follows from (E.19) that

    ln x = ln x₀ + (x − x₀)/x₀ − (1/2)[(x − x₀)/x₀]² + (1/3)[(x − x₀)/x₀]³ − (1/4)[(x − x₀)/x₀]⁴ + · · ·
         = ln x₀ + Σ_{n=1}^∞ [(−1)^(n−1)/(n x₀ⁿ)] (x − x₀)ⁿ,   (E.27)

the required Taylor series, with radius of convergence R = x₀.
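[NB Both examples above can be probed numerically: the partial sums of (E.27) converge to ln x inside the radius of convergence, while E.4's function is strictly positive away from 0 yet far too small for any power of x to detect there. The helper `ln_taylor` and the sample values are arbitrary choices for this informal check.]

```python
import math

def ln_taylor(x, x0, N):
    """Partial sum of (E.27): ln x0 + sum over n of (-1)^(n-1) (x - x0)^n / (n x0^n)."""
    s = math.log(x0)
    for n in range(1, N + 1):
        s += (-1) ** (n - 1) * (x - x0) ** n / (n * x0 ** n)
    return s

# |x - x0| = 0.5 is well inside the radius of convergence R = x0 = 3
assert abs(ln_taylor(3.5, 3.0, 50) - math.log(3.5)) < 1e-12

# E.4's function: strictly positive for x != 0, yet all Maclaurin coefficients vanish
g = lambda x: math.exp(-1.0 / (x * x)) if x != 0 else 0.0
assert 0.0 < g(0.1) < 1e-40
```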
Appendix F

Taylor Series for a Function of Two Variables

Consider a function f(x, y) analytic near (a, b). For each fixed x (sufficiently close to a) we can (for y sufficiently close to b) expand f(x, y) as a Taylor series in powers of y − b:

    f(x, y) = Σ_{s=0}^∞ [∂^s f/∂y^s](x, b) (y − b)^s/s!.   (F.1)

But the x-dependent coefficient [∂^s f/∂y^s](x, b) can now itself be expanded as a Taylor series in powers of x − a:

    [∂^s f/∂y^s](x, b) = Σ_{r=0}^∞ {∂^r/∂x^r [∂^s f/∂y^s]}(a, b) (x − a)^r/r!
                       = Σ_{r=0}^∞ [∂^(r+s) f/∂x^r ∂y^s](a, b) (x − a)^r/r!.   (F.2)

Substituting (F.2) into (F.1) gives the double power series

    f(x, y) = Σ_{s=0}^∞ [Σ_{r=0}^∞ {∂^(r+s) f/∂x^r ∂y^s}(a, b) (x − a)^r/r!] (y − b)^s/s!
            = Σ_{r,s=0}^∞ [1/(r!s!)] [∂^(r+s) f/∂x^r ∂y^s](a, b) (x − a)^r (y − b)^s,   (F.3)

a sum in which the two summation variables r and s run independently through all integer values ≥ 0. Provided x − a and y − b are sufficiently small, the double series (F.3) will converge, and its infinitely many terms may be summed in any order. It is called the Taylor series or Taylor expansion of the function f(x, y) about the point (a, b) of the (x, y) plane.

It is convenient to arrange the terms of the above Taylor expansion in an infinite rectangular table, in which the rows and columns are labelled by the integers r and s respectively, but starting to count at zero rather than 1 (columns s = 0, 1, 2, . . .):

    (r = 0):  f(a, b);   (y − b) f_y(a, b);   [(y − b)²/2] f_yy(a, b);   · · ·
    (r = 1):  (x − a) f_x(a, b);   (x − a)(y − b) f_xy(a, b);   [(x − a)(y − b)²/2] f_xyy(a, b);   · · ·
    (r = 2):  [(x − a)²/2] f_xx(a, b);   [(x − a)²(y − b)/2] f_xxy(a, b);   [(x − a)²(y − b)²/4] f_xxyy(a, b);   · · ·
    . . .   (F.4)

The way we arrived at (F.3) involved summing first over r and then over s, i.e. first summing the columns of the array and then adding up the separate column totals. We could obviously have done things the other way round, summing the individual rows first and then adding their totals. A more convenient arrangement of the terms for our purposes than either of these is by increasing order, where the order of a term is the corresponding integer n = r + s. In terms of the array (F.4), this corresponds to starting with the top left entry (order zero), then moving on to its two nearest neighbours below and to the right (order 1), and then to the three remaining nearest neighbours of these (order 2), and so on. [Note that the entries of a particular order all lie on a diagonal running from bottom left to top right.] The rearranged Taylor expansion is

    f(x, y) = f(a, b)   [constant term (order zero)]
            + (x − a)f_x(a, b) + (y − b)f_y(a, b)   [linear terms (1st order)]
            + [(x − a)²/2] f_xx(a, b) + (x − a)(y − b)f_xy(a, b) + [(y − b)²/2] f_yy(a, b)   [quadratic terms (2nd order)]
            + · · ·   [higher order terms]   (F.5)

            = Σ_{n=0}^∞ Σ_{r=0}^n [(x − a)^r (y − b)^(n−r)/(r!(n − r)!)] [∂^n f/∂x^r ∂y^(n−r)](a, b)   (F.6)

            = Σ_{n=0}^∞ Σ_{r=0}^n [1/(r!(n − r)!)] [∂^n f/∂x^r ∂y^(n−r)](a, b) (x − a)^r (y − b)^(n−r),   (F.7)

in which the inner sum over r collects the nth order terms.

Extension to Functions of n Variables

By the same method as was used to derive (F.3) (which incidentally makes no pretensions to mathematical rigour), it may be shown that an analytic function f(x₁, . . . , xₙ) of n variables has about a point (x₁*, . . . , xₙ*) of Rⁿ the multiple Taylor series expansion

    f(x₁, . . . , xₙ) = Σ_{r₁,...,rₙ=0}^∞ [1/(r₁! . . . rₙ!)] [∂^(r₁+···+rₙ) f/∂x₁^r₁ . . . ∂xₙ^rₙ](x₁*, . . . , xₙ*) (x₁ − x₁*)^r₁ . . . (xₙ − xₙ*)^rₙ,   (F.8)

in which the n summation variables r₁, . . . , rₙ run independently through all integer values ≥ 0.

As in the two-variable case, the terms may be arranged in increasing order, now defined to be r₁ + r₂ + · · · + rₙ. This leads to

    f(x₁, . . . , xₙ) = f(x₁*, . . . , xₙ*) + Σ_{k=1}^n [∂f/∂xₖ](x₁*, . . . , xₙ*) (xₖ − xₖ*)
        + (1/2) Σ_{j,k=1}^n [∂²f/∂xⱼ∂xₖ](x₁*, . . . , xₙ*) (xⱼ − xⱼ*)(xₖ − xₖ*) + higher order terms   (F.9)

as a generalization of (F.5). Retaining just the top row of (F.9) leads to a linear approximation to f(x₁, . . . , xₙ) valid near (x₁*, . . . , xₙ*):

    f(x₁, . . . , xₙ) ≈ f(x₁*, . . . , xₙ*) + Σ_{k=1}^n [∂f/∂xₖ](x₁*, . . . , xₙ*) δxₖ,   where δxₖ = xₖ − xₖ* for each k.   (F.10)
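[NB The second-order truncation of the two-variable expansion can be checked on a concrete function; the choice f(x, y) = e^x sin y about (a, b) = (0, 0) is arbitrary, and its partial derivatives at the origin (f = 0, f_x = 0, f_y = 1, f_xx = 0, f_xy = 1, f_yy = 0) were computed by hand for this informal check.]

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)

def taylor2(x, y):
    """Terms of (F.5) through second order, about (a, b) = (0, 0)."""
    dx, dy = x - 0.0, y - 0.0
    return (0.0 + 0.0 * dx + 1.0 * dy
            + 0.5 * 0.0 * dx**2 + 1.0 * dx * dy + 0.5 * 0.0 * dy**2)

# The error of the quadratic truncation is third order in the displacement
assert abs(f(0.05, 0.05) - taylor2(0.05, 0.05)) < 1e-4
```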
Appendix G
Behaviour of a Function of Two
Variables near a Stationary Point
It follows from the definition of a stationary point that the value of a function f at a point
(x∗ +δx, y ∗ +δy) near a stationary point (x∗ , y ∗ ) is approximately equal to z ∗ + 12 Q(δx, δy),
where z ∗ = f (x∗ , y ∗ ) and
Q(δx, δy) = A δx2 + 2B δxδy + C δy 2
where
A = fxx (x∗ , y ∗ ) , B = fxy (x∗ , y ∗ ) and C = fyy (x∗ , y ∗ ) .
In particular, for small vector displacement (δx, δy):
f (x∗ + δx, y ∗ + δy) > z ∗ iff Q(δx, δy) > 0,
f (x∗ + δx, y ∗ + δy) = z ∗ iff Q(δx, δy) = 0,
f (x∗ + δx, y ∗ + δy) < z ∗ iff Q(δx, δy) < 0.
We wish to investigate which of the above three conditions holds when the displacement
(δx, δy) points in a certain direction. In particular, notice that if the stationary point is
a minimum, then Q will always be positive. If it is a maximum, then Q will always be
negative. If it is a saddle point, then Q will be positive for some directions, and negative
for others.
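This directional behaviour of Q can be sketched numerically. The following is an illustrative example, not from the notes: the values of A, B, C are those of the hypothetical function f(x, y) = x² − y² at the origin, and Q is sampled over all directions θ.

```python
import numpy as np

# Hypothetical second derivatives at a stationary point (not from the notes):
# f(x, y) = x^2 - y^2 at the origin has A = fxx = 2, B = fxy = 0, C = fyy = -2.
A, B, C = 2.0, 0.0, -2.0

# Q per unit squared displacement, as a function of the direction angle theta.
theta = np.linspace(0.0, np.pi, 361)   # Q has period pi in theta, so [0, pi] suffices
Q = A * np.cos(theta)**2 + 2 * B * np.cos(theta) * np.sin(theta) + C * np.sin(theta)**2

# Classify the stationary point from the sign of Q over all directions.
if Q.min() > 0:
    kind = "minimum"
elif Q.max() < 0:
    kind = "maximum"
elif Q.min() < 0 < Q.max():
    kind = "saddle point"
else:
    kind = "marginal case"
print(kind)   # saddle point: Q changes sign as theta varies
```

For this choice of A, B, C the sampled Q is positive along the x axis and negative along the y axis, exactly the sign change that characterizes a saddle.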
Suppose the vector (δx, δy) makes an anticlockwise angle θ with the x axis. Then

δx = δs cos θ,   δy = δs sin θ,   where δs = √(δx² + δy²).
It follows from (C.10)–(C.13) of Appendix C that

Q(δs cos θ, δs sin θ) = δs² (A cos²θ + 2B cos θ sin θ + C sin²θ)
                      = δs² [½A(1 + cos 2θ) + B sin 2θ + ½C(1 − cos 2θ)]
                      = δs² [½(A + C) + ½(A − C) cos 2θ + B sin 2θ]
                      = δs² [½(A + C) + R cos φ cos 2θ + R sin φ sin 2θ]
                      = δs² [½(A + C) + R cos(2θ − φ)]
where we have defined a positive value R such that

R² = ¼(A − C)² + B² = ¼(A + C)² + (B² − AC)

and an angle φ such that ½(A − C) = R cos φ and B = R sin φ.
We need to know what the function

Q/δs² = ½(A + C) + R cos(2θ − φ)

looks like. As a function of θ, this oscillates with period π and amplitude R about an average value of ½(A + C). Hence, if
R>
Appendix H
Completing the Square in a
Quadratic
1
|A + C|
2
then the oscillations are large enough to change the sign of Q as θ is varied, in which
case we have a saddle point. Hence, the condition for a saddle point becomes:
1
R >
(A + C)2
4
1
1
(A + C)2 + (B 2 − AC) >
(A + C)2
4
4
(AC − B 2 ) < 0.
2
This simply means rewriting a quadratic function
f (x) = ax2 + bx + c
2
<0
fxx fyy − fxy
Condition for a saddle point.
If it is not a saddle point, then it is a maximum or a minimum, depending on the sign of
1
(A + C). Note that, for a maximum or a minimum,
2
AC > B 2
which means AĊ > 0.
so that both A and C have the same sign (either A and C are both positive or they are
both negative). For a minimum, we require Q > 0 and so A > 0. Hence
2
> 0, fxx > 0
fxx fyy − fxy
Condition for a minimum.
Conversely, for a maximum we require Q < 0 and so A < 0, giving
2
> 0, fxx < 0
fxx fyy − fxy
Condition for a maximum.
2
= 0 indicates that higher-order derivatives are
The marginal case where fxx fyy − fxy
required to determine the nature of the stationary point.
27
(H.1)
in the alternative form
f (x) = a(x + p)2 + q,
(H.2)
where it is easily verified that the constants p and q must be given by
b
,
2a
µ 2
¶
2
b
b − 4ac
q =c−
=−
.
4a
4a
p=
i.e. for a saddle point
(where a 6= 0)
(H.3)
(H.4)
When obtaining (H.2) from (H.1) in a specific case it is best not to use formula (H.4).
Just use (H.3) to get p, then work out what q needs to be in order to get the correct
constant term c in the quadratic.
From (H.2) we may easily deduce the well-known formula for the roots of a quadratic,
since f (x) vanishes for a real value of x if and only if
x + p = ±√(−q/a),   i.e.   x = −p ± √(−q/a) = −b/(2a) ± √((b² − 4ac)/(4a²)) = (−b ± √(b² − 4ac))/(2a),   (H.5)

which is only possible if the discriminant b² − 4ac of the quadratic is ≥ 0. It is also clear from
(H.2) that f (x) takes the value q at x = −p, and that this is the (absolute) minimum
value of f (x) if a > 0, the maximum if a < 0. The graph of f (x) is a parabola, which
can be positioned in essentially six different ways, as illustrated below (where we have
assumed for the sake of argument that a and b have opposite sign, so that −p > 0).
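The recipe of Appendix H can be sketched in a few lines. The quadratic below is a hypothetical example, not from the notes; it follows the advice above of computing p from (H.3) and then choosing q to match the constant term, rather than quoting (H.4).

```python
import math

# Hypothetical quadratic (not from the notes): f(x) = 2x^2 - 12x + 10.
a, b, c = 2.0, -12.0, 10.0

# (H.3): p = b / (2a); then choose q so the constant term a*p^2 + q equals c.
p = b / (2 * a)          # -3.0
q = c - a * p**2         # 10 - 2*9 = -8.0

# Check: a(x + p)^2 + q reproduces f(x) at a few sample points.
for x in (-1.0, 0.0, 2.5):
    assert abs(a * (x + p)**2 + q - (a * x**2 + b * x + c)) < 1e-12

# (H.5): real roots exist since the discriminant b^2 - 4ac >= 0 here.
disc = b**2 - 4 * a * c  # 144 - 80 = 64
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])
print(p, q, roots)   # -3.0 -8.0 [1.0, 5.0]
```

In completed-square form this quadratic is 2(x − 3)² − 8, so its minimum value q = −8 is attained at x = −p = 3, consistent with the discussion above.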
Appendix I
List of Indefinite Integrals of
Elementary Functions
[Unless otherwise stated, these formulae hold for all real values of x. In all cases the
symbol c denotes an arbitrary constant. The quantities α and a are also real constants,
a being positive. The truth of each formula can easily be checked by differentiation, with
the aid of Appendix B.]
[NB A function is said to be positive (negative) definite if its value is always strictly
positive (negative), positive (negative) semi-definite if its value is always ≥ 0 (≤ 0),
and indefinite if it takes both positive and negative values.]
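The differentiation check mentioned in the preamble can be sketched with sympy. This is an illustrative verification of two sample entries from the list, not part of the notes; the symbolic simplification confirms that differentiating each right-hand side recovers the integrand (on a range where the logarithms are defined).

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Check two sample entries by differentiation:
# d/dx [-ln(cos x)] should recover tan x, and
# d/dx [ ln(cosh x)] should recover tanh x.
check_tan = sp.simplify(sp.diff(-sp.log(sp.cos(x)), x) - sp.tan(x))
check_tanh = sp.simplify(sp.diff(sp.log(sp.cosh(x)), x) - sp.tanh(x))
print(check_tan, check_tanh)   # both 0
```

The same pattern (differentiate, subtract the claimed integrand, simplify to zero) works for every formula in the list below.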
∫ xᵅ dx = xᵅ⁺¹/(α + 1) + c   ⎧ for all x if α is an integer ≥ 0,
                             ⎨ for x ≠ 0 if α is an integer < −1,
                             ⎩ for x > 0 if α is not an integer,   (I.1)

∫ x⁻¹ dx ≡ ∫ dx/x = ln |x| + c   (for x ≠ 0),   (I.2)

∫ sin x dx = −cos x + c,        ∫ cos x dx = sin x + c,
∫ tan x dx = −ln |cos x| + c   (for x − π/2 not an integer multiple of π),
∫ cot x dx = ln |sin x| + c    (for x not an integer multiple of π),   (I.3)

∫ eˣ dx = eˣ + c,
∫ sinh x dx = cosh x + c,       ∫ cosh x dx = sinh x + c,
∫ tanh x dx = ln(cosh x) + c,   ∫ coth x dx = ln |sinh x| + c   (for x ≠ 0),   (I.4)

∫ dx/√(a² − x²) = sin⁻¹(x/a) + c   (for |x| < a),   (I.5)

∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + c,   (I.6)

∫ dx/√(a² + x²) = sinh⁻¹(x/a) + c,   (I.7)

∫ dx/√(x² − a²) = cosh⁻¹(x/a) + c   (for x > a),   (I.8)

∫ dx/(a² − x²) = { (1/a) tanh⁻¹(x/a) + c   for |x| < a,
                   (1/a) coth⁻¹(x/a) + c   for |x| > a,   (I.9)
                = (1/(2a)) ln |(x + a)/(x − a)| + c   (for x ≠ ±a).   (I.10)

Appendix J

List of Reduction Formulae for Indefinite Integrals
As with all indefinite integrals, these are defined only to within addition of an arbitrary constant. The first group are derived by integration by parts, and the part to be integrated (v′) is given for each example.
∫ xⁿ sin x dx = −xⁿ cos x + n ∫ xⁿ⁻¹ cos x dx          (v′ = sin x)   (J.1)

∫ xⁿ cos x dx = xⁿ sin x − n ∫ xⁿ⁻¹ sin x dx           (v′ = cos x)   (J.2)

∫ xⁿ eᵃˣ dx = (1/a) xⁿ eᵃˣ − (n/a) ∫ xⁿ⁻¹ eᵃˣ dx       (v′ = eᵃˣ)   (J.3)

∫ (ln x)ⁿ dx = x (ln x)ⁿ − n ∫ (ln x)ⁿ⁻¹ dx            (v′ = 1)   (J.4)
The following equation is not created by integration by parts:

∫ tanⁿ x dx = tanⁿ⁻¹ x/(n − 1) − ∫ tanⁿ⁻² x dx   (derived in lectures).
The final group have the additional complication that the result of a straightforward
integration by parts is of the form
I(n) = f (x) + αI(n) + βI(n − 2)
and this must be rearranged to express I(n) purely in terms of functions and of integrals with lower values of n.
∫ sinⁿ x dx = −(sinⁿ⁻¹ x cos x)/n + ((n − 1)/n) ∫ sinⁿ⁻² x dx            (v′ = sin x)   (J.5)

∫ cosⁿ x dx = (cosⁿ⁻¹ x sin x)/n + ((n − 1)/n) ∫ cosⁿ⁻² x dx             (v′ = cos x)   (J.6)

∫ secⁿ x dx = (secⁿ⁻² x tan x)/(n − 1) + ((n − 2)/(n − 1)) ∫ secⁿ⁻² x dx   (v′ = sec² x)   (J.7)

∫ sinⁿ x cosᵐ x dx = (sinⁿ⁺¹ x cosᵐ⁻¹ x)/(n + m) + ((m − 1)/(n + m)) ∫ sinⁿ x cosᵐ⁻² x dx   (v′ = sinⁿ x cos x)   (J.8)

∫ sinⁿ x cosᵐ x dx = −(sinⁿ⁻¹ x cosᵐ⁺¹ x)/(n + m) + ((n − 1)/(n + m)) ∫ sinⁿ⁻² x cosᵐ x dx   (v′ = sin x cosᵐ x)   (J.9)
The last two of these also require repeated use of the identity sin2 x + cos2 x = 1 to
achieve the desired form. They can be used together to first reduce the power of cos x
and then reduce the power of sin x to reach a worst case of sin x cos x, easily integrated
using sin 2x = 2 sin x cos x or by substitution.
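A reduction formula like (J.5) translates directly into a recursion. The following sketch, not from the notes, applies (J.5) as a definite integral over [0, π/2], where the boundary term −sinⁿ⁻¹x cos x/n vanishes at both limits, leaving I(n) = ((n − 1)/n) I(n − 2); the result is compared against a crude numerical estimate.

```python
import math

# Sketch (not from the notes): on [0, pi/2] the boundary term of (J.5)
# vanishes, so I(n) = (n - 1)/n * I(n - 2), with base cases I(0) and I(1).
def sin_power_integral(n):
    """Definite integral of sin^n x over [0, pi/2] via the reduction formula."""
    if n == 0:
        return math.pi / 2   # integral of 1
    if n == 1:
        return 1.0           # integral of sin x
    return (n - 1) / n * sin_power_integral(n - 2)

# Compare with a midpoint-rule estimate for n = 5.
N = 100_000
h = (math.pi / 2) / N
numeric = sum(math.sin((k + 0.5) * h) ** 5 for k in range(N)) * h
print(sin_power_integral(5), numeric)   # both close to 8/15
```

Each recursive call lowers n by 2, mirroring the repeated application of the reduction formula described above.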
Appendix K
Partial Fractions: the General Case
Consider a real rational function R, i.e. a function of the real variable x of the form
R(x) =
P (x)
,
Q(x)
where P and Q are polynomials with real coefficients. Let m and n be the degrees of
P and Q respectively, and assume without loss of generality that the coefficient of xn
in Q(x) is equal to 1. Then Q(x) may be expressed as a product of real factors, each
of which is EITHER of the form (x − α)ʳ for some real constant α and some positive integer r (corresponding to an r-fold real root α of Q) OR of the form [(x − β)² + γ²]ˢ for some real constants β and γ and some positive integer s (corresponding to a pair of complex conjugate s-fold roots β ± iγ of Q, where i = √−1, cf MATH 1201 Mathematical
Structures). Then the resolution of R(x) into PARTIAL FRACTIONS is a sum made
up of various contributions; for each factor in Q(x) of the first type one of the form
A₁/(x − α) + A₂/(x − α)² + · · · + Aᵣ/(x − α)ʳ,
where A1 , A2 , . . . , Ar are real constants; for each factor of the second type one of the form
(B₁x + C₁)/[(x − β)² + γ²] + (B₂x + C₂)/[(x − β)² + γ²]² + · · · + (Bₛx + Cₛ)/[(x − β)² + γ²]ˢ,
where B1 , B2 , . . . , Bs and C1 , C2 , . . . , Cs are real constants; and finally,
if the degree m of P is at least as great as the degree n of Q
(i.e. when R is an improper rational function), an additional contribution of the form
D0 + D1 x + D2 x2 + · · · + Dm−n xm−n ,
where D0 , D1 , . . . , Dm−n are real constants and Dm−n 6= 0, i.e. a contribution in the form
of a polynomial D(x) of degree m − n with real coefficients. Having written down the
complete resolution into partial fractions, one now determines the unknown coefficients
by equating it to R(x), multiplying through by Q(x) and then comparing coefficients of
powers of x in the resulting polynomial identity. This last process can often be short-circuited by setting x equal to one of the real roots α of Q.
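The whole procedure can be carried out symbolically with sympy's `apart` function. The rational function below is a hypothetical example, not from the notes, chosen to have a double real root factor and one complex-conjugate quadratic factor.

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical example (not from the notes): Q has a double real root at x = 1
# (the factor (x - 1)^2) and a complex-conjugate pair from x^2 + 1.
R = (3 * x**2 + 2) / ((x - 1)**2 * (x**2 + 1))

# Resolution into partial fractions of the two types described above.
decomposition = sp.apart(R, x)
print(decomposition)

# Recombining the partial fractions must give back R exactly.
check = sp.simplify(sp.together(decomposition) - R)
print(check)   # 0
```

The output contains one group of terms with denominators (x − 1) and (x − 1)², and one term with denominator x² + 1, matching the two contribution types listed above.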