Notes on Discrete Probability
Prakash Balachandran
February 21, 2008
1 Probability
• P(E^c) = 1 − P(E).
• P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
• For any events A, B, C,
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).
• Definition: The probability of A given B is P(A|B) = P(A ∩ B)/P(B).
• Definition: Events {A_i}_{i=1}^n are called independent if P(⋂_{i=1}^n A_i) = ∏_{i=1}^n P(A_i). (For mutual independence, this product rule must hold for every subcollection of the A_i.)
• Let {A_i}_{i=1}^n be a partition of the sample space S. Then, for any event E,
P(E) = Σ_{j=1}^n P(E ∩ A_j) ⇒ P(E) = Σ_{j=1}^n P(E|A_j)P(A_j).
• Let E be an event, and {A_i}_{i=1}^n be a partition of the sample space S. Then,
P(A_i|E) = P(A_i ∩ E)/P(E) = P(E|A_i)P(A_i) / Σ_{j=1}^n P(E|A_j)P(A_j).
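As a quick numerical illustration of the last two bullets, the following Python sketch (all probabilities here are made-up illustrative numbers) computes P(E) by the total probability rule and each posterior P(A_i|E) by Bayes' theorem:

# Hypothetical two-event partition: A1 = "has condition", A2 = "does not".
priors = {"A1": 0.01, "A2": 0.99}        # P(A_i); must sum to 1 (partition of S)
likelihoods = {"A1": 0.95, "A2": 0.10}   # P(E|A_i), E = "test is positive"

# Total probability: P(E) = sum_j P(E|A_j) P(A_j)
p_E = sum(likelihoods[a] * priors[a] for a in priors)

# Bayes' theorem: P(A_i|E) = P(E|A_i) P(A_i) / P(E)
posteriors = {a: likelihoods[a] * priors[a] / p_E for a in priors}
print(p_E, posteriors)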
2 Discrete Random Variables
• Definition: A discrete random variable X is a measurable function X : (Ω, F, P) → (R, B, µ) with countable range.
• Definition: Let X be a discrete random variable. A probability function for X is a function p(x) which assigns a probability to each value x of X, such that:
1. p(x) ≥ 0 for all x.
2. Σ_{x∈S} p(x) = 1.
• Definition: Let X be a discrete random variable. The cumulative distribution function F(x) for X is defined by
F(x) = P[X ≤ x].
• Definition: Let X be a discrete random variable. The expected value of X is defined by
µ = E(X) = Σ_{x∈S} x p(x).
• E(aX + b) = aE(X) + b.
• Var(aX + b) = a²Var(X).
• Definition: The mode of a probability function is the value of x which has the highest probability p(x).
• Definition: The variance of a random variable X is defined to be
σ² = Var(X) = E[(X − µ)²] = E[X²] − E[X]².
• Definition: For any possible value x of a random variable, the z-score is
z = (x − µ)/σ.
• Definition: Let X be a discrete random variable with probability function p(x). Define f(x) = p(x)·n. Then, the population mean and standard deviation are
µ = (1/n) Σ_{x∈S} f·x,    σ = √((1/n) Σ_{x∈S} f·(x − µ)²).
• Definition: Let X be a discrete random variable with probability function p(x). Define f(x) = p(x)·n. Then, the sample mean and standard deviation are
x̄ = (1/n) Σ_{x∈S} f·x,    s = √((1/(n−1)) Σ_{x∈S} f·(x − x̄)²).
Note: these numbers are estimates of µ and σ.
• Definition: The moment generating function (mgf) of X is M_X(t) = E[e^{tX}].
• M_{aX+b}(t) = e^{bt} M_X(at).
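A minimal Python sketch (the pmf below is a made-up example) that computes µ, σ², the mode, and a z-score directly from the definitions in this section:

# A small example probability function p(x) on S = {0, 1, 2, 3}.
p = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}
assert all(v >= 0 for v in p.values()) and abs(sum(p.values()) - 1) < 1e-12

mu = sum(x * px for x, px in p.items())              # E(X) = sum x p(x)
var = sum(x**2 * px for x, px in p.items()) - mu**2  # Var(X) = E[X^2] - E[X]^2
mode = max(p, key=p.get)                             # x with the highest p(x)
z = (3 - mu) / var**0.5                              # z-score of x = 3
print(mu, var, mode, z)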
3 Commonly Used Discrete Random Variables
3.1 The Binomial Distribution
• Definition: A discrete random variable X is said to be a binomial random variable if
P(X = k) = C(n, k) p^k (1 − p)^{n−k},  k = 0, 1, 2, . . . , n,
where C(n, k) = n!/(k!(n − k)!). p(k) is the probability that there will be exactly k successes in the n trials.
• Definition: If n = 1, the binomial is referred to as a Bernoulli distribution.
• E(X) = np.
• Var(X) = np(1 − p).
• M_X(t) = (1 − p + pe^t)^n.
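A short sketch (assuming SciPy is available; n = 10, p = 0.3, t = 0.7 are arbitrary test values) that checks the binomial mean, variance, and mgf formulas numerically:

import numpy as np
from scipy.stats import binom

n, p, t = 10, 0.3, 0.7
X = binom(n, p)
k = np.arange(n + 1)

print(np.isclose(X.mean(), n * p))            # E(X) = np
print(np.isclose(X.var(), n * p * (1 - p)))   # Var(X) = np(1 - p)

# mgf: E[e^{tX}] by direct summation vs the closed form (1 - p + p e^t)^n
mgf = np.sum(np.exp(t * k) * X.pmf(k))
print(np.isclose(mgf, (1 - p + p * np.exp(t))**n))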
3.2 The Hypergeometric Distribution
• Definition: A discrete random variable X is said to be a hypergeometric random variable if
P(X = k) = C(r, k) C(N − r, n − k) / C(N, n),  k = 0, . . . , n,  r ≥ n.
• µ = nr/N,  Var(X) = (nr/N)(1 − r/N)(N − n)/(N − 1).

3.3 The Poisson Distribution
• Definition: A Poisson random variable is a discrete random variable X with probability function
P(X = n) = e^{−λ} λ^n / n!,  n = 0, 1, . . .
• µ = λ, Var(X) = λ.
• M_X(t) = e^{λ(e^t − 1)}.

3.4 The Geometric Distributions
• Definition: A geometric random variable is a discrete random variable X with probability function
P(X = k) = q^k p,  k = 0, 1, . . . , where q = 1 − p. (X counts the number of failures before the first success.)
• µ = q/p,  Var(X) = q/p².
• M_X(t) = p/(1 − qe^t).
• If Y represents the trial number of the first success (i.e. Y = X + 1), where X is as above,
p(y) = q^{y−1} p,  y = 1, 2, 3, . . .
• E(Y) = 1/p.
• Var(Y) = q/p².

3.5 The Negative Binomial Distribution
• Definition: A negative binomial random variable is a discrete random variable X with probability function
P(X = k) = C(r + k − 1, r − 1) q^k p^r,  k = 0, 1, . . . .
• µ = rq/p,  Var(X) = rq/p².
• M_X(t) = (p/(1 − qe^t))^r.

3.6 The Discrete Uniform Distribution
• Definition: A discrete uniform random variable on 1, 2, . . . , n is a discrete random variable X with
probability function
P(X = k) = 1/n,  k = 1, . . . , n.
• µ = (n + 1)/2,  Var(X) = (n² − 1)/12.
• M_X(t) = e^t(1 − e^{nt}) / (n(1 − e^t)).
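Several of the moment formulas in this section can be cross-checked against SciPy's discrete distributions (a sketch with arbitrary parameter values; note that SciPy's geom uses the trial-number convention, matching Y in 3.4, while its nbinom counts failures before the r-th success, matching X in 3.5):

import numpy as np
from scipy.stats import poisson, geom, nbinom, randint

lam, p, r, n = 2.5, 0.3, 4, 10
q = 1 - p

print(np.isclose(poisson(lam).var(), lam))           # Poisson: Var = lambda
print(np.isclose(geom(p).mean(), 1 / p))             # Y = trial of first success
print(np.isclose(nbinom(r, p).var(), r * q / p**2))  # failures before r-th success
U = randint(1, n + 1)                                # uniform on {1, ..., n}
print(np.isclose(U.mean(), (n + 1) / 2), np.isclose(U.var(), (n**2 - 1) / 12))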
4 Continuous Random Variables
4.1 The Density Function and Probabilities
• Definition: A random variable X : (Ω, F, P) → (R, B, µ) is called a continuous random variable if there exists a Borel measurable function f : R → R such that:
1. f(x) ≥ 0, ∀x ∈ R.
2. P[X ∈ A] = ∫_A f(x)dµ(x) for every Borel set A.
When such a function f(x) exists, it is called the probability density function (pdf) of X.
• Let Y = g(X), where g(x) is a strictly increasing/decreasing function on the sample space. Then, f_Y(y) = f_X(g^{−1}(y))|(g^{−1})′(y)|.
4.2 The Cumulative Distribution Function
• Definition: The cumulative distribution function (cdf) F(x) for a continuous random variable X with pdf f(x) is defined by
F(x) = P[X ∈ (−∞, x]] = ∫_{−∞}^x f(u)dµ(u).
• When F′(x) exists, F′(x) = f(x).
• P[a < X ≤ b] = F(b) − F(a).
• P[X = x] = F(x) − lim_{y→x−} F(y).
4.3 The Mode, Median, and Percentiles
• Definition: The mode of a continuous random variable X with pdf f(x) is the value of x for which f(x) is a maximum.
• Definition: The median m of a continuous random variable X with pdf f(x) and cdf F(x) is the solution to the equation
F(m) = P[X ≤ m] = 0.5.
• Definition: Let X be a continuous random variable with cdf F(x). The 100p-th percentile of X is the number x_p such that
F(x_p) = P[X ≤ x_p] = ∫_{−∞}^{x_p} f(u)dµ(u) = p.
4.4 The Mean and Variance
• Definition: Let X be a continuous random variable with pdf f(x). The expected value or mean of X is
µ = E(X) = ∫_{−∞}^∞ x f(x)dµ(x).
• Let X be a continuous random variable with density f(x). Then, for any Borel measurable function g : R → R, E[g(X)] = ∫_{−∞}^∞ g(x)f(x)dµ(x).
• E(aX + b) = aE(X) + b.
• Definition: The k-th moment of X is E[X^k].
• Definition: The k-th central moment of X is E[(X − µ)^k].
• Definition: The skewness of X is E[(X − µ)³]/σ³.
• Definition: The coefficient of variation of X is √Var(X)/E(X).
• Definition: Let X be a continuous random variable with density function f(x) and mean µ. Then, the variance of X is defined by
Var(X) = E[(X − µ)²] = ∫_{−∞}^∞ (x − µ)² f(x)dµ(x) = E(X²) − µ².
• Var(aX + b) = a²Var(X).
4.5 Misc.
• Suppose that X is a continuous random variable with pdf f_X(x) and cdf F_X(x), and suppose that u(x) is an injective strictly increasing/decreasing function. Then, the random variable Y = u(X) has density
f_Y(y) = f_X(u^{−1}(y))|(u^{−1})′(y)|.
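A sketch of this change-of-variables formula in action (assuming SciPy; here u(x) = e^x applied to a normal random variable, which produces the lognormal density of Section 5.7):

import numpy as np
from scipy.stats import norm, lognorm

mu, sigma = 0.5, 1.2                  # arbitrary normal parameters
y = np.linspace(0.1, 5.0, 50)

# f_Y(y) = f_X(u^{-1}(y)) |(u^{-1})'(y)|, with u^{-1}(y) = ln y, (u^{-1})'(y) = 1/y
f_Y = norm(mu, sigma).pdf(np.log(y)) / y

# Cross-check against SciPy's lognormal (shape s = sigma, scale = e^mu)
print(np.allclose(f_Y, lognorm(s=sigma, scale=np.exp(mu)).pdf(y)))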
5 Commonly Used Continuous Distributions
5.1 Survival Functions and Failure Rates
• Definition: The survival function of a continuous random variable X is defined by
S(t) = P(X > t) = 1 − F(t).
• Definition: Let X be a random variable with density function f(x) and cdf F(x). The failure rate function λ(x) is defined by
λ(x) = f(x)/(1 − F(x)) = f(x)/S(x).
• Let X be a random variable taking values in [0, ∞). Then, S(x) = e^{−∫_0^x λ(u)du}.
• Let X be a random variable taking values in [0, ∞). Then, E(X) = ∫_0^∞ S(x)dx = ∫_0^∞ (1 − F(x))dx.
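Both identities are easy to verify numerically for a concrete failure rate; a sketch for the exponential case, where λ(u) is the constant λ (assuming SciPy for the quadrature; λ = 0.8 is arbitrary):

import numpy as np
from scipy.integrate import quad

lam = 0.8
failure_rate = lambda u: lam                 # exponential <=> constant failure rate

def S(x):                                    # S(x) = exp(-int_0^x lambda(u) du)
    integral, _ = quad(failure_rate, 0, x)
    return np.exp(-integral)

print(np.isclose(S(2.0), np.exp(-lam * 2.0)))   # matches e^{-lambda x}
mean, _ = quad(S, 0, np.inf)                    # E(X) = int_0^inf S(x) dx
print(np.isclose(mean, 1 / lam))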
5.2 The Uniform Distribution
• Definition: A continuous random variable X is a uniform random variable on [a, b] if its pdf is
f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise.
Its cdf is
F(x) = 0 for x ≤ a;  (x − a)/(b − a) for x ∈ (a, b);  1 for x ≥ b.
• E(X^n) = (b^{n+1} − a^{n+1}) / ((n + 1)(b − a)).
• Var(X) = (b − a)²/12.
• M_X(t) = (e^{bt} − e^{at}) / ((b − a)t).
• The median of X is m = (a + b)/2.
• Usage: Can be used as a simple lifetime model.
5.3 The Exponential Distribution
• ∫_0^∞ x^n e^{−ax} dx = n!/a^{n+1} = Γ(n + 1)/a^{n+1} for a > 0 and n a non-negative integer.
• Definition: A continuous random variable X is an exponential random variable with parameter λ > 0 if its pdf is
f(t) = λe^{−λt},  t ≥ 0.
Its cdf is
F(t) = 1 − e^{−λt},  t ≥ 0.
• S(t) = 1 − F(t) = e^{−λt},  t ≥ 0.
• E(X^n) = λ ∫_0^∞ x^n e^{−λx} dx = n!/λ^n = Γ(n + 1)/λ^n.
• Var(X) = 1/λ².
• M_X(t) = λ/(λ − t),  t < λ.
• "Lack of memory" property: P[X > x + y | X > x] = P[X > y].
• Usage: If the number of events in a time period of length 1 is Poisson with parameter λ, and the number of events in a time period of length t is Poisson with parameter λt, then the waiting time between events has an exponential distribution with parameter λ.
Conversely, if X is used to model the time between successive events, then the number of events in t units of time will have a Poisson distribution with parameter λt.
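A small simulation of this duality (assuming NumPy; the rate λ = 3 and horizon T are made-up choices): exponential gaps are accumulated into arrival times, and the counts per unit interval should then behave like Poisson(λ) draws, with mean and variance both near λ.

import numpy as np
rng = np.random.default_rng(0)

lam, T = 3.0, 10_000
gaps = rng.exponential(scale=1 / lam, size=int(2 * lam * T))  # exponential gaps
arrivals = np.cumsum(gaps)
counts, _ = np.histogram(arrivals[arrivals < T], bins=np.arange(0, T + 1))
print(counts.mean(), counts.var())    # both should be close to lambda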
5.4 The Gamma Distribution
• Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx for α > 0.
• Γ(α) = (α − 1)Γ(α − 1).
• Γ(n) = (n − 1)! for any positive integer n.
• Γ(1/2) = √π.
• Definition: A continuous random variable X has a Gamma distribution with parameters (α, β), α, β > 0, if its pdf is
f(x) = β^α x^{α−1} e^{−βx} / Γ(α),  x ∈ [0, ∞).
• E(X^n) = ∏_{j=1}^n (α + n − j) / β^n.
• Var(X) = α/β².
• M_X(t) = (β/(β − t))^α for t < β.
• Usage: If {X_j}_{j=1}^n are i.i.d. continuous random variables with density f(x) = λe^{−λx}, then S_n = Σ_{j=1}^n X_j has a Gamma distribution with parameters (n, λ). This can be used to model the waiting time for the nth occurrence of an event if successive occurrences are independent.
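A simulation sketch of this fact (assuming SciPy; n = 5 and λ = 2 are arbitrary): sums of n i.i.d. exponential(λ) draws are compared to the Gamma(n, λ) cdf with a Kolmogorov-Smirnov test.

import numpy as np
from scipy.stats import gamma, kstest
rng = np.random.default_rng(1)

n, lam = 5, 2.0
S_n = rng.exponential(scale=1 / lam, size=(100_000, n)).sum(axis=1)

# Gamma with parameters (alpha, beta) = (n, lam): SciPy a = n, scale = 1/lam
print(kstest(S_n, gamma(a=n, scale=1 / lam).cdf).pvalue)  # should not be tiny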
5.5 The Normal Distribution
• Definition: A random variable X has a normal distribution if its pdf f(x) is of the form
f(x) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)},  x ∈ (−∞, ∞).
• M_X(t) = e^{µt + σ²t²/2}.
• Let X be a normal random variable with mean µ and standard deviation σ. Then, the transformed random variable Y = aX + b is also normal with mean aµ + b and standard deviation |a|σ.
• Z = (X − µ)/σ is a standard normal random variable (E(Z) = 0, Var(Z) = 1).
• P[x_1 ≤ X ≤ x_2] = P[z_1 ≤ Z ≤ z_2], where z_j = (x_j − µ)/σ.
• E(Z^n) = 0 for n odd, and E(Z^{2n}) = (2n)!/(2^n n!); more generally, E[(X − µ)^{2n}] = ((2n)!/(2^n n!)) σ^{2n}.

5.6 The Central Limit Theorem
• Let {X_j}_{j=1}^n be a sequence of i.i.d. random variables with mean µ and variance σ². If n is large, then the sum S_n = Σ_{j=1}^n X_j will be approximately normal with mean nµ and variance nσ².
• In evaluating P[x_1 ≤ X ≤ x_2] using the continuity correction, we calculate P[x_1 − 0.5 ≤ X ≤ x_2 + 0.5]. Only use this if asked to in the exam, or if σ is small enough that 0.5/σ would change the second decimal place of the z-score.
5.7 The Lognormal Distribution
• A random variable Y is lognormal if Y = e^X for some normal random variable X with mean µ and standard deviation σ. In this case, the pdf f(y) is given by
f(y) = (1/(σy√(2π))) e^{−(1/2)((ln y − µ)/σ)²},  y > 0.
• E(Y) = e^{µ + σ²/2}.
• Var(Y) = (e^{σ²} − 1)e^{2µ + σ²}.
• F_Y(c) = F_X(ln c).
• Usage: Can be used to model insurance claim severity or investment returns.
5.8 The Pareto Distribution
• A random variable X has a Pareto distribution with parameters (α, β) if its pdf has the form
f(x) = (α/β)(β/x)^{α+1},  α > 2,  x ≥ β > 0.
Its cdf is given by
F(x) = 1 − (β/x)^α,  α > 2,  x ≥ β > 0.
• E(X) = αβ/(α − 1).
• Var(X) = αβ²/((α − 2)(α − 1)²).
• λ(x) = α/x.
• Usage: Used to model certain insurance loss amounts.
5.9 The Weibull Distribution
• A random variable X has a Weibull distribution with parameters (α, β), α, β > 0, if its pdf f(x) has the form
f(x) = αβ x^{α−1} e^{−βx^α},  x ≥ 0.
Its cdf is given by F(x) = 1 − e^{−βx^α},  x ≥ 0.
• E(X) = Γ(1 + 1/α) / β^{1/α}.
• Var(X) = (1/β^{2/α}) [Γ(1 + 2/α) − Γ(1 + 1/α)²].
• λ(x) = αβ x^{α−1}.
• Usage: If the failure rate is constant, one might decide to use an exponential distribution. If the failure
rate increases with time, then the Weibull distribution can be useful.
5.10 The Beta Distribution
• A random variable X has a Beta distribution with parameters (α, β), α, β > 0, if its pdf has the form
f(x) = (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1},  0 < x < 1.
• ∫_0^1 x^{α−1}(1 − x)^{β−1} dx = Γ(α)Γ(β)/Γ(α + β).
• E(X) = α/(α + β).
• Var(X) = αβ/((α + β)²(α + β + 1)).
• Usage: Can be used to model random variables whose outcomes are percentages.
5.11 The Chi-Square Distribution with k Degrees of Freedom
• This is a special case of the Gamma distribution with β = 1/2, α = k/2.
6 Multivariate Distributions
6.1 Joint and Marginal Distribution Functions
• Definition: Let X and Y be discrete random variables. The joint probability function for X and Y is the function
p(x, y) = P(X = x, Y = y).
• Definition: The individual distributions for the random variables X and Y are called marginal distributions. Their marginal probability functions are defined, respectively, by
p_X(x) = Σ_{y∈S} p(x, y),    p_Y(y) = Σ_{x∈S} p(x, y).
• Definition: Let X and Y be continuous random variables. The joint probability density function for X and Y is a measurable function f(x, y) satisfying the following properties:
1. f(x, y) ≥ 0 for all x and y.
2. For any Borel set A, P[(X, Y) ∈ A] = ∫∫_A f(x, y)dxdy.
• Definition: Let f(x, y) be the joint density function for the continuous random variables X and Y. Then, the marginal density functions of X and Y are defined by
f_X(x) = ∫_{−∞}^∞ f(x, y)dy,    f_Y(y) = ∫_{−∞}^∞ f(x, y)dx.
• Definition: The cumulative distribution function of a joint distribution is
F(x, y) = P[(X ≤ x) ∩ (Y ≤ y)].
When X and Y are continuous:
F(x, y) = ∫_{−∞}^x ∫_{−∞}^y f(s, t)dt ds    and    f(x, y) = ∂²F(x, y)/∂x∂y.
When X and Y are discrete:
F(x, y) = Σ_{s≤x} Σ_{t≤y} p(s, t).
• When X and Y are continuous random variables, F_X(x) = lim_{y→∞} F(x, y).
• Let X and Y be given. If U = u(X, Y) and V = v(X, Y), with inverse functions x = h(u, v) and y = k(u, v), then the joint density of U and V is
g(u, v) = f(h(u, v), k(u, v)) · |(∂h/∂u)(∂k/∂v) − (∂h/∂v)(∂k/∂u)|.

6.2 Conditional Distributions
• Definition: Let X and Y be discrete random variables. The conditional probability function of X given that Y = y is given by
P(X = x|Y = y) = p(x|y) = p(x, y)/p_Y(y).
Similarly, the conditional probability function of Y given that X = x is given by
P(Y = y|X = x) = p(y|x) = p(x, y)/p_X(x).
• Definition: Let X and Y be continuous random variables with joint density function f(x, y). The conditional density function for X given that Y = y is given by
f(x|Y = y) = f(x|y) = f(x, y)/f_Y(y).
Similarly, the conditional density for Y given that X = x is given by
f(y|X = x) = f(y|x) = f(x, y)/f_X(x).

6.3 Conditional Expected Value
• Definition: Let X and Y be discrete random variables with conditional probability functions p(x|y) and p(y|x). Then, the conditional expectation of Y given that X = x is given by
E(Y|X = x) = Σ_{y∈S} y p(y|x).
Similarly, the conditional expectation of X given that Y = y is given by
E(X|Y = y) = Σ_{x∈S} x p(x|y).
• Let X and Y be continuous random variables, with conditional density functions f(x|y) and f(y|x). Then, the conditional expectation of Y given that X = x is given by
E(Y|X = x) = ∫_{−∞}^∞ y f(y|x)dy.
Similarly, the conditional expectation of X given that Y = y is given by
E(X|Y = y) = ∫_{−∞}^∞ x f(x|y)dx.
6.4 Independence of Random Variables
• Definition: Two discrete random variables X and Y are independent if
p(x, y) = p_X(x)p_Y(y), or equivalently F(x, y) = F_X(x)F_Y(y).
• If X and Y are discrete and independent random variables, then p(x|y) = p_X(x) and p(y|x) = p_Y(y).
• Definition: Two continuous random variables X and Y are independent if
f(x, y) = f_X(x)f_Y(y), or equivalently F(x, y) = F_X(x)F_Y(y).
• If X and Y are continuous and independent random variables, then f(x|y) = f_X(x) and f(y|x) = f_Y(y).
• Definition: Suppose that an experiment has k possible outcomes with probabilities p_1, . . . , p_k respectively. If the experiment is performed n successive times (independently), let X_i denote the number of experiments that resulted in outcome i, so that X_1 + · · · + X_k = n. The multinomial probability function is
P[X_1 = n_1, . . . , X_k = n_k] = (n!/(n_1! · · · n_k!)) p_1^{n_1} · · · p_k^{n_k}.
For each i ∈ {1, 2, . . . , k}, X_i is a random variable with
E[X_i] = np_i,  Var[X_i] = np_i(1 − p_i).
Also, Cov(X_i, X_j) = −np_i p_j for i ≠ j.
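A simulation sketch of these multinomial moment formulas (assuming NumPy; n = 20 and the probability vector are arbitrary):

import numpy as np
rng = np.random.default_rng(3)

n, p = 20, np.array([0.5, 0.3, 0.2])
X = rng.multinomial(n, p, size=200_000)     # each row is (X_1, X_2, X_3)

print(X.mean(axis=0), n * p)                # E[X_i] = n p_i
print(np.cov(X[:, 0], X[:, 1])[0, 1],       # sample Cov(X_1, X_2)
      -n * p[0] * p[1])                     # theory: -n p_1 p_2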
6.5 Covariance and Functions of Independent Random Variables
• Let X and Y be discrete, independent random variables, with S = X + Y. Then,
p_S(s) = Σ_x p_X(x) p_Y(s − x),
where the sum runs over all possible values x of X.
• Let X and Y be continuous, independent random variables, with S = X + Y. Then,
f_S(s) = ∫_{−∞}^∞ f_X(x) f_Y(s − x)dx.
• Let X and Y be independent, exponential random variables with parameters β and λ respectively. If
M = min(X, Y ), then M is an exponential random variable with parameter β + λ.
• Let X and Y be independent random variables. Then,
S_min(t) = S_X(t)S_Y(t),    F_max(t) = F_X(t)F_Y(t),
where the subscripts refer to min(X, Y) and max(X, Y); see the sketch below.
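These identities can be checked by simulation (assuming NumPy; the exponential rates β, λ and the point t are arbitrary), which also illustrates the preceding bullet that min(X, Y) is exponential with parameter β + λ:

import numpy as np
rng = np.random.default_rng(4)

beta, lam, t = 1.5, 2.5, 0.4
X = rng.exponential(1 / beta, 500_000)
Y = rng.exponential(1 / lam, 500_000)

print(np.mean(np.minimum(X, Y) > t), np.exp(-(beta + lam) * t))  # S_min = S_X S_Y
print(np.mean(np.maximum(X, Y) <= t),                            # F_max = F_X F_Y
      (1 - np.exp(-beta * t)) * (1 - np.exp(-lam * t)))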
• E[h(X)] = ∫_{−∞}^∞ h(x)f(x)dx, where the integral is replaced with summation when X is discrete.
• E[h(X, Y)] = ∫_{−∞}^∞ ∫_{−∞}^∞ h(x, y)f(x, y)dxdy, where the integral is replaced with summation in the discrete case.
• Let X and Y be independent random variables. Then, E(XY) = E(X)E(Y).
• If g and h are any measurable functions, and X and Y are independent, then E[g(X)h(Y)] = E[g(X)]E[h(Y)].
• Definition: Let X and Y be random variables. The covariance of X and Y is defined by
Cov(X, Y) = E[(X − µ_X)(Y − µ_Y)].
Alternatively, Cov(X, Y) = E(XY) − E(X)E(Y). In particular, if X and Y are independent, Cov(X, Y) = 0.
• Var(aX + bY + c) = a²V(X) + b²V(Y) + 2ab·Cov(X, Y).
• Cov(aX + bY + c, dW + eZ + f) = ad Cov(X, W) + ae Cov(X, Z) + bd Cov(Y, W) + be Cov(Y, Z).
• Definition: Let X and Y be random variables. The correlation coefficient between X and Y is defined by
ρ_XY = Cov(X, Y)/(σ_X σ_Y) = Cov(X, Y)/√(V(X)V(Y)).
• If X and Y are independent random variables, M_{X+Y}(t) = M_X(t)M_Y(t).
• Definition: Let X and Y be random variables. The joint moment generating function is defined by
M_{X,Y}(s, t) = E(e^{sX + tY}).
• Let {X_i}_{i=1}^n be random variables. Then,
E[Σ_{j=1}^n X_j] = Σ_{j=1}^n E(X_j),
V(Σ_{j=1}^n X_j) = Σ_{j=1}^n V(X_j) + 2 Σ_{i<j} Cov(X_i, X_j).
• E[E(X|Y )] = E(X), E[E(Y |X)] = E(Y ).
• V (X) = E[V (X|Y )] + V [E(X|Y )], V (Y ) = E[V (Y |X)] + V [E(Y |X)].
• Let N be a Poisson random variable with parameter λ. If {X_j} are i.i.d. random variables, independent of N, with S = X_1 + · · · + X_N, then
E(S) = E(N) · E(X) = λE(X),
V(S) = λE(X²) = λ[V(X) + (E(X))²].
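A compound-Poisson simulation sketch (assuming NumPy; λ = 4 and uniform [0, 2) severities, so E(X) = 1 and E(X²) = 4/3, are made-up choices):

import numpy as np
rng = np.random.default_rng(5)

lam = 4.0
N = rng.poisson(lam, size=100_000)
S = np.array([rng.uniform(0, 2, n).sum() for n in N])  # S = X_1 + ... + X_N

print(S.mean(), lam * 1.0)       # E(S) = lambda E(X)
print(S.var(), lam * (4 / 3))    # V(S) = lambda E(X^2)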
6.6 Sums of Particular Distributions
Assume X_1, . . . , X_k are independent, and let Y = X_1 + · · · + X_k.
• X_i Bernoulli(p) ⇒ Y Binomial(k, p).
• X_i Binomial with parameter n_i (and common p) ⇒ Y Binomial with parameter Σ n_i.
• X_i Poisson with parameter λ_i ⇒ Y Poisson with parameter Σ λ_i.
• X_i Geometric with parameter p ⇒ Y Negative Binomial with parameters (k, p).
• X_i Negative Binomial with parameters (r_i, p) ⇒ Y Negative Binomial with parameters (Σ r_i, p).
• X_i Normal with parameters (µ_i, σ_i²) ⇒ Y Normal with parameters (Σ µ_i, Σ σ_i²).
• X_i Exponential with mean µ ⇒ Y Γ with parameters (k, 1/µ).
• X_i Γ with parameters (α_i, β) ⇒ Y Γ with parameters (Σ α_i, β).
6.7 Order Statistics
• Definition: Suppose that X has pdf f(x) and cdf F(x). Let {X_i}_{i=1}^n be a collection of i.i.d. copies of X. The order statistics of {X_i}_{i=1}^n are {Y_i}_{i=1}^n, where the Y_i's are the X_i's ordered from smallest to largest, and
g_k(t) = (n!/((k − 1)!(n − k)!)) [F(t)]^{k−1} [1 − F(t)]^{n−k} f(t),
g_1(t) = n[1 − F(t)]^{n−1} f(t),
g_n(t) = n[F(t)]^{n−1} f(t),
where g_i is the pdf of Y_i.
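For U(0, 1) samples, integrating the pdfs above gives the cdfs G_n(t) = t^n for the maximum and G_1(t) = 1 − (1 − t)^n for the minimum; a simulation sketch (assuming NumPy, with arbitrary n and t):

import numpy as np
rng = np.random.default_rng(6)

n, t = 5, 0.7
U = np.sort(rng.uniform(0, 1, size=(300_000, n)), axis=1)  # rows sorted: Y_1..Y_n

print(np.mean(U[:, -1] <= t), t**n)            # P(Y_n <= t) = F(t)^n, F(t) = t
print(np.mean(U[:, 0] <= t), 1 - (1 - t)**n)   # P(Y_1 <= t) = 1 - (1 - F(t))^n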
6.8 Mixtures of Distributions
• Definition: Let X_1 and X_2 be random variables with pdf's f_1(x) and f_2(x), and 0 < a < 1. Define a new random variable X, called the mixture of X_1 and X_2, with pdf
f(x) = a f_1(x) + (1 − a) f_2(x).
• E(X^k) = aE(X_1^k) + (1 − a)E(X_2^k).
• F(x) = aF_1(x) + (1 − a)F_2(x).
• M_X(t) = aM_{X_1}(t) + (1 − a)M_{X_2}(t).
7 Insurance Terminology
7.1 Insurance Policy Deductible
• If X represents a loss random variable with pdf f_X(x) and cdf F_X(x), then for an insurance policy with an ordinary deductible of amount d, the insurance will pay
Y = 0 if X ≤ d;  Y = X − d if X > d;  i.e., Y = max{X − d, 0}.
• When a loss occurs, the expected amount paid by the insurance may be called the expected cost per loss, and is equal to
E[Y] = ∫_d^∞ (x − d) f_X(x)dx = ∫_d^∞ (1 − F_X(x))dx.
• The expected cost per payment is the average amount paid by the insurance for the non-zero payments that are made. This is
∫_d^∞ (x − d) f_X(x)dx / (1 − F_X(d)).
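A numeric sketch for an exponential loss (assuming SciPy; the mean loss θ = 1000 and deductible d = 250 are made-up figures). By the memoryless property, the expected cost per payment should come out to exactly θ:

import numpy as np
from scipy.integrate import quad

theta, d = 1000.0, 250.0
f = lambda x: np.exp(-x / theta) / theta        # exponential pdf, mean theta
S = lambda x: np.exp(-x / theta)                # survival function

per_loss, _ = quad(lambda x: (x - d) * f(x), d, np.inf)
print(per_loss, theta * np.exp(-d / theta))     # = int_d^inf (1 - F(x)) dx

print(per_loss / S(d), theta)                   # expected cost per payment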
7.2 Insurance Policy Limit
• If X represents a loss random variable with pdf f_X(x) and cdf F_X(x), then for an insurance policy with a policy limit of amount u, when a loss occurs, the amount paid by the insurance is
Z = X if X ≤ u;  Z = u if X > u;  i.e., Z = min{X, u}.
• The average amount paid by the insurance when a loss occurs is
E[Z] = ∫_0^u x f_X(x)dx + u[1 − F_X(u)] = ∫_0^u [1 − F_X(x)]dx.
• If insurance policy 1 has a deductible of c and insurance policy 2 has a limit of c, then when a loss
occurs, the combined payment of the two policies is Y + Z = X so that the two policies combined cover
the loss X.
7.3 Combined Policy Limit and Deductible
• If the loss random variable is X and a policy has a deductible of amount d and maximum payment of u − d, then the policy pays
0 if X ≤ d;  X − d if d < X ≤ u;  u − d if X > u.
• The expected cost per loss will be
∫_d^u (x − d) f_X(x)dx + (u − d)[1 − F_X(u)] = ∫_d^u [1 − F_X(x)]dx.
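The identity above can be confirmed numerically, since ∫_d^u (x − d)f_X(x)dx alone understates the cost unless the capped payments (u − d)[1 − F_X(u)] are added; a sketch for an exponential loss with made-up θ, d, u (assuming SciPy):

import numpy as np
from scipy.integrate import quad

theta, d, u = 1000.0, 250.0, 2000.0
f = lambda x: np.exp(-x / theta) / theta
S = lambda x: np.exp(-x / theta)

lhs, _ = quad(lambda x: (x - d) * f(x), d, u)
lhs += (u - d) * S(u)                 # capped payments: (u - d) P(X > u)
rhs, _ = quad(S, d, u)                # int_d^u [1 - F_X(x)] dx
print(np.isclose(lhs, rhs))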