Intensive Actuarial Training for
Bulgaria
January 2007
Lecture 0 – Review of Probability Theory
By Michael Sze, PhD, FSA, CFA
Topics Covered
• Some definitions and properties
• Moment generating functions
• Some common probability distributions
• Conditional probability
• Properties of expectations
Some Definitions and Properties
• Cumulative distribution function F(x)
– F is non-decreasing: a < b ⇒ F(a) ≤ F(b)
– lim b→∞ F(b) = 1
– lim a→−∞ F(a) = 0
– F is right continuous: bn ↓ b ⇒ lim n→∞ F(bn) = F(b)
• E[X] = Σx x p(x) = μ, where p(x) = P(X = x)
– E[g(X)] = Σi g(xi) p(xi)
– E[aX + b] = a E[X] + b
– E[X^2] = Σi xi^2 p(xi)
– Var(X) = E[(X − μ)^2] = E[X^2] − (E[X])^2
– Var(aX + b) = a^2 Var(X)
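The expectation and variance identities above can be checked on a small discrete distribution. The fair-die pmf below is an illustrative example, not from the lecture:

```python
# Sketch: checking E[g(X)], Var(X) = E[X^2] - (E[X])^2, and
# Var(aX + b) = a^2 Var(X) on a hand-made pmf (fair six-sided die;
# the example values are illustrative).
pmf = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

def expect(pmf, g=lambda x: x):
    """E[g(X)] = sum_i g(x_i) p(x_i)."""
    return sum(g(x) * p for x, p in pmf.items())

mean = expect(pmf)                                   # E[X] = 3.5
var = expect(pmf, lambda x: x**2) - mean**2          # Var(X) = 35/12

a, b = 2.0, 5.0
pmf_ax_b = {a * x + b: p for x, p in pmf.items()}    # distribution of aX + b
mean_ax_b = expect(pmf_ax_b)                         # = a E[X] + b
var_ax_b = expect(pmf_ax_b, lambda x: x**2) - mean_ax_b**2  # = a^2 Var(X)
```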
Moment Generating Functions
• Definition: mgf MX(t) = E[e^(tX)]
• Properties:
– There is a 1–1 correspondence between f(x) and MX(t)
– X, Y independent r.v. ⇒ MX+Y(t) = MX(t) · MY(t)
– X1, …, Xn independent ⇒ MΣXi(t) = Πi MXi(t)
– mgf of a mixture density f1 + f2 + f3 is MX1(t) + MX2(t) + MX3(t)
– M′X(0) = E[X]
– M^(n)X(0) = E[X^n]
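The derivative properties M′X(0) = E[X] and M″X(0) = E[X^2] can be verified numerically. The sketch below uses the binomial mgf (pe^t + q)^n with illustrative n and p, and finite differences in place of symbolic derivatives:

```python
import math

# Sketch: checking M'_X(0) = E[X] and M''_X(0) = E[X^2] numerically
# for the binomial mgf M_X(t) = (p e^t + q)^n; n, p are illustrative.
n, p = 10, 0.3
q = 1 - p

def mgf(t):
    return (p * math.exp(t) + q) ** n

h = 1e-6
deriv_at_0 = (mgf(h) - mgf(-h)) / (2 * h)            # ~ E[X] = n p = 3.0

h2 = 1e-4
second_deriv_at_0 = (mgf(h2) - 2 * mgf(0.0) + mgf(-h2)) / h2**2
# ~ E[X^2] = n p q + (n p)^2 = 11.1
```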
Some Common Discrete
Probability Distributions
• Binomial random variable (r.v.) with parameters (n, p)
• Poisson r.v. with parameter λ
• Geometric r.v. with parameter p
• Negative binomial r.v. with parameters (r, p)
Some Common Continuous
Probability Distributions
• Uniform r.v. on (a, b)
• Normal r.v. with parameters (μ, σ^2)
• Exponential r.v. with parameter λ
• Gamma r.v. with parameters (s, λ), s, λ > 0
Binomial r.v. B(n, p)
• n is an integer, 0 ≤ p ≤ 1, q = 1 − p
• Probability of getting i heads in n trials
• p(i) = nCi p^i q^(n − i)
• E[X] = np
• Var(X) = npq
• MX(t) = (p e^t + q)^n
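A quick numeric check of the binomial formulas, using `math.comb` for nCi and illustrative values of n and p:

```python
import math

# Sketch: the binomial pmf, mean, and variance above, with
# illustrative n = 8, p = 0.4.
n, p = 8, 0.4
q = 1 - p

def binom_pmf(i):
    """p(i) = nCi p^i q^(n - i)."""
    return math.comb(n, i) * p**i * q**(n - i)

total = sum(binom_pmf(i) for i in range(n + 1))             # sums to 1
mean = sum(i * binom_pmf(i) for i in range(n + 1))          # = n p = 3.2
var = sum(i * i * binom_pmf(i) for i in range(n + 1)) - mean**2  # = n p q
```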
Poisson r.v. with parameter λ
• λ > 0, the expected number of events
• Poisson is a good approximation of the binomial for large n, small p, and moderate np
• λ = np
• p(i) = P(X = i) = e^(−λ) λ^i / i!
• E[X] = Var(X) = λ
• MX(t) = exp[λ (e^t − 1)]
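The "large n, small p" approximation can be seen directly by comparing the two pmfs. The values n = 1000 and p = 0.003 below are illustrative:

```python
import math

# Sketch: Poisson(lam = n p) approximating Binomial(n, p) for large n
# and small p; n and p are illustrative.
n, p = 1000, 0.003
lam = n * p  # 3.0

def binom_pmf(i):
    return math.comb(n, i) * p**i * (1 - p)**(n - i)

def poisson_pmf(i):
    return math.exp(-lam) * lam**i / math.factorial(i)

# The two pmfs agree closely point by point for small i:
max_gap = max(abs(binom_pmf(i) - poisson_pmf(i)) for i in range(15))
```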
Geometric r.v. with parameter p
• 0 ≤ p ≤ 1, probability of success in one trial
• Geometric r.v. X is the number of the trial on which the first success occurs
• p(n) = P(X = n) = q^(n − 1) p
• E[X] = 1/p
• Var(X) = q / p^2
• MX(t) = p e^t / (1 − q e^t)
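The geometric mean and variance formulas can be confirmed by truncated summation, since the tail q^n vanishes quickly. The value p = 0.25 is illustrative:

```python
# Sketch: geometric pmf p(n) = q^(n-1) p, checking E[X] = 1/p and
# Var(X) = q/p^2 by a truncated sum (p is illustrative).
p = 0.25
q = 1 - p

def geom_pmf(n):
    return q ** (n - 1) * p

N = 500  # truncation point; the tail beyond is negligible for this p
total = sum(geom_pmf(n) for n in range(1, N + 1))           # ~ 1
mean = sum(n * geom_pmf(n) for n in range(1, N + 1))        # ~ 1/p = 4
var = sum(n * n * geom_pmf(n) for n in range(1, N + 1)) - mean**2
# var ~ q/p^2 = 12
```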
Negative Binomial r.v. with parameters (r, p)
• p = probability of success in each trial
• r = number of successes wanted
• Negative binomial r.v. X is the number of the trial on which the r-th success occurs
• p(n) = P(X = n) = n−1Cr−1 q^(n − r) p^r
• E[X] = r / p
• Var(X) = r q / p^2
• MX(t) = [p e^t / (1 − q e^t)]^r
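The same truncated-sum check works for the negative binomial; r and p below are illustrative:

```python
import math

# Sketch: negative binomial pmf p(n) = (n-1)C(r-1) q^(n-r) p^r,
# checking E[X] = r/p and Var(X) = r q/p^2 (r, p are illustrative).
r, p = 3, 0.4
q = 1 - p

def nbinom_pmf(n):
    return math.comb(n - 1, r - 1) * q ** (n - r) * p ** r

N = 400  # truncation point; the tail beyond is negligible here
total = sum(nbinom_pmf(n) for n in range(r, N + 1))          # ~ 1
mean = sum(n * nbinom_pmf(n) for n in range(r, N + 1))       # ~ r/p = 7.5
var = sum(n * n * nbinom_pmf(n) for n in range(r, N + 1)) - mean**2
# var ~ r q/p^2 = 11.25
```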
Uniform r.v. on (a, b)
• Defined on a < x < b
• f(x) = 1 / (b − a) for a < x < b, 0 otherwise
• F(c) = (c − a) / (b − a) for a < c < b; 0 for c ≤ a; 1 for c ≥ b
• E[X] = (a + b) / 2
• Var(X) = (b − a)^2 / 12
• MX(t) = (e^(tb) − e^(ta)) / [t (b − a)]
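The uniform mean and variance can be recovered from the density by numerical integration; the interval (2, 5) is illustrative:

```python
# Sketch: checking E[X] = (a+b)/2 and Var(X) = (b-a)^2/12 for the
# uniform density f(x) = 1/(b-a) by a midpoint Riemann sum
# (a, b are illustrative).
a, b = 2.0, 5.0
steps = 100_000
dx = (b - a) / steps
f = 1.0 / (b - a)

xs = [a + (k + 0.5) * dx for k in range(steps)]   # midpoints
mean = sum(x * f * dx for x in xs)                # ~ (a + b)/2 = 3.5
var = sum(x * x * f * dx for x in xs) - mean**2   # ~ (b - a)^2/12 = 0.75
```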
Normal r.v. with parameters (μ, σ^2)
• By the central limit theorem, many r.v.s can be approximated by a normal distribution
• f(x) = [1/√(2πσ^2)] exp[−(x − μ)^2 / (2σ^2)]
• E[X] = μ
• Var(X) = σ^2
• MX(t) = exp[μt + σ^2 t^2 / 2]
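The central-limit statement above can be illustrated by simulation: sums of i.i.d. uniforms concentrate around the predicted normal mean and variance. The sample sizes and seed below are illustrative:

```python
import random

# Sketch of the CLT claim above: a sum of i.i.d. Uniform(0,1) terms is
# approximately normal with mean n/2 and variance n/12 (each uniform
# has mean 1/2 and variance 1/12). Sizes and seed are illustrative.
random.seed(0)
n_terms, n_samples = 48, 20_000

sums = [sum(random.random() for _ in range(n_terms))
        for _ in range(n_samples)]

sample_mean = sum(sums) / n_samples                 # ~ n_terms/2 = 24
sample_var = sum((s - sample_mean) ** 2 for s in sums) / n_samples
# sample_var ~ n_terms/12 = 4
```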
Exponential r.v. with parameter λ
• λ > 0
• Exponential r.v. X gives the amount of waiting time until the next event happens
• X is memoryless: P(X > s + t | X > t) = P(X > s) for all s, t ≥ 0
• f(x) = λ e^(−λx) for x ≥ 0, 0 otherwise
• F(a) = 1 − e^(−λa) for a ≥ 0
• E[X] = 1 / λ
• Var(X) = 1 / λ^2
• MX(t) = λ / (λ − t) for t < λ
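The memoryless property follows from the survival function S(x) = P(X > x) = e^(−λx), and can be verified directly; λ, s, and t below are illustrative:

```python
import math

# Sketch: checking the memoryless property
# P(X > s+t | X > t) = P(X > s) via S(x) = e^(-lam x)
# (lam, s, t are illustrative).
lam = 0.5

def survival(x):
    return math.exp(-lam * x)   # P(X > x) = 1 - F(x)

s, t = 1.7, 3.2
lhs = survival(s + t) / survival(t)   # P(X > s+t | X > t)
rhs = survival(s)                     # P(X > s)
# lhs equals rhs up to rounding
```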
Gamma r.v. with parameters (s, λ)
• s, λ > 0
• For integer s, gamma r.v. X gives the amount of waiting time until the next s events happen
• f(x) = λ e^(−λx) (λx)^(s − 1) / Γ(s) for x ≥ 0, 0 otherwise
• Γ(s) = ∫0^∞ e^(−y) y^(s − 1) dy
• Γ(n) = (n − 1)!, Γ(1) = Γ(2) = 1
• E[X] = s / λ
• Var(X) = s / λ^2
• MX(t) = [λ / (λ − t)]^s for t < λ
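The Γ(n) = (n − 1)! identity is available in the standard library as `math.gamma`, and E[X] = s/λ can be recovered from the density by numerical integration; s and λ below are illustrative:

```python
import math

# Sketch: checking Gamma(n) = (n-1)! with math.gamma, and E[X] = s/lam
# for the gamma density by a truncated midpoint Riemann sum
# (s, lam are illustrative).
gaps = [abs(math.gamma(n) - math.factorial(n - 1)) for n in range(1, 8)]

s, lam = 3.0, 2.0
steps, upper = 200_000, 40.0   # the density is negligible beyond x = 40
dx = upper / steps

def gamma_pdf(x):
    return lam * math.exp(-lam * x) * (lam * x) ** (s - 1) / math.gamma(s)

xs = [(k + 0.5) * dx for k in range(steps)]        # midpoints
mean = sum(x * gamma_pdf(x) * dx for x in xs)      # ~ s/lam = 1.5
```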
Conditional Probability
• Definition: For P(F) > 0, P(E|F) = P(EF) / P(F)
• Properties:
– For A1, …, An, where Ai Aj = ∅ for i ≠ j (exclusive), and ∪i Ai = S (exhaustive), then
P(B) = Σi P(B|Ai) P(Ai)
– Bayes' Theorem: For P(B) > 0, P(A|B) = [P(B|A) · P(A)] / P(B)
– E[X|A] = Σi xi P(X = xi | A)
– E[X] = Σi E[X|Ai] P(Ai)
Properties of Expectation
• E[X + Y] = E[X] + E[Y]
• E[i Xi ] = i E[Xi ]
• If X,Y are independent, then
E[g(X) h(Y)] = E[g(X)] E[h(Y)]
• Def.: Cov(X,Y) = E[(X-E[X])(Y-E[Y])]
• Cov(X,Y) = Cov(Y,X)
• Cov(X,X) = Var(X)
• Cov(aX,Y) = a Cov(X,Y)
Properties of Expectation (continued)
• Cov(i Xi, jYj) = i j Cov(Xi,Yj)
• Var(i Xi) = iVar(Xi) + ij Cov(Xi,Yj)
• If SN = X1+…+XN is a compound process
– Xi are mutually independent,
– Xi are independent of N, and
– Xi have the same distribution, then
E[SN] = i E[Xi]
Var(SN) = E[N] Var(X) + Var(N) (E[X])2
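The compound-process formulas can be checked by Monte Carlo simulation. The choice of N ~ Binomial(20, 0.3) for the claim count and X ~ Uniform(0, 1) for the claim sizes below is purely illustrative:

```python
import random

# Sketch: Monte Carlo check of E[S_N] = E[N] E[X] and
# Var(S_N) = E[N] Var(X) + Var(N) (E[X])^2 for a compound process.
# N ~ Binomial(20, 0.3) and X ~ Uniform(0,1) are illustrative choices.
random.seed(1)
trials = 40_000
n_trials, p_hit = 20, 0.3

def draw_N():
    # Binomial count: number of successes in n_trials Bernoulli trials
    return sum(random.random() < p_hit for _ in range(n_trials))

samples = []
for _ in range(trials):
    N = draw_N()
    samples.append(sum(random.random() for _ in range(N)))  # S_N

mean_S = sum(samples) / trials
var_S = sum((s - mean_S) ** 2 for s in samples) / trials

# Theory: E[N] = 6, Var(N) = 4.2, E[X] = 0.5, Var(X) = 1/12, so
# E[S_N] = 3.0 and Var(S_N) = 6/12 + 4.2 * 0.25 = 1.55
```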