Chapter 1- Set Theory:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
De Morgan's Laws:
(A ∪ B)′ = A′ ∩ B′
(A ∩ B)′ = A′ ∪ B′
P(B) = P(B ∩ A) + P(B ∩ A′)
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)
Independence:
P(A ∩ B) = P(A)P(B)
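These identities can be sanity-checked on a small, equally likely sample space (a Python illustration, not part of the original notes):

```python
import math

# Sanity-check inclusion-exclusion and De Morgan on a small sample space.
S = set(range(1, 13))                      # equally likely outcomes 1..12
A = {n for n in S if n % 2 == 0}           # even numbers
B = {n for n in S if n % 3 == 0}           # multiples of 3

def P(E):
    """Probability of an event E under the uniform measure on S."""
    return len(E) / len(S)

# P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
assert math.isclose(P(A | B), P(A) + P(B) - P(A & B))

# De Morgan: (A ∪ B)′ = A′ ∩ B′
assert S - (A | B) == (S - A) & (S - B)
```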
Chapter 2- Counting Techniques:
Permutations:
Order matters!!
P(n, k) = number of ordered samples of size k from n without replacement.
P(5,3) = 5 * 4 * 3
P(10,2) = 10 * 9
Combinations:
Order doesn't matter!!
C(n, k) = number of unordered samples of size k from n, sampling without replacement.
C(5,3) = (5 * 4 * 3)/(3 * 2 * 1) = (5 choose 3)
C(10,2) = (10 * 9)/(2 * 1) = (10 choose 2)
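Python 3.8+ provides both counts directly in the math module, which makes a quick check easy (an aside, not in the original notes):

```python
from math import comb, perm

assert perm(5, 3) == 5 * 4 * 3                     # ordered: 60
assert perm(10, 2) == 10 * 9                       # ordered: 90
assert comb(5, 3) == (5 * 4 * 3) // (3 * 2 * 1)    # unordered: 10
assert comb(10, 2) == (10 * 9) // (2 * 1)          # unordered: 45
```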
Samples With Replacement:
n objects, sample size k:
Number of ordered samples = n^k
Distinguishable permutations:
How many distinct words can you make from the word Mississippi? 11!/(4! 4! 2!)
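The Mississippi count can be verified two ways, with factorials or by placing the repeated letters group by group (a sketch, not from the notes):

```python
from math import comb, factorial

# Mississippi has 11 letters: 1 M, 4 I's, 4 S's, 2 P's.
n_words = factorial(11) // (factorial(4) * factorial(4) * factorial(2))

# Equivalent multinomial view: choose positions for each letter group.
assert n_words == comb(11, 1) * comb(10, 4) * comb(6, 4) * comb(2, 2)
assert n_words == 34650
```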
Chapter 3- Conditional Probability:
P(A|B) = 1 − P(A′|B)
P(A|B) = P(A ∩ B) / P(B)
P(A ∩ B) = P(A|B)P(B)
Bayes' Theorem:
P(A|B) = P(A)P(B|A) / P(B)
P(B) = P(A)P(B|A) + P(A′)P(B|A′)
Independence:
P(A|B) = P(A)
General multiplication rule:
P(A ∩ B ∩ C) = P(A)P(B|A)P(C|A ∩ B)
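A worked Bayes computation helps make the total-probability step concrete. The numbers below are hypothetical, chosen only to illustrate the formulas:

```python
# A test has P(pos|sick) = 0.95, P(pos|healthy) = 0.02, and P(sick) = 0.01.
p_sick = 0.01
p_pos_given_sick = 0.95
p_pos_given_healthy = 0.02

# Total probability: P(B) = P(A)P(B|A) + P(A')P(B|A')
p_pos = p_sick * p_pos_given_sick + (1 - p_sick) * p_pos_given_healthy

# Bayes: P(A|B) = P(A)P(B|A) / P(B)
p_sick_given_pos = p_sick * p_pos_given_sick / p_pos
assert abs(p_sick_given_pos - 0.3242) < 0.001   # only about a 32% chance
```

Despite the accurate test, a positive result leaves P(sick|pos) around 0.32 because the condition is rare.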
Chapter 4- Random Variables:
Continuous random variables:
The probability of any single value is zero, since there are infinitely many possible values.
Probability Density Function (pdf):
For a discrete random variable, f(x) = P(X = x); for a continuous one, f(x) is a density and P(X = x) = 0.
P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a)
∫_{−∞}^{∞} f(x) dx = 1
Cumulative Distribution Function (cdf):
F(x) = P(X ≤ x)
P(X ≤ x) = ∫_{−∞}^{x} f(t) dt
P(X ≤ x) = P(X < x) for continuous X
F(−∞) = 0
F(∞) = 1
Converting between pdf and cdf:
F(x) = ∫_{−∞}^{x} f(t) dt
f(x) = F′(x)
Mean and Variance rules:
Var(X + Y + c) = Var(X) + Var(Y) + 2Cov(X, Y)
Var(aX − bY) = a²Var(X) + b²Var(Y) − 2abCov(X, Y)
σ(aX + b) = |a|σ(X)
E(aX + b) = aE(X) + b
Approximations of discrete random variables:
When using an integral to approximate a discrete case (which can only take integer values), you need to change the limits:
P(a ≤ X ≤ b) ≈ P(a − 0.5 < Y < b + 0.5) = ∫_{a−0.5}^{b+0.5} f(y) dy
Chebyshev's Inequality:
X is a random variable with mean μ and variance σ².
P(|X − μ| ≥ kσ) ≤ 1/k²
The probability that X is not within kσ of the mean is at most 1/k².
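The bound holds for any distribution with finite variance. A seeded simulation (an illustration, not part of the notes) shows it holds, and how loose it typically is:

```python
import random
import statistics

# Empirical check of Chebyshev's bound on a simulated exponential sample.
random.seed(42)
xs = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(xs)
sigma = statistics.pstdev(xs)

for k in (1.5, 2, 3):
    tail = sum(abs(x - mu) >= k * sigma for x in xs) / len(xs)
    assert tail <= 1 / k**2   # the bound holds, and is usually very loose
```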
Percentiles:
P(X ≤ x_p) = F(x_p) = p
To find the 35th percentile:
P(X ≤ x_0.35) = F(x_0.35) = 0.35. Solve for x.
Moment generating function:
M_X(t) = E(e^{tX}) = Σ e^{tx} p(x) (discrete case)
M_X(0) = 1
M_X′(0) = E(X)
M_X′′(0) = E(X²)
If Z = a + bX:
M_Z(t) = e^{at} M_X(bt)
If X and Y are independent: M_{X+Y}(t) = M_X(t) · M_Y(t)
Mode:
The point where f(x) reaches its maximum. Set f′(x) = 0 (an interior maximum occurs where the derivative is 0) and solve for x; also check the endpoints of the support.
Median:
-If given the cdf, set F(x) = 0.5 and solve for x.
-If given the pdf, set ∫ f(t) dt (from the lower limit of support up to x) equal to 0.5 and solve for x.
-For split distributions, draw graphs or substitute in limits to decide where 0.5 falls.
Standard deviations:
"What percent of claims fall within one standard deviation of the mean?"
-Find E(X) and σ.
-Make the range [E(X) − σ, E(X) + σ].
-Add up the percentage of claims in that range.
Integrals:
∫_{−2}^{4} (|x|/10) dx = ∫_{0}^{4} (x/10) dx − ∫_{−2}^{0} (x/10) dx
∫ x^{−1} dx = ln|x| + C
Chapter 5- Discrete Distributions:

Binomial:
f(x) = (n choose x) p^x q^{n−x}, x = 0, 1, 2, …, n
M_X(t) = (pe^t + q)^n
E(X) = np, Var(X) = npq

Negative Binomial (x failures before the k-th success):
f(x) = (x + k − 1 choose x) p^k q^x
M_X(t) = [p/(1 − qe^t)]^k
E(X) = kq/p, Var(X) = kq/p²

Geometric (x failures before the first success):
f(x) = q^x p
M_X(t) = p/(1 − qe^t)
E(X) = q/p, Var(X) = q/p²

Hypergeometric (sample of n taken from a total of m = m1 + m2):
f(x) = C(m1, x) C(m2, n − x) / C(m, n)
E(X) = n(m1/m)
Var(X) = n(m1/m)(m2/m)[(m − n)/(m − 1)]

Poisson (λ is the rate of a rare event):
f(x) = e^{−λ} λ^x / x!
M_X(t) = e^{λ(e^t − 1)}
E(X) = λ, Var(X) = λ

Uniform (discrete, on 1, 2, …, N):
f(x) = 1/N
M_X(t) = e^t(e^{Nt} − 1) / [N(e^t − 1)]
E(X) = (N + 1)/2, Var(X) = (N² − 1)/12

Bernoulli:
f(x) = p^x q^{1−x}, x = 0, 1
M_X(t) = pe^t + q
E(X) = p, Var(X) = pq

Poisson Approximation to the Binomial:
f(x) ≈ e^{−np}(np)^x / x!, x = 0, 1, 2, …, n
good: n ≥ 20, p ≤ 0.05; great: n ≥ 100, np ≤ 10
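The quality of the Poisson approximation is easy to see numerically. The sketch below compares the two pmfs for a case in the "great" zone (parameters are my own choice for illustration):

```python
from math import comb, exp, factorial

# Poisson approximation to Binomial(n=100, p=0.02), with np = 2.
n, p = 100, 0.02

def binom_pmf(x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam=n * p):
    return exp(-lam) * lam**x / factorial(x)

# The two pmfs agree to within about 1% absolute error here.
for x in range(10):
    assert abs(binom_pmf(x) - poisson_pmf(x)) < 0.01
```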
Chapter 6- Continuous Distributions:

Uniform (parameters a, b), a ≤ x ≤ b:
f(x) = 1/(b − a)
F(x) = (x − a)/(b − a)
M_X(t) = (e^{bt} − e^{at}) / [t(b − a)]
E(X) = (a + b)/2, Var(X) = (b − a)²/12

Beta (parameters a, b), 0 ≤ x ≤ 1:
f(x) = [(a + b − 1)! / ((a − 1)!(b − 1)!)] x^{a−1}(1 − x)^{b−1}
F(x): no set formula; MGF: not worth it
E(X) = a/(a + b), Var(X) = ab / [(a + b)²(a + b + 1)]

Weibull (parameters τ, θ):
f(x) = (τ x^{τ−1} / θ^τ) e^{−(x/θ)^τ}
F(x) = 1 − e^{−(x/θ)^τ}
MGF: not worth it
E(X) = θ · (1/τ)!

Pareto (parameters α, θ):
f(x) = α θ^α / (x + θ)^{α+1}
F(x) = 1 − [θ/(x + θ)]^α
E(X) = θ/(α − 1)
E(X^k) = θ^k k! / [(α − 1)(α − 2)⋯(α − k)]

Exponential (parameter μ):
f(x) = (1/μ) e^{−x/μ}
F(x) = 1 − e^{−x/μ}
M_X(t) = 1/(1 − μt)
E(X) = μ, Var(X) = μ²

Gamma (parameters α, θ):
f(x) = x^{α−1} e^{−x/θ} / [θ^α (α − 1)!]
M_X(t) = 1/(1 − θt)^α
E(X) = αθ, Var(X) = αθ²

Normal (parameters μ, σ²):
f(x) = [1/(σ√(2π))] e^{−(x−μ)²/(2σ²)}
F(x): must be calculated with a table of values
M_X(t) = e^{μt + σ²t²/2}
E(X) = μ, Var(X) = σ²

Lognormal (parameters μ, σ², which are NOT the mean and variance):
F(x): must be calculated with a table of values
MGF: N/A
E(X) = e^{μ + σ²/2}
Var(X) = e^{2μ + 2σ²} − e^{2μ + σ²}
Chapter 7- Normal Distribution:
The normal distribution:
Has a complicated pdf and a cdf with no closed form. Use the table of values to look up the cdf for certain values.
The Standard Normal Distribution has μ = 0 and σ = 1.
X ~ N(μ, σ²)
Evaluating the standard normal (Z ~ N(0, 1)):
P(Z ≤ z) = Φ(z)
P(Z < z) = 1 − P(Z > z)
P(Z > z) = P(Z ≤ −z) = Φ(−z) = 1 − Φ(z)
Φ(z) = 1 − Φ(−z)
P(Z ≤ −z) = Φ(−z) = 1 − Φ(z)
P(Z > −z) = P(Z < z) = Φ(z)
*Values in the table are for the STANDARD normal distribution with μ = 0 and σ = 1.*
What if μ = 20 and σ = 15?
X ~ N(20, 225)
P(X < x) = Φ((x − μ)/σ)
P(X < 12) = P(Z < (12 − 20)/15) = P(Z < −0.53) = 1 − Φ(0.53)
P(X > 5) = 1 − P(X < 5) = 1 − P(Z < (5 − 20)/15) = 1 − P(Z < −1) = 1 − (1 − Φ(1)) = Φ(1)
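This standardization can be double-checked without a printed table using Python's built-in statistics.NormalDist (Python 3.8+), an aside not in the original notes:

```python
from statistics import NormalDist

# The worked example: mu = 20, sigma = 15.
X = NormalDist(mu=20, sigma=15)
Z = NormalDist()   # standard normal, mu = 0, sigma = 1

# P(X < 12) = Phi((12 - 20)/15) = Phi(-0.5333...)
assert abs(X.cdf(12) - Z.cdf((12 - 20) / 15)) < 1e-12

# P(X > 5) = 1 - Phi((5 - 20)/15) = 1 - Phi(-1) = Phi(1)
assert abs((1 - X.cdf(5)) - Z.cdf(1.0)) < 1e-12
```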
Adding independent distributions:
X1 ~ N(30, 100), X2 ~ N(40, 150)
Y = X1 + X2
Y ~ N(70, 250)
X ~ N(μ, σ²) and Y = aX + b:
Y ~ N(aμ + b, a²σ²)
Subtraction of distributions:
X1 ~ N(10, 20)
X2 ~ N(30, 10)
Y = X1 − X2
Y ~ N(μ1 − μ2, σ1² + σ2²) ~ N(−20, 30)
The Central Limit Theorem:
X1, X2, …, Xn are independent and identically distributed with mean μ and variance σ². When n is large (n ≥ 30), the sum X1 + X2 + ⋯ + Xn is approximately N(nμ, nσ²).
So even if X1, X2, …, Xn aren't normally distributed, we can use the normal distribution on the sum.
Sample mean:
Z = (1/n)(X1 + X2 + ⋯ + Xn)
Z ~ N(μ, σ²/n)
Z = (1/2)(X1 + X2)
Z ~ N(μ, (σ1² + σ2²)/2²)
Using the normal distribution to estimate a discrete distribution:
A continuous variable can take any value; a discrete variable can take only integers.
P(a ≤ X ≤ b) ≈ P(a − 0.5 < Y < b + 0.5)
If Y follows a normal distribution Y ~ N(μ, σ²), then:
P(a ≤ X ≤ b) ≈ Φ((b + 0.5 − μ)/σ) − Φ((a − 0.5 − μ)/σ)
Binomial:
Y ~ N(np, npq)
Poisson:
Y ~ N(λ, λ)
Lognormal Distribution:
-Parameters μ and σ are not the mean and standard deviation.
-If X follows a lognormal with parameters μ and σ², and Y = ln X, then Y follows a normal distribution with mean μ and standard deviation σ.
-If X follows a normal distribution with mean μ and variance σ², and Y = e^X, then Y follows a lognormal distribution with parameters μ and σ².
Chapter 8- Multivariate Distributions:
Joint pdf:
f_{X,Y}(x, y)
Find P(X = x) by adding across the row X = x in the table.
P(X = x) = f_X(x)
P(Y = y) = f_Y(y)
f_X(x | Y = y) = f_{X,Y}(x, y) / f_Y(y)
f(x, y) = f(y|x) f(x)
Independence:
f_{X,Y}(x, y) = f_X(x) f_Y(y)  (product of the marginal probability functions)
E(XY) = E(X)E(Y)
Joint continuous pdf:
f_{X,Y}(x, y) ≥ 0
Double integral: total probability must equal 1
∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1
Marginal continuous probability functions:
f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy
f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx
E(X) = Σ x f_X(x) = ∫_{−∞}^{∞} x f_X(x) dx
E(X + Y) = E(X) + E(Y)
Moment generating function:
M_{X,Y}(s, t) = E(e^{sX + tY})
E(X | F) = Σ x f_X(x | F)
Var(Y | X) = E(Y² | X) − E(Y | X)²
Covariance:
Cov(X, Y) = E(XY) − E(X)E(Y)
Var(X) = Cov(X, X)
Cov(X, Y) = Cov(Y, X)
Cov(aX + bY, X + Z) = aCov(X, X) + aCov(X, Z) + bCov(Y, X) + bCov(Y, Z)
Independence:
Cov(X, Y) = 0
Correlation Coefficient:
ρ = Cov(X, Y) / (σ_X σ_Y)
Chapter 9- Transformations of Random Variables:
Where X is a random variable, Y = g(X) is a function of X.
Method of Transformations:
f_Y(y) = f_X(g⁻¹(y)) · |[g⁻¹(y)]′|
1. f_X(x) is usually given, along with Y = g(X):
f_X(x) = (1/100) e^{−x/100}
Y = 1.1X
2. Find g⁻¹(y) (i.e., solve for X in the second equation):
Y = 1.1X, so X = Y/1.1
3. Use the formula:
f_Y(y) = f_X(g⁻¹(y)) · |[g⁻¹(y)]′| = [(1/100) e^{−(y/1.1)/100}] · (1/1.1) = (1/110) e^{−y/110}
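The transformed density from step 3 can be checked numerically against the cdf of Y = 1.1X (a quick sketch, not part of the notes):

```python
import math

# If X ~ Exponential(100) and Y = 1.1 X, then f_Y(y) = (1/110) e^{-y/110}.
def f_Y(y):
    return math.exp(-y / 110) / 110

# cdf method: F_Y(y) = P(1.1 X <= y) = 1 - e^{-(y/1.1)/100}
def F_Y(y):
    return 1 - math.exp(-(y / 1.1) / 100)

# The density should match a central finite difference of the cdf.
y, h = 50.0, 1e-6
deriv = (F_Y(y + h) - F_Y(y - h)) / (2 * h)
assert abs(deriv - f_Y(y)) < 1e-8
```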
Method of Distribution Functions:
Use F_Y(y) = P(Y ≤ y) and F_X′(x) = f_X(x).
1. f_X(x) and Y = g(X) are given.
F_Y(y) = P(Y ≤ y)
Substitute Y → P(Y ≤ y) = P(1.1X ≤ y)
Solve for X → P(X ≤ y/1.1)
which means F_X(y/1.1).
2. Use the cdf of X: F_X(x) = 1 − e^{−x/100}, so
F_Y(y) = 1 − e^{−(y/1.1)/100} = 1 − e^{−y/110}
In general: g(y) = f(t(y)) · |dt/dy| = f(t) · t′

Notes on SOA 127 Packet:
Variance of the average of two variables:
Var((X1 + X2)/2) = (1/2²) Var(X1 + X2)
Variance of the average of n variables:
Var((X1 + ⋯ + Xn)/n) = nVar(X)/n² = Var(X)/n
Integration by parts:
∫_a^b u dv = uv |_a^b − ∫_a^b v du

Example 1:
f(x) = x e^x
∫_a^b x e^x dx = ∫ u dv, with u = x, dv = e^x dx, so du = dx, v = e^x
= uv |_a^b − ∫_a^b v du = x e^x |_a^b − ∫_a^b e^x dx

Example 2:
f(x) = x e^{−x/a}
∫_c^d x e^{−x/a} dx = ∫ u dv, with u = x, dv = e^{−x/a} dx, so du = dx, v = −a e^{−x/a}
= uv |_c^d − ∫_c^d v du = −a x e^{−x/a} |_c^d + ∫_c^d a e^{−x/a} dx

Example 3:
f(x) = (x/a) e^{−x/a}
∫_c^d (x/a) e^{−x/a} dx = ∫ u dv, with u = x, dv = (1/a) e^{−x/a} dx, so du = dx, v = −e^{−x/a}
= uv |_c^d − ∫_c^d v du = −x e^{−x/a} |_c^d + ∫_c^d e^{−x/a} dx
Deductibles:
An insurance policy has a deductible of d. If the loss X is less than d, the payout is 0; if X is more than d, the payout is X − d.
Payout = { 0 if x < d;  x − d if x ≥ d }
E(payout) = ∫_d^∞ (x − d) f(x) dx
E(payout²) = ∫_d^∞ (x − d)² f(x) dx
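For exponential losses the expected-payout integral has a simple closed form, E(payout) = μ e^(−d/μ), which a numeric integration confirms (μ and d below are illustrative choices, not from the notes):

```python
import math

# Expected payout with deductible d when losses are Exponential(mu).
mu, d = 100.0, 50.0

def f(x):
    return math.exp(-x / mu) / mu

# Midpoint-rule integration from d, truncated at 40*mu (negligible tail).
n = 400_000
upper = 40 * mu
dx = (upper - d) / n
num = 0.0
for i in range(n):
    x = d + (i + 0.5) * dx
    num += (x - d) * f(x)
num *= dx

closed_form = mu * math.exp(-d / mu)   # memoryless: mu * e^{-d/mu}
assert abs(num - closed_form) < 1e-3
```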