The Binomial random variable
A random variable that has the following pmf is said to be a
binomial random variable with parameters n, p:

p(i) = \binom{n}{i} p^{i} (1-p)^{n-i}, \quad i = 0, 1, \ldots, n,

where \binom{n}{i} = \frac{n!}{(n-i)!\, i!}, n is an integer \geq 1, and 0 \leq p \leq 1.
Example: A series of n independent trials is performed, each
having probability p of being a success and 1 - p of being a
failure. Let X be the number of successes in the n trials.
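The pmf above can be evaluated directly; here is a minimal sketch in Python (the parameter values n = 10, p = 0.3 are made up for illustration):

```python
from math import comb

def binomial_pmf(i, n, p):
    # P(X = i) = C(n, i) * p^i * (1 - p)^(n - i)
    return comb(n, i) * p**i * (1 - p)**(n - i)

# The pmf should sum to 1 over i = 0, ..., n.
total = sum(binomial_pmf(i, 10, 0.3) for i in range(11))
```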
The Poisson random variable
A random variable that has the following pmf is said to be a
Poisson random variable with parameter \lambda (\lambda > 0):

p(i) = P\{X = i\} = e^{-\lambda} \frac{\lambda^{i}}{i!}, \quad i = 0, 1, \ldots
Example: The number of cars sold per day by a dealer is
Poisson with parameter l = 2. What is the probability of
selling no cars today? What is the probability of selling 2?
Solution:
P(X = 0) = e^{-2} \approx 0.135
P(X = 2) = e^{-2} (2^{2} / 2!) \approx 0.271
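The same computation can be sketched in Python as a direct transcription of the pmf:

```python
from math import exp, factorial

def poisson_pmf(i, lam):
    # P(X = i) = e^{-lam} * lam^i / i!
    return exp(-lam) * lam**i / factorial(i)

p0 = poisson_pmf(0, 2)  # probability of selling no cars, about 0.135
p2 = poisson_pmf(2, 2)  # probability of selling exactly 2, about 0.271
```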
Continuous random variables
We say that X is a continuous random variable if there exists
a non-negative function f(x), for all real values of x, such that
for any set B of real numbers, we have
P{ X  B}   f ( x)dx.
B
The function f(x) is called the probability density function
(pdf) of the random variable X.
Properties
 P\{X \in (-\infty, \infty)\} = \int_{-\infty}^{\infty} f(x)\,dx = 1.
 P\{a - \varepsilon/2 \leq X \leq a + \varepsilon/2\} = \int_{a-\varepsilon/2}^{a+\varepsilon/2} f(x)\,dx \approx \varepsilon f(a).
 P\{a \leq X \leq b\} = \int_{a}^{b} f(x)\,dx.
 P\{X = a\} = \int_{a}^{a} f(x)\,dx = 0.
 F(a) = P(X \leq a) = \int_{-\infty}^{a} f(x)\,dx, so \frac{dF(a)}{da} = f(a).
The Uniform random variable
A random variable that has the following pdf is said to be a
uniform random variable over the interval (a, b):

f(x) = \begin{cases} \frac{1}{b-a} & \text{if } a < x < b \\ 0 & \text{otherwise.} \end{cases}
The cumulative distribution function of the uniform random
variable over (a, b) is

F(x) = \begin{cases} 0 & \text{if } x \leq a \\ \frac{x-a}{b-a} & \text{if } a < x < b \\ 1 & \text{if } x \geq b. \end{cases}
The Exponential random variable
A random variable that has the following pdf is said to be an
exponential random variable with parameter \lambda > 0:

f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x \geq 0 \\ 0 & \text{if } x < 0. \end{cases}
The cumulative distribution function of the exponential random
variable is

F(x) = \int_{0}^{x} \lambda e^{-\lambda t}\,dt = 1 - e^{-\lambda x}, \quad x \geq 0.
The Gamma random variable
A random variable that has the following pdf is said to be a
gamma random variable with parameters \alpha, \lambda > 0:

f(x) = \begin{cases} \frac{\lambda e^{-\lambda x} (\lambda x)^{\alpha-1}}{\Gamma(\alpha)} & \text{if } x \geq 0 \\ 0 & \text{if } x < 0, \end{cases}

where \Gamma(\alpha) = \int_{0}^{\infty} e^{-x} x^{\alpha-1}\,dx.
Note: for integer n, \Gamma(n) = (n-1)!
The Normal random variable
A random variable that has the following pdf is said to be a
normal random variable with parameters \mu, \sigma^2:

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2 / 2\sigma^2}, \quad -\infty < x < \infty.

Note: The distribution with parameters \mu = 0 and \sigma = 1
is called the standard normal distribution.
Expectation of a random variable
If X is a discrete random variable with pmf p(x), then the
expected value of X is defined by

E[X] = \sum_{x} x\,p(x) = \sum_{x} x\,P(X = x).

Example: p(1) = 0.2, p(3) = 0.3, p(5) = 0.2, p(7) = 0.3
 E[X] = 0.2(1) + 0.3(3) + 0.2(5) + 0.3(7) = 0.2 + 0.9 + 1 + 2.1 = 4.2
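The example above can be checked directly in Python:

```python
# pmf from the example: p(1) = 0.2, p(3) = 0.3, p(5) = 0.2, p(7) = 0.3
pmf = {1: 0.2, 3: 0.3, 5: 0.2, 7: 0.3}
ev = sum(x * p for x, p in pmf.items())  # E[X] = sum_x x * p(x), about 4.2
```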
If X is a continuous random variable with pdf f(x), then the
expected value of X is defined by

E[X] = \int_{-\infty}^{\infty} x f(x)\,dx.
Expectation of a Bernoulli random variable
E[X] = 0(1 - p) + 1(p) = p
Expectation of a geometric random variable

E[X] = \sum_{n=1}^{\infty} n\,p(n) = \sum_{n=1}^{\infty} n (1-p)^{n-1} p = \frac{1}{p}
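The series converges to 1/p, which a truncated partial sum makes visible (p = 0.25 and the truncation length are arbitrary choices for illustration):

```python
def geometric_mean_partial(p, terms=10000):
    # Partial sum of sum_{n>=1} n * (1 - p)^(n - 1) * p, which tends to 1/p.
    return sum(n * (1 - p)**(n - 1) * p for n in range(1, terms + 1))

approx = geometric_mean_partial(0.25)  # close to 1/0.25 = 4
```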
Expectation of a binomial random variable

E[X] = \sum_{i=0}^{n} i\,p(i) = \sum_{i=0}^{n} i \binom{n}{i} p^{i} (1-p)^{n-i} = np
Expectation of a Poisson random variable

E[X] = \sum_{i=0}^{\infty} i\,p(i) = \sum_{i=0}^{\infty} i\,e^{-\lambda} \frac{\lambda^{i}}{i!} = \lambda
Expectation of a uniform random variable

E[X] = \int_{a}^{b} \frac{x}{b-a}\,dx = \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2}
Expectation of a normal random variable

E[X] = \int_{-\infty}^{\infty} x\,\frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2 / 2\sigma^2}\,dx = \mu
Expectation of an exponential random variable

E[X] = \int_{0}^{\infty} x\,\lambda e^{-\lambda x}\,dx = \frac{1}{\lambda}
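As a numerical sanity check of this integral, a midpoint-rule quadrature sketch (the rate \lambda = 2 and the truncation at x = 50 are arbitrary; the tail beyond the cutoff is negligible):

```python
from math import exp

def exponential_mean_numeric(lam, upper=50.0, steps=200000):
    # Midpoint-rule approximation of integral_0^upper x * lam * e^{-lam*x} dx.
    h = upper / steps
    return sum((k + 0.5) * h * lam * exp(-lam * (k + 0.5) * h) * h
               for k in range(steps))

approx = exponential_mean_numeric(2.0)  # close to 1/2
```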
Expectation of a function of a random variable
(1) If X is a discrete random variable with pmf p(x), then for
any real-valued function g,

E[g(X)] = \sum_{x} g(x)\,p(x) = \sum_{x} g(x)\,P(X = x).

(2) If X is a continuous random variable with pdf f(x), then
for any real-valued function g,

E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx.
Note: for Y = g(X), P(Y = g(x)) = P(X = x).
If a and b are constants, then E[aX + b] = aE[X] + b.
The expected value E[X^n] is called the nth moment of the
random variable X.
The expected value E[(X - E[X])^2] is called the variance of the
random variable X and is denoted by Var(X):
 Var(X) = E[X^2] - (E[X])^2
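Using the earlier discrete example pmf, the variance formula can be applied directly:

```python
# pmf from the earlier example: p(1) = 0.2, p(3) = 0.3, p(5) = 0.2, p(7) = 0.3
pmf = {1: 0.2, 3: 0.3, 5: 0.2, 7: 0.3}
ex = sum(x * p for x, p in pmf.items())      # E[X], about 4.2
ex2 = sum(x**2 * p for x, p in pmf.items())  # E[X^2], about 22.6
var = ex2 - ex**2                            # Var(X) = E[X^2] - (E[X])^2, about 4.96
```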
Jointly distributed random variables
Let X and Y be two random variables. The joint cumulative
probability distribution of X and Y is defined as

F(a, b) = P\{X \leq a, Y \leq b\}, \quad -\infty < a, b < \infty.

The marginal distributions are
F_X(a) = P\{X \leq a, Y \leq \infty\} = F(a, \infty)
F_Y(b) = P\{X \leq \infty, Y \leq b\} = F(\infty, b)
If X and Y are both discrete random variables, the joint pmf
of X and Y is defined as

p(x, y) = P\{X = x, Y = y\}

with marginal pmfs
p_X(x) = \sum_{y} p(x, y), \qquad p_Y(y) = \sum_{x} p(x, y).
If X and Y are continuous random variables, X and Y are said
to be jointly continuous if there exists a function f(x, y) such
that, for any sets A and B of real numbers,

P\{X \in A, Y \in B\} = \int_{B} \int_{A} f(x, y)\,dx\,dy.

In particular,

P\{X \in A\} = P\{X \in A, Y \in (-\infty, \infty)\} = \int_{A} \int_{-\infty}^{\infty} f(x, y)\,dy\,dx = \int_{A} f_X(x)\,dx,

where the marginal pdfs are

f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy, \qquad f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx.
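The marginal pmf formulas translate directly into code; the joint pmf values below are made up for illustration:

```python
# Hypothetical joint pmf p(x, y), stored as {(x, y): probability}.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def marginal_x(joint):
    # p_X(x) = sum over y of p(x, y)
    px = {}
    for (x, _y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    return px

px = marginal_x(joint)  # p_X(0) about 0.3, p_X(1) about 0.7
```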