Probability Distributions
Signals and Systems in Biology
Kushal Shah @ EE, IIT Delhi
Random Variable
- A number assigned to every outcome of an experiment.
- A function whose domain is the set of all experimental outcomes: X : Ω → R
- X is defined on a probability space.
- What is P(X = x)?
- Ex: For a roll of a fair die, P(X = 5) = 1/6 = P(X = 1)
- 0 ≤ P(X = x) ≤ 1 ∀x
- ∑_x P(x) = 1
- Ω can be a discrete or continuous set.
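The fair-die example above can be checked empirically. The short simulation below is my own illustration (not from the slides): it estimates P(X = x) for each face by relative frequency and confirms the estimates sum to 1 and each sits near 1/6.

```python
import random
from collections import Counter

def roll_die_frequencies(n_rolls: int, seed: int = 0) -> dict:
    """Empirically estimate P(X = x) for a fair six-sided die."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

freqs = roll_die_frequencies(100_000)
# The estimates satisfy the two axioms on the slide:
# each P(X = x) lies in [0, 1], and they sum to 1.
```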
Probability Distribution Function
When Ω is a continuous set,
- P(X = x) = 0 ∀x (usually)
- P(x < X < x + dx) = f(x) dx
- f(x) ≡ Probability Distribution Function [PDF], Probability Density Function, or Probability Function
- Cumulative Distribution Function [CDF] or Mass Function:
  F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(x′) dx′
  ⇒ f_X(x) = dF_X(x)/dx
- lim_{x→∞} F_X(x) = 1, i.e. ∫_{−∞}^{∞} f(x) dx = 1
- f(x) ≥ 0 ∀x
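The relation F_X(x) = ∫ f_X can be verified numerically. The sketch below is my own illustration (the slides give no code): it integrates a PDF with the midpoint rule and, taking the exponential density f(u) = λe^(−λu) as a convenient example, compares the result against the known closed-form CDF 1 − e^(−λx).

```python
import math

def cdf_from_pdf(pdf, x: float, lower: float = 0.0, steps: int = 100_000) -> float:
    """Approximate F(x) = ∫_lower^x f(u) du with the midpoint rule."""
    if x <= lower:
        return 0.0
    h = (x - lower) / steps
    return sum(pdf(lower + (i + 0.5) * h) for i in range(steps)) * h

lam = 2.0  # illustrative rate parameter
pdf = lambda u: lam * math.exp(-lam * u)   # exponential density, support u >= 0
F3 = cdf_from_pdf(pdf, 3.0)                # numeric CDF at x = 3
exact = 1.0 - math.exp(-lam * 3.0)         # closed-form CDF for comparison
# Far into the tail, F(x) approaches 1, matching lim F_X(x) = 1.
F_far = cdf_from_pdf(pdf, 20.0)
```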
PDF and CDF
When Ω is a continuous set,
- P(X = x) = 0 ∀x (usually)
- P(x < X < x + dx) = f(x) dx
- F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(x′) dx′
Example of PDF: Schrödinger's equation
ψ(x): Quantum Wave Function

−(ħ²/2m) ∂²ψ/∂x² = [E − V(x)] ψ

m: Mass of the particle
E: Energy of the particle
V(x): Potential energy due to the force field
|ψ(x)|² dx: probability of a particle to be in the region (x, x + dx)
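To see |ψ(x)|² behaving as a PDF, one can take a standard textbook solution, the particle-in-a-box eigenstate ψ_n(x) = √(2/L) sin(nπx/L) (my choice of example, not from the slides), and check that its probability density integrates to 1 over the box:

```python
import math

def box_eigenstate(n: int, L: float):
    """psi_n(x) = sqrt(2/L) sin(n*pi*x/L): particle-in-a-box wave function."""
    return lambda x: math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def total_probability(psi, L: float, steps: int = 200_000) -> float:
    """Integrate |psi(x)|^2 over [0, L] with the midpoint rule; should be 1."""
    h = L / steps
    return sum(psi((i + 0.5) * h) ** 2 for i in range(steps)) * h

L = 1.0
prob = total_probability(box_eigenstate(2, L), L)   # normalization check
```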
Discrete Distributions
Bernoulli Distribution:
P(X = 1) = p
P(X = 0) = q = 1 − p
Binomial Distribution:
P(Y = k) = (n choose k) p^k q^(n−k),  k = 0, 1, 2, ..., n
Ex: No. of forward steps in a random walk
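The binomial formula above translates directly into code. The following sketch (illustrative values n = 10, p = 0.3 are my own) computes the pmf and verifies that it sums to 1 and that its mean comes out to np:

```python
from math import comb

def binomial_pmf(n: int, p: float, k: int) -> float:
    """P(Y = k) = (n choose k) p^k q^(n-k) for a Binomial(n, p) variable."""
    q = 1.0 - p
    return comb(n, k) * p**k * q**(n - k)

n, p = 10, 0.3
pmf = [binomial_pmf(n, p, k) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))   # should equal n*p
```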
Normal (Gaussian) Distribution

f(x) = (1/√(2πσ²)) exp(−(x − µ)² / (2σ²)),  i.e. X ∼ N(µ, σ²)

µ: Mean
σ²: Variance
- Distribution of velocities of molecules
- Central Limit Theorem (CLT)
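As a sanity check on the density formula, the sketch below (my own illustration; the parameter values are arbitrary) integrates f(x) numerically over µ ± 10σ, where essentially all of the probability mass lives, and confirms the total is ≈ 1:

```python
import math

def normal_pdf(x: float, mu: float, sigma2: float) -> float:
    """f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

mu, sigma2 = 1.5, 4.0          # illustrative mean and variance
sigma = math.sqrt(sigma2)
lo, hi, steps = mu - 10 * sigma, mu + 10 * sigma, 200_000
h = (hi - lo) / steps
# Midpoint-rule integral of the density; mass outside +/- 10 sigma is negligible.
total = sum(normal_pdf(lo + (i + 0.5) * h, mu, sigma2) for i in range(steps)) * h
```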
Central Limit Theorem (CLT): Bernoulli Trials
X_i ∈ {0, 1}
P{X_i = 1} = p,  P{X_i = 0} = q = 1 − p
µ = p,  σ² = pq

X̄_n = (X_1 + X_2 + ··· + X_n) / n : fraction of successes in n trials

P(X̄_n = k/n) = b(n, p, k) = (n choose k) p^k q^(n−k)

By CLT, for large n, X̄_n ∼ N(µ, σ²/n):

f_{X̄_n}(x) = (1/√(2πσ²/n)) exp(−(x − µ)² / (2σ²/n))
            = (1/√(2πpq/n)) exp(−(x − p)² / (2pq/n))
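The CLT claim for Bernoulli trials is easy to probe by simulation. The sketch below (my own illustration; n = 400, p = 0.3, and the seed are arbitrary choices) draws many realizations of X̄_n and checks that their mean is near µ = p and their variance near σ²/n = pq/n:

```python
import random

def sample_fraction_of_successes(n: int, p: float, rng: random.Random) -> float:
    """One draw of the fraction X_bar_n = (X_1 + ... + X_n) / n for Bernoulli(p) trials."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(42)
n, p = 400, 0.3
draws = [sample_fraction_of_successes(n, p, rng) for _ in range(2000)]
sample_mean = sum(draws) / len(draws)
sample_var = sum((d - sample_mean) ** 2 for d in draws) / len(draws)
predicted_var = p * (1 - p) / n   # CLT: variance of X_bar_n is pq/n
```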
Exponential Distribution

f(t) = { λe^(−λt),  t ≥ 0
       { 0,         otherwise

1/λ: Mean
1/λ²: Variance
- Arrival times of telephone calls
- Bus arrival times at a bus stop
- Inter-nucleotide distances in DNA sequences
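The stated mean 1/λ and variance 1/λ² can be confirmed by sampling. This sketch is my own illustration (λ = 2 and the seed are arbitrary); Python's `random.expovariate(lam)` draws from exactly the density f(t) = λe^(−λt):

```python
import random

lam = 2.0                      # illustrative rate parameter
rng = random.Random(7)
# expovariate(lam) samples from f(t) = lam * exp(-lam * t), t >= 0.
samples = [rng.expovariate(lam) for _ in range(200_000)]
mean = sum(samples) / len(samples)                          # expect 1/lam
var = sum((t - mean) ** 2 for t in samples) / len(samples)  # expect 1/lam^2
```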
Poisson Distribution

P(X = k) = e^(−λ) λ^k / k!,  k = 0, 1, 2, 3, ...

λ: Mean and Variance
- No. of phone calls at an exchange over a fixed duration of time
- No. of printing errors in a book
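The Poisson pmf and the fact that λ is both its mean and its variance can be verified directly. This sketch (my own illustration; λ = 3.5 is an arbitrary choice) truncates the infinite sum at a point where the remaining tail is negligible:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) = exp(-lam) * lam^k / k!"""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3.5
K = 60   # truncation point; the tail beyond k = 60 is negligible for lam = 3.5
pmf = [poisson_pmf(k, lam) for k in range(K + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))             # expect lam
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))  # expect lam
```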
Poisson and Exponential Distributions
Poisson theorem or Law of rare events:

lim_{n→∞, np=λ} (n choose k) p^k q^(n−k) = e^(−λ) λ^k / k!
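The law of rare events can be checked numerically: hold np = λ fixed, take n large, and compare the Binomial(n, λ/n) pmf against the Poisson(λ) pmf. The sketch below is my own illustration (λ = 2 and n = 10 000 are arbitrary choices):

```python
import math
from math import comb

lam, n = 2.0, 10_000
p = lam / n                    # np = lam held fixed as n grows
q = 1.0 - p

def binom_pmf(k: int) -> float:
    """Binomial(n, p) probability of k successes."""
    return comb(n, k) * p**k * q**(n - k)

def poisson_pmf(k: int) -> float:
    """Poisson(lam) probability of k events."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Maximum pointwise gap between the two pmfs over the bulk of the support.
max_gap = max(abs(binom_pmf(k) - poisson_pmf(k)) for k in range(30))
```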
Markov Models
- Markov Process
  - Markov Property: the current state depends stochastically only on the previous step
  - Ex: random walk or Brownian motion
  - Non-Markovian processes may have a Markovian representation
- Hidden Markov Model
  - Markov process with unobserved (hidden) states
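The Markov property is easiest to see in code. The sketch below is my own illustration (the two-state "weather" chain and its transition probabilities are invented for the example): each step depends only on the current state, and the long-run fraction of time in each state converges to the stationary distribution obtained from πP = π.

```python
import random

# Hypothetical two-state chain: the next state depends only on the current
# one -- the Markov property from the slide.
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def step(state: str, rng: random.Random) -> str:
    """Advance the chain one step using only the current state."""
    return "sunny" if rng.random() < P[state]["sunny"] else "rainy"

rng = random.Random(1)
state, sunny_count, n_steps = "sunny", 0, 200_000
for _ in range(n_steps):
    state = step(state, rng)
    sunny_count += state == "sunny"

# Solving pi P = pi for this chain gives pi(sunny) = 5/6.
empirical = sunny_count / n_steps
```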