Hidden Markov Models
Adapted from Dr Catherine Sweeney-Reed's slides
Summary
• Introduction
• Description
• Central problems in HMM modelling
• Extensions
• Demonstration
Description
Specification of an HMM
• N – number of states
• Q = {q1, q2, …, qT} – state sequence
• M – number of symbols (observables)
• O = {o1, o2, …, oT} – observation sequence
Description
Specification of an HMM
• A – the state transition probability matrix
  aij = P(qt+1 = j | qt = i)
• B – the observation probability distribution
  bj(k) = P(ot = k | qt = j),  1 ≤ k ≤ M
• π – the initial state distribution
Description
The full HMM is thus specified as a triplet:

λ = (A, B, π)
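To make the notation concrete, here is a minimal sketch of such a triplet in Python/NumPy. The 2-state, 3-symbol model and all the numbers are made up for illustration; they are not from the original slides.

import numpy as np

# Hypothetical 2-state, 3-symbol HMM, lambda = (A, B, pi)
A = np.array([[0.7, 0.3],       # a_ij = P(q_{t+1} = j | q_t = i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],  # b_j(k) = P(o_t = k | q_t = j)
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])       # initial state distribution

N = A.shape[0]   # number of states
M = B.shape[1]   # number of symbols (observables)

The later sketches in this transcript reuse these arrays and this import.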
Central problems in HMM modelling

Problem 1 – Evaluation:
• Probability of occurrence of a particular observation sequence, O = {o1,…,ok}, given the model: P(O|λ)
• Complicated – hidden states
• Useful in sequence classification
Central problems in HMM modelling

Problem 2 – Decoding:
• Optimal state sequence to produce the given observations, O = {o1,…,ok}, given the model
• Requires an optimality criterion
• Useful in recognition problems
Central problems in HMM modelling

Problem 3 – Learning:
• Determine the optimum model, given a training set of observations
• Find λ such that P(O|λ) is maximal
Problem 1: Naïve solution

State sequence Q = (q1,…,qT)
Assume independent observations:

P(O \mid q, \lambda) = \prod_{t=1}^{T} P(o_t \mid q_t, \lambda) = b_{q_1}(o_1)\, b_{q_2}(o_2) \cdots b_{q_T}(o_T)

NB: Observations are mutually independent, given the hidden states. (The joint distribution of independent variables factorises into the marginal distributions of the independent variables.)
Problem 1: Naïve solution

Observe that:

P(q \mid \lambda) = \pi_{q_1}\, a_{q_1 q_2}\, a_{q_2 q_3} \cdots a_{q_{T-1} q_T}

And that:

P(O \mid \lambda) = \sum_{q} P(O \mid q, \lambda)\, P(q \mid \lambda)
Problem 1: Naïve solution

Finally we get:

P(O \mid \lambda) = \sum_{q} P(O \mid q, \lambda)\, P(q \mid \lambda)

NB:
– The above sum is over all state paths
– There are N^T state paths, each ‘costing’ O(T) calculations, leading to O(T N^T) time complexity.
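To make that cost concrete, here is a brute-force sketch of the naïve evaluation that enumerates all N^T state paths (it assumes the A, B, pi arrays from the earlier sketch; purely illustrative):

from itertools import product

def naive_evaluation(obs, A, B, pi):
    """P(O|lambda) by summing over all N**T state paths -- O(T * N**T)."""
    N, T = A.shape[0], len(obs)
    total = 0.0
    for path in product(range(N), repeat=T):       # every possible state path q
        p = pi[path[0]] * B[path[0], obs[0]]       # pi_{q1} * b_{q1}(o1)
        for t in range(1, T):
            p *= A[path[t-1], path[t]] * B[path[t], obs[t]]
        total += p
    return total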
Problem 1: Efficient solution

Forward algorithm:
Define the auxiliary forward variable α:

\alpha_t(i) = P(o_1, \ldots, o_t,\, q_t = i \mid \lambda)

αt(i) is the probability of observing the partial sequence of observables o1,…,ot and being in state i at time t.
Problem 1: Efficient solution

Recursive algorithm:
• Initialise:

\alpha_1(i) = \pi_i\, b_i(o_1)

• Calculate:

\alpha_{t+1}(j) = \Big[ \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \Big]\, b_j(o_{t+1})

(αt(i) incorporates the partial obs seq to t AND state i at t; this is multiplied by the transition to j at t+1 and by the observation term. The sum arises because j can be reached from any preceding state.)

• Obtain:

P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)

(the sum over the different ways of getting the obs seq)

Complexity is O(N²T)
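A minimal sketch of the forward pass, assuming the NumPy arrays from the earlier sketch (no scaling is applied here; see the practical issues under Problem 3):

def forward(obs, A, B, pi):
    """Forward algorithm: returns alpha (T x N) and P(O|lambda) in O(N^2 T)."""
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                 # alpha_1(i) = pi_i * b_i(o_1)
    for t in range(1, T):
        # alpha_{t+1}(j) = [ sum_i alpha_t(i) a_ij ] * b_j(o_{t+1})
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
    return alpha, alpha[-1].sum()                # P(O|lambda) = sum_i alpha_T(i)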
Problem 1: Alternative solution

Backward algorithm:
Define the auxiliary backward variable β:

\beta_t(i) = P(o_{t+1}, o_{t+2}, \ldots, o_T \mid q_t = i, \lambda)

βt(i) – the probability of observing the sequence of observables ot+1,…,oT, given state qt = i at time t, and λ
Problem 1: Alternative solution

Recursive algorithm:
• Initialise:

\beta_T(j) = 1

• Calculate:

\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j), \qquad t = T-1, \ldots, 1

• Terminate:

P(O \mid \lambda) = \sum_{i=1}^{N} \pi_i\, b_i(o_1)\, \beta_1(i)

Complexity is O(N²T)
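A corresponding sketch of the backward pass, under the same assumptions as the forward sketch (the final line combines β1 with π and b to recover P(O|λ)):

def backward(obs, A, B, pi):
    """Backward algorithm: returns beta (T x N) and P(O|lambda) in O(N^2 T)."""
    N, T = A.shape[0], len(obs)
    beta = np.zeros((T, N))
    beta[-1] = 1.0                                    # beta_T(j) = 1
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
    return beta, (pi * B[:, obs[0]] * beta[0]).sum()  # P(O|lambda)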
Problem 2: Decoding

Choose the state sequence that maximises the probability of the observation sequence.
• Viterbi algorithm – an inductive algorithm that keeps the best state sequence at each time instant
Problem 2: Decoding

Viterbi algorithm:
• Find the state sequence that maximises P(O, Q | λ), equivalently maximising P(q1, q2,…,qT | O, λ)
• Define the auxiliary variable δ:

\delta_t(i) = \max_{q_1, \ldots, q_{t-1}} P(q_1, q_2, \ldots, q_{t-1},\, q_t = i,\, o_1, o_2, \ldots, o_t \mid \lambda)

δt(i) – the probability of the most probable path ending in state qt = i
Problem 2: Decoding

Recurrent property:

\delta_{t+1}(j) = \max_{i} \big( \delta_t(i)\, a_{ij} \big)\, b_j(o_{t+1})

To retrieve the state sequence, we need to keep track of the argument that maximises this, for each t and j. This is done via the array ψt(j).

Algorithm:
1. Initialise:

\delta_1(i) = \pi_i\, b_i(o_1), \qquad \psi_1(i) = 0, \qquad 1 \le i \le N
Problem 2: Decoding

2. Recursion:

\delta_t(j) = \max_{1 \le i \le N} \big( \delta_{t-1}(i)\, a_{ij} \big)\, b_j(o_t)
\psi_t(j) = \arg\max_{1 \le i \le N} \big( \delta_{t-1}(i)\, a_{ij} \big), \qquad 2 \le t \le T,\ 1 \le j \le N

3. Terminate:

P^{*} = \max_{1 \le i \le N} \delta_T(i)
q_T^{*} = \arg\max_{1 \le i \le N} \delta_T(i)

P* gives the state-optimised probability; Q* = {q1*, q2*, …, qT*} is the optimal state sequence.
Problem 2: Decoding

4. Backtrack the state sequence:

q_t^{*} = \psi_{t+1}(q_{t+1}^{*}), \qquad t = T-1, T-2, \ldots, 1

O(N²T) time complexity
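A sketch of the Viterbi recursion and backtracking, again assuming the arrays from the earlier sketch (illustrative only):

def viterbi(obs, A, B, pi):
    """Viterbi algorithm: most probable state path and its probability, O(N^2 T)."""
    N, T = A.shape[0], len(obs)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]                      # delta_1(i) = pi_i * b_i(o_1)
    for t in range(1, T):
        scores = delta[t-1][:, None] * A              # delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)                # psi_t(j)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]  # delta_t(j)
    # Terminate, then backtrack the state sequence
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()                     # q_T*
    for t in range(T - 2, -1, -1):
        path[t] = psi[t+1, path[t+1]]                 # q_t* = psi_{t+1}(q_{t+1}*)
    return path, delta[-1].max()                      # Q*, P*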
Problem 3: Learning

Train the HMM to encode the obs seq such that the HMM will identify a similar obs seq in future.
Find λ = (A, B, π) maximising P(O|λ).

General algorithm:
1. Initialise: λ0
2. Compute a new model λ, using λ0 and the observed sequence O
3. Set λ0 ← λ
Repeat steps 2 and 3 until:

\log P(O \mid \lambda) - \log P(O \mid \lambda_0) < d
Problem 3: Learning

Step 1 of the Baum-Welch algorithm:
Let ξt(i,j) be the probability of being in state i at time t and in state j at time t+1, given λ and the O seq:

\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{P(O \mid \lambda)}
           = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}
Problem 3: Learning

(Figure: operations required for the computation of the joint event that the system is in state Si at time t and in state Sj at time t+1)
Problem 3: Learning

Let γt(i) be the probability of being in state i at time t, given O:

\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j)

\sum_{t=1}^{T-1} \gamma_t(i) – expected no. of transitions from state i

\sum_{t=1}^{T-1} \xi_t(i,j) – expected no. of transitions from i to j
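A sketch of this step (the E-step) that computes ξ and γ from the forward and backward sketches above (names and structure are assumptions for illustration):

def e_step(obs, A, B, pi):
    """Compute xi (T-1 x N x N) and gamma (T-1 x N) from alpha and beta."""
    alpha, _ = forward(obs, A, B, pi)
    beta, _ = backward(obs, A, B, pi)
    T, N = len(obs), A.shape[0]
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        # xi_t(i,j) = alpha_t(i) a_ij b_j(o_{t+1}) beta_{t+1}(j) / P(O|lambda)
        xi[t] = alpha[t][:, None] * A * B[:, obs[t+1]] * beta[t+1]
        xi[t] /= xi[t].sum()
    gamma = xi.sum(axis=2)        # gamma_t(i) = sum_j xi_t(i,j)
    return xi, gamma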
Problem 3: Learning

Step 2 of the Baum-Welch algorithm:

\hat{\pi}_i = \gamma_1(i)

– the expected frequency of state i at time t = 1

\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}

– ratio of the expected no. of transitions from state i to j over the expected no. of transitions from state i

\hat{b}_j(k) = \frac{\sum_{t:\, o_t = k} \gamma_t(j)}{\sum_{t} \gamma_t(j)}

– ratio of the expected no. of times in state j observing symbol k over the expected no. of times in state j
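A corresponding sketch of the re-estimation step (the M-step), using the ξ and γ from the E-step sketch above; following the slides' definition, γ here runs over t = 1,…,T−1:

def m_step(obs, xi, gamma, M):
    """Re-estimate (A, B, pi) from the expected counts of the E-step."""
    N = gamma.shape[1]
    pi_hat = gamma[0]                                   # pi_hat_i = gamma_1(i)
    # a_hat_ij = sum_t xi_t(i,j) / sum_t gamma_t(i)
    A_hat = xi.sum(axis=0) / gamma.sum(axis=0)[:, None]
    # b_hat_j(k) = sum_{t: o_t = k} gamma_t(j) / sum_t gamma_t(j)
    B_hat = np.zeros((N, M))
    for t, o in enumerate(obs[:len(gamma)]):            # gamma covers t = 1..T-1
        B_hat[:, o] += gamma[t]
    B_hat /= gamma.sum(axis=0)[:, None]
    return A_hat, B_hat, pi_hat

Iterating e_step and m_step, and stopping once the improvement in log P(O|λ) falls below the threshold d, realises the general algorithm described earlier.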
Problem 3: Learning

The Baum-Welch algorithm uses the forward and backward algorithms to calculate the auxiliary variables α, β.
The B-W algorithm is a special case of the EM algorithm:
• E-step: calculation of ξ and γ
• M-step: iterative calculation of π̂, âij, b̂j(k)

Practical issues:
• Can get stuck in local maxima
• Numerical problems – logs and scaling
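One standard remedy for the numerical problems, sketched below, is to rescale α at every time step and accumulate the logarithms of the scaling factors (an illustration of the scaling idea; the slides do not specify a particular scheme):

def forward_log_likelihood(obs, A, B, pi):
    """Scaled forward pass: returns log P(O|lambda) without underflow."""
    alpha = pi * B[:, obs[0]]
    log_prob = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()           # scaling factor c_t
        log_prob += np.log(c)     # log P(O|lambda) = sum_t log c_t
        alpha = alpha / c         # normalise so alpha sums to 1
    return log_prob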
Extensions

Problem-specific:
• Left-to-right HMM (speech recognition)
• Profile HMM (bioinformatics)

General machine learning:
• Factorial HMM
• Coupled HMM
• Hierarchical HMM
• Input-output HMM
• Switching state systems
• Hybrid HMM (HMM + NN)
• Special case of graphical models
  • Bayesian nets
  • Dynamic Bayesian nets
Extensions – Examples

(Figures: Coupled HMM and Factorial HMM)
Demonstrations

HMMs – Sleep Staging
• Flexer, Sykacek, Rezek, and Dorffner (2000)
• Observation sequence: EEG data
• Fit the model to the data according to 3 sleep stages, to produce continuous probabilities: P(wake), P(deep), and P(REM)
• Hidden states correspond with recognised sleep stages; 3 continuous probability plots give the P of each stage at every second
HMMs – Sleep Staging

(Figure: manual scoring of sleep stages vs. staging by the HMM, with probability plots for the 3 stages)
Demonstrations – Excel
• Demonstration of a working HMM implemented in Excel
Further Reading

• L. R. Rabiner, "A tutorial on Hidden Markov Models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, pp. 257-286, 1989.
• R. Dugad and U. B. Desai, "A tutorial on Hidden Markov models," Signal Processing and Artificial Neural Networks Laboratory, Dept. of Electrical Engineering, Indian Institute of Technology, Bombay, Technical Report No. SPANN-96.1, 1996.
• W. H. Laverty, M. J. Miket, and I. W. Kelly, "Simulation of Hidden Markov Models with EXCEL," The Statistician, vol. 51, Part 1, pp. 31-40, 2002.