Siddiqi et al 2009 Reduced-Rank Hidden Markov Models

== Citation ==

== Online version ==

[1]

== Summary ==
This [[Category::paper]] introduces Reduced-Rank Hidden Markov Models (RR-HMMs).  An RR-HMM is similar to a standard [[UsesMethod::HMM]], except that the rank of its transition matrix is less than the number of hidden states.  The dynamics therefore evolve in a low-dimensional subspace of the hidden-state probability space.
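As a concrete illustration (a hypothetical sketch, not the paper's code), a rank-<math>m</math> transition matrix over <math>n</math> hidden states can be built as a product of two smaller stochastic factors; all sizes and names below are made up:

<pre>
import numpy as np

# Sketch of an RR-HMM: n hidden states, but the n x n transition matrix
# T = R @ S has rank at most m < n, so beliefs evolve in an m-dim subspace.
rng = np.random.default_rng(0)
n, m, n_sym = 10, 3, 5                        # states, rank, observation symbols

R = rng.dirichlet(np.ones(n), size=m).T       # n x m, columns are distributions over states
S = rng.dirichlet(np.ones(m), size=n).T       # m x n, columns are distributions over factors
T = R @ S                                     # column-stochastic, rank(T) <= m
O = rng.dirichlet(np.ones(n_sym), size=n).T   # n_sym x n observation probabilities
pi = rng.dirichlet(np.ones(n))                # initial state distribution

def sample(length):
    """Draw an observation sequence x_1, ..., x_T from the model."""
    h, xs = rng.choice(n, p=pi), []
    for _ in range(length):
        xs.append(rng.choice(n_sym, p=O[:, h]))
        h = rng.choice(n, p=T[:, h])
    return xs

assert np.linalg.matrix_rank(T) == m          # reduced rank, full state count
</pre>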
  
 
== Method ==
 
Let <math>x_t</math> be the observed output of the RR-HMM.  Sampling singles, doubles, and triples from this output, let:
  
 
[[File:Siddiqi et al 2009 Definition of P.png]]
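In Hsu et al 2009's notation these are the probabilities of observation singles, doubles, and triples: <math>[P_1]_i = \Pr[x_1 = i]</math>, <math>[P_{2,1}]_{ij} = \Pr[x_2 = i, x_1 = j]</math>, and <math>[P_{3,x,1}]_{ij} = \Pr[x_3 = i, x_2 = x, x_1 = j]</math>.  A minimal sketch of the empirical estimates (illustrative names; the paper's estimator details may differ):

<pre>
import numpy as np

def estimate_statistics(seqs, n_sym):
    """Empirical P_1, P_{2,1}, P_{3,x,1} from observation sequences,
    assuming each sequence has length >= 3."""
    P1 = np.zeros(n_sym)
    P21 = np.zeros((n_sym, n_sym))
    P3x1 = np.zeros((n_sym, n_sym, n_sym))    # indexed [x][i, j]
    for xs in seqs:
        P1[xs[0]] += 1.0
        P21[xs[1], xs[0]] += 1.0              # [P_{2,1}]_{ij} = Pr[x_2=i, x_1=j]
        P3x1[xs[1], xs[2], xs[0]] += 1.0      # [P_{3,x,1}]_{ij} = Pr[x_3=i, x_2=x, x_1=j]
    return P1 / len(seqs), P21 / len(seqs), P3x1 / len(seqs)
</pre>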
 
  
The learning algorithm uses a singular value decomposition (SVD) of the correlation matrix between past and future observations.  The algorithm is borrowed from Hsu et al 2009, with no change for the reduced-rank case.
  
 
[[File:Siddiqi et al 2009 Algorithm.png]]
 
  
<math>\hat{b}_1</math> is the initial state distribution, <math>\hat{b}_\infty</math> is the final state distribution, and <math>\hat{B}_x</math> is the transition matrix when x is observed.  <math>m</math> is the rank of the reduced state space.  Note that <math>X^+</math> denotes the Moore-Penrose pseudo-inverse of the matrix <math>X</math>.
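In code, the SVD step and the resulting parameter estimates look roughly as follows (a sketch of the Hsu et al 2009 formulas <math>\hat{b}_1 = \hat{U}^\top \hat{P}_1</math>, <math>\hat{b}_\infty = (\hat{P}_{2,1}^\top \hat{U})^+ \hat{P}_1</math>, and <math>\hat{B}_x = \hat{U}^\top \hat{P}_{3,x,1} (\hat{U}^\top \hat{P}_{2,1})^+</math>; numerical safeguards omitted):

<pre>
import numpy as np

def spectral_learn(P1, P21, P3x1, m):
    """SVD-based parameter estimation (after Hsu et al 2009).
    Returns b_1, b_inf, and a list with one B_x per symbol."""
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :m]                              # top-m left singular vectors of P_{2,1}
    pinv = np.linalg.pinv(U.T @ P21)          # (U^T P_{2,1})^+
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    Bx = [U.T @ P3x1[x] @ pinv for x in range(P3x1.shape[0])]
    return b1, binf, Bx
</pre>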
  
  
Inference can be performed using the model parameters:

[[File:Siddiqi et al 2009 Inference.png]]
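For example (continuing the sketch above), the probability of a sequence factors through the learned operators as <math>\Pr[x_1, \ldots, x_t] = \hat{b}_\infty^\top \hat{B}_{x_t} \cdots \hat{B}_{x_1} \hat{b}_1</math>, and beliefs can be updated recursively:

<pre>
def seq_probability(xs, b1, binf, Bx):
    """Pr[x_1, ..., x_t] = b_inf^T B_{x_t} ... B_{x_1} b_1."""
    b = b1
    for x in xs:
        b = Bx[x] @ b
    return binf @ b

def filter_step(b, x, binf, Bx):
    """Belief update on observing x: b' = B_x b / (b_inf^T B_x b)."""
    v = Bx[x] @ b
    return v / (binf @ v)
</pre>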
 
== Experimental Result ==
 
[[File:Siddiqi et al 2009 Results.png]]
  
 
== Related Papers ==
 
In progress by [[User:Jmflanig]]
 
== Comment ==
This stuff is super cool but a little tricky to wrap one's head around.  I wrote up these notes about the older Hsu paper, and Siddiqi too, a while ago and they may not be totally correct but here we go: http://brenocon.com/matrix_hmm.pdf
--[[User:Brendan|Brendan]] 23:21, 13 October 2011 (UTC)
