Siddiqi et al 2009 Reduced-Rank Hidden Markov Models
Revision as of 19:21, 13 October 2011
Citation
Online version
Summary
This paper introduces Reduced-Rank Hidden Markov Models (RR-HMMs). An RR-HMM is similar to a standard HMM, except that the rank of the transition matrix is less than the number of hidden states. The dynamics therefore evolve in a low-dimensional subspace of the hidden state probability space.
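As a toy illustration of the definition above (numbers and variable names are mine, not from the paper), a rank-deficient transition matrix over $m$ states can be built from two stochastic factors of inner dimension $k < m$, so the state-distribution dynamics live in a $k$-dimensional subspace:

```python
# Toy rank-k transition matrix over m states: T = R @ S with column-stochastic
# factors R (m x k) and S (k x m). Columns of T still sum to 1, but rank(T) = k.
import numpy as np

rng = np.random.default_rng(0)
m, k = 4, 2
R = rng.random((m, k)); R /= R.sum(axis=0)   # columns sum to 1
S = rng.random((k, m)); S /= S.sum(axis=0)   # columns sum to 1
T = R @ S                                    # m x m stochastic matrix of rank k
```

Because `1^T R = 1^T` and `1^T S = 1^T`, the product remains a valid (column-)stochastic matrix while its rank is capped at $k$.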
Method
Let $x_t$ be the observed output of the RR-HMM at time $t$, and define the observation-probability matrices
$$[P_1]_i = \Pr[x_1 = i], \qquad [P_{2,1}]_{ij} = \Pr[x_2 = i,\, x_1 = j], \qquad [P_{3,x,1}]_{ij} = \Pr[x_3 = i,\, x_2 = x,\, x_1 = j].$$
The learning algorithm uses a singular value decomposition (SVD) of $P_{2,1}$, the correlation matrix between past and future observations. The algorithm is borrowed from Hsu et al 2009 and requires no change for the reduced-rank case.
Let $\hat{U}$ be the matrix of the top $k$ left singular vectors of $P_{2,1}$, where $k$ is the rank of the reduced state space. The model parameters are
$$\hat{b}_1 = \hat{U}^\top P_1, \qquad \hat{b}_\infty = (P_{2,1}^\top \hat{U})^+ P_1, \qquad \hat{B}_x = (\hat{U}^\top P_{3,x,1})(\hat{U}^\top P_{2,1})^+,$$
where $\hat{b}_1$ is the initial state distribution, $\hat{b}_\infty$ is the final state distribution, and $\hat{B}_x$ is the transition operator when $x$ is observed. Note that $M^+$ denotes the Moore-Penrose pseudo-inverse of the matrix $M$.
Inference can be performed using the model parameters: the joint probability of an observation sequence $x_1, \ldots, x_t$ is
$$\widehat{\Pr}[x_1, \ldots, x_t] = \hat{b}_\infty^\top \hat{B}_{x_t} \cdots \hat{B}_{x_1} \hat{b}_1.$$
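The learning and inference recipe above can be sketched in NumPy. This is a hypothetical sketch, not the authors' code: it uses exact probability matrices computed from a toy rank-$k$ HMM in place of the empirical estimates one would form from data, and all variable names are illustrative.

```python
# Spectral learning sketch in the style of Hsu et al 2009: build P_1, P_{2,1},
# P_{3,x,1} exactly from a toy HMM (standing in for empirical estimates),
# take an SVD of P_{2,1}, and form the observable-operator parameters.
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2                                   # n observation symbols, rank k

# Toy HMM with k hidden states (columns of T and O sum to 1).
T = rng.random((k, k)); T /= T.sum(axis=0)    # hidden-state transition matrix
O = rng.random((n, k)); O /= O.sum(axis=0)    # observation (emission) matrix
pi = rng.random(k);     pi /= pi.sum()        # initial state distribution

# Exact observation-probability matrices.
P1 = O @ pi                                   # P1[i]      = Pr[x1 = i]
P21 = O @ T @ np.diag(pi) @ O.T               # P21[i, j]  = Pr[x2 = i, x1 = j]
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T   # Pr[x3=i, x2=x, x1=j]
        for x in range(n)]

# SVD of the past-future correlation matrix; keep the top-k left singular vectors.
U = np.linalg.svd(P21)[0][:, :k]

# Model parameters (np.linalg.pinv is the Moore-Penrose pseudo-inverse).
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
Bx = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n)]

def seq_prob(seq):
    """Pr[x_1, ..., x_t] = binf^T B_{x_t} ... B_{x_1} b1."""
    b = b1
    for x in seq:
        b = Bx[x] @ b
    return float(binf @ b)
```

With exact matrices, the learned operators reproduce the true sequence probabilities, e.g. `seq_prob([i])` matches `P1[i]` and the probabilities of all length-2 sequences sum to 1; with empirical estimates they are consistent approximations.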
Experimental Results
Related Papers
In progress by User:Jmflanig
Comment
I wrote up these notes about the older Hsu paper, and Siddiqi too, a while ago; they may not be totally correct, but here we go: http://brenocon.com/matrix_hmm.pdf --Brendan 23:21, 13 October 2011 (UTC)