Posterior Regularization for Expectation Maximization
Summary
This is a method for imposing constraints on the posteriors in the Expectation Maximization algorithm, allowing finer-grained control over these posteriors.
Method Description
For a given set <math>x \in X</math> of observed data, a set of latent data <math>z \in Z</math> and a set of parameters <math>\theta</math>, the Expectation Maximization algorithm can be viewed as alternating between two maximization steps of the function <math>F(q,\theta)</math>, each maximizing over a different free variable, where

<math>F(q,\theta) = \log L(\theta|x) - \mathrm{KL}(q(z)\,\|\,p(z|x,\theta)).</math>
The E-step is defined as:

<math>q^{t+1} = \arg\max_{q} F(q,\theta^{t}) = \arg\min_{q} \mathrm{KL}(q(z)\,\|\,p(z|x,\theta^{t}))</math>
where <math>\mathrm{KL}</math> is the Kullback-Leibler divergence, given by

<math>\mathrm{KL}(q\,\|\,p) = \sum_{z} q(z)\,\log\frac{q(z)}{p(z|x,\theta^{t})}.</math>
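As a quick numeric illustration of this definition, the following is a minimal sketch using SciPy's element-wise relative entropy; the two distributions are illustrative placeholders, not values from this page.

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import rel_entr

q = np.array([0.5, 0.3, 0.2])  # a candidate posterior q(z)
p = np.array([0.4, 0.4, 0.2])  # the model posterior p(z|x, theta)

# rel_entr computes q(z) * log(q(z) / p(z)) element-wise;
# summing over z gives KL(q || p).
kl = rel_entr(q, p).sum()
print(kl)
</syntaxhighlight>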
The M-step is defined as:

<math>\theta^{t+1} = \arg\max_{\theta} F(q^{t+1},\theta) = \arg\max_{\theta}\, \mathbb{E}_{q^{t+1}}\!\left[\log p(x,z|\theta)\right].</math>
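As a concrete illustration of these two alternating maximizations, here is a minimal sketch of EM for a two-component binomial (coin) mixture. The model, the synthetic data, and all variable names are illustrative assumptions, not part of the method description above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 40 sequences of 10 coin flips, half from a coin with
# heads probability 0.8 and half from one with heads probability 0.3.
data = np.concatenate([rng.binomial(10, 0.8, 20),
                       rng.binomial(10, 0.3, 20)])

theta = np.array([0.6, 0.4])  # heads probabilities of the two coins
pi = np.array([0.5, 0.5])     # mixing weights

for _ in range(50):
    # E-step: set q(z) = p(z | x, theta), the unconstrained maximizer of
    # F(q, theta) over q, i.e. the minimizer of KL(q(z) || p(z|x, theta)).
    # The binomial coefficient is omitted since it cancels on normalization.
    log_lik = (data[:, None] * np.log(theta)
               + (10 - data[:, None]) * np.log(1 - theta)
               + np.log(pi))
    q = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)

    # M-step: maximize F(q, theta) over theta, which reduces to maximizing
    # the expected complete-data log-likelihood under q.
    pi = q.mean(axis=0)
    theta = (q * data[:, None]).sum(axis=0) / (q * 10).sum(axis=0)

print(pi, theta)
</syntaxhighlight>

Note that the E-step here sets <math>q</math> to the exact posterior, the unconstrained maximizer of <math>F</math>; posterior regularization instead restricts this maximization to a constrained set of distributions, which is how it imposes constraints on the posteriors.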