Posterior Regularization for Expectation Maximization
Summary
This is a method to impose constraints on the posteriors computed in the Expectation Maximization algorithm, allowing finer-grained control over these posteriors.
Method Description
For a given set of observed data <math>x</math>, a set of latent data <math>z</math>, and a set of parameters <math>\theta</math>, the Expectation Maximization algorithm can be viewed as the alternation between two maximization steps of the function <math>F(q, \theta) = \log p(x|\theta) - D_{KL}(q(z|x)||p(z|x,\theta))</math>, maximizing over a different free variable in each step.
The E-step is defined as:
<math>q^{t+1}(z|x) = \arg\max_q F(q, \theta^t) = \arg\min_q D_{KL}(q(z|x)||p(z|x,\theta^t))</math>

where <math>D_{KL}</math> is the Kullback-Leibler divergence given by <math>D_{KL}(q||p) = E_q[\log \frac{q}{p}]</math>, and <math>q(z|x)</math> is an arbitrary probability distribution over the latent variable <math>z</math>.
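The reason the E-step reduces to a KL minimization can be seen from the standard decomposition of the log-likelihood, written here in the notation above:

<math>\log p(x|\theta) = E_q\left[\log \frac{p(x,z|\theta)}{q(z|x)}\right] + D_{KL}(q(z|x)||p(z|x,\theta)) = F(q, \theta) + D_{KL}(q(z|x)||p(z|x,\theta))</math>

Since <math>\log p(x|\theta)</math> does not depend on <math>q</math>, maximizing <math>F(q, \theta)</math> over <math>q</math> is equivalent to minimizing the KL term, which vanishes exactly when <math>q(z|x) = p(z|x,\theta)</math>.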
The M-step is defined as:

<math>\theta^{t+1} = \arg\max_\theta F(q^{t+1}, \theta) = \arg\max_\theta E_{q^{t+1}}[\log p(x, z|\theta)]</math>
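To make the alternation concrete, the following is a minimal sketch of unconstrained EM for a two-component, unit-variance 1-D Gaussian mixture. The model choice and all names (<code>em</code>, <code>mu</code>, <code>pi</code>, <code>q</code>) are illustrative assumptions, not part of the method described above:

<pre>
# A minimal EM sketch for a 1-D mixture of two unit-variance Gaussians.
# The model and all names here are hypothetical, for illustration only.
import numpy as np

def em(x, n_iters=50):
    mu = np.array([x.min(), x.max()])   # crude initialization of the means
    pi = np.array([0.5, 0.5])           # mixing weights
    for _ in range(n_iters):
        # E-step: set q(z|x) to the exact posterior p(z|x, theta),
        # which maximizes F(q, theta) by driving the KL term to zero.
        log_p = np.log(pi) - 0.5 * (x[:, None] - mu[None, :]) ** 2
        q = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
        # M-step: maximize E_q[log p(x, z | theta)] over theta.
        pi = q.mean(axis=0)
        mu = (q * x[:, None]).sum(axis=0) / q.sum(axis=0)
    return mu, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
print(em(x))   # the estimated means should land near -2 and 2
</pre>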
The goal of this method is to define a way to impose constraints over these posteriors, as sketched below.
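To illustrate what such a constraint can look like, here is a hedged sketch of a posterior-regularized E-step: instead of using the exact posterior, it KL-projects the posterior onto a set <math>Q = \{q : E_q[\phi(x,z)] \leq b\}</math> defined by a single expectation constraint. For one constraint the projection takes the form <math>q(z|x) \propto p(z|x) \exp(-\lambda \phi(x,z))</math> with the multiplier <math>\lambda \geq 0</math> found by a one-dimensional search. The function name <code>project_posteriors</code>, the feature <code>phi</code>, the bound <code>b</code>, and the toy data are all assumptions made for this example:

<pre>
# A sketch of a constrained E-step, assuming a single expectation
# constraint mean_i E_{q_i}[phi] <= b; names and data are hypothetical.
import numpy as np

def project_posteriors(post, phi, b, lam_hi=50.0, tol=1e-6):
    """KL-project each row of post onto {q : mean_i E_{q_i}[phi] <= b}."""
    def reweight(lam):
        w = post * np.exp(-lam * phi)           # q_i(z) proportional to p_i(z) exp(-lam * phi)
        q = w / w.sum(axis=1, keepdims=True)    # renormalize each row
        return q, (q * phi).sum(axis=1).mean()  # average expected feature value
    q, val = reweight(0.0)
    if val <= b:                                # exact posteriors already feasible
        return q
    lo, hi = 0.0, lam_hi                        # bisection for the dual multiplier:
    while hi - lo > tol:                        # E_q[phi] decreases as lam grows
        mid = 0.5 * (lo + hi)
        q, val = reweight(mid)
        if val > b:
            lo = mid
        else:
            hi = mid
    return q

# Toy usage: cap the average posterior mass on component 0 at 0.4.
post = np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]])  # exact posteriors p(z|x_i)
phi = np.tile([1.0, 0.0], (3, 1))                      # phi = indicator of z == 0
print(project_posteriors(post, phi, b=0.4))
</pre>

In this sketch the projected distribution replaces the exact posterior before the M-step, which is otherwise unchanged.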