Posterior Regularization for Expectation Maximization


== Summary ==

This is a method to impose constraints on posteriors in the [[Expectation Maximization]] algorithm, allowing finer-grained control over these posteriors.

== Method Description ==

For a given set x of observed data, a set of latent data z and a set of parameters <math>\theta</math>, the [[Expectation Maximization]] algorithm can be viewed as the alternation between two maximization steps of the function <math>F(q,\theta)</math>.
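Here <math>F(q,\theta)</math> denotes the usual EM lower bound on the log-likelihood. Its definition is not spelled out in this revision, but the standard free-energy form (supplied here for reference) is:

<math>
F(q,\theta) = E_q[\log p_\theta(x,z)] + H(q) = \log p_\theta(x) - D_{KL}(q(z) \| p_\theta(z|x))
</math>

where <math>H(q)</math> is the entropy of <math>q</math>. This form makes the two steps below immediate: maximizing over <math>q</math> minimizes the KL term, and maximizing over <math>\theta</math> maximizes the expected complete-data log-likelihood.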

The E-step is defined as:

<math>
q^{t+1} = \arg\max_{q} F(q, \theta^t) = \arg\min_{q} D_{KL}(q(z) \| p_{\theta^t}(z|x))
</math>

where <math>D_{KL}</math> is the Kullback-Leibler divergence given by <math>D_{KL}(q||p) = E_q[\log \frac{q}{p}]</math>.
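As a quick numeric illustration of this definition (a minimal sketch; the distributions and function name below are invented for the example, not part of the original page), the divergence of two discrete distributions can be computed directly as the expectation of the log-ratio under <math>q</math>:

<source lang="python">
import numpy as np

def kl_divergence(q, p):
    # D_KL(q || p) = E_q[log(q / p)] for discrete distributions
    # (assumes q and p are strictly positive and sum to 1).
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

print(kl_divergence([0.7, 0.3], [0.5, 0.5]))  # ~0.0823 nats
</source>

Note that <math>D_{KL}</math> is asymmetric and nonnegative, and equals zero exactly when <math>q = p</math>.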

The M-step is defined as:

<math>
\theta^{t+1} = \arg\max_{\theta} F(q^{t+1}, \theta) = \arg\max_{\theta} E_{q^{t+1}}[\log p_{\theta}(x,z)]
</math>
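To make the alternation concrete, here is a minimal sketch for a two-component Gaussian mixture (the model choice, variable names, and data are illustrative assumptions, not part of the original page). The E-step sets <math>q</math> to the exact posterior, which is the minimizer of the KL term above; the M-step maximizes the expected complete-data log-likelihood in closed form:

<source lang="python">
import numpy as np
from scipy.stats import norm

def e_step(x, pi, mu, sigma):
    # E-step: q(z) = p_theta(z|x), the exact posterior, i.e. the q
    # that minimizes D_KL(q(z) || p_theta(z|x)).
    joint = np.stack([pi[k] * norm.pdf(x, mu[k], sigma[k])
                      for k in range(len(pi))], axis=1)
    return joint / joint.sum(axis=1, keepdims=True)

def m_step(x, q):
    # M-step: maximize E_q[log p_theta(x, z)]; for a Gaussian
    # mixture this yields the familiar closed-form updates.
    nk = q.sum(axis=0)
    pi = nk / len(x)
    mu = (q * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((q * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    q = e_step(x, pi, mu, sigma)   # maximize F over q
    pi, mu, sigma = m_step(x, q)   # maximize F over theta
print(pi, mu, sigma)
</source>

Posterior regularization modifies only the E-step of this loop: instead of the unconstrained posterior, <math>q</math> is chosen from a constrained set, so the returned responsibilities would additionally be projected onto that set.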