Posterior Regularization for Expectation Maximization


Summary

This is a method for imposing constraints on the posteriors computed in the Expectation Maximization algorithm, allowing a finer level of control over these posteriors.

Method Description

For a given set of observed data <math>x</math>, a set of latent data <math>z</math> and a set of parameters <math>\theta</math>, the Expectation Maximization algorithm can be viewed as alternating between two maximization steps of a single function <math>F(q,\theta)</math>, each step maximizing over a different free variable.
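Here <math>F</math> is the usual EM lower bound on the log-likelihood; the decomposition below is a standard identity, written out here in the notation of the steps that follow:

<math>
F(q,\theta) = log\ L(\theta|x) - KL(q(z)\ ||\ p_{\theta}(z|x))
</math>

Maximizing <math>F</math> over <math>q</math> with <math>\theta</math> fixed yields the E-step; maximizing over <math>\theta</math> with <math>q</math> fixed yields the M-step.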

The E-step is defined as:

<math>
q^{t+1} = argmax_{q} F(q,\theta^t) = argmin_{q} KL(q(z)\ ||\ p_{\theta^t}(z|x))
</math>

where <math>KL(q\ ||\ p)</math> is the Kullback-Leibler divergence given by

<math>
KL(q\ ||\ p) = \sum_{z} q(z)\ log \frac{q(z)}{p(z)}
</math>

Since the KL divergence is non-negative and zero only when <math>q = p</math>, the unconstrained E-step simply sets <math>q^{t+1}</math> to the model posterior <math>p_{\theta^t}(z|x)</math>.
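To make the E-step objective concrete, here is a minimal sketch that computes <math>KL(q\ ||\ p)</math> for discrete distributions; the function name and the example vectors are illustrative, not part of the original page:

<pre>
import numpy as np

def kl_divergence(q, p):
    """KL(q || p) for discrete distributions given as probability vectors."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    mask = q > 0  # terms with q(z) = 0 contribute 0 by convention
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# KL is zero exactly when q matches p, so the unconstrained E-step
# recovers the true posterior p(z | x, theta).
posterior = np.array([0.7, 0.3])
print(kl_divergence(posterior, posterior))   # 0.0
print(kl_divergence([0.5, 0.5], posterior))  # > 0
</pre>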

The M-step is defined as:

<math>
\theta^{t+1} = argmax_{\theta} F(q^{t+1},\theta) = argmax_{\theta} E_{q^{t+1}}[log\ p_{\theta}(x,z)]
</math>
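As an end-to-end illustration of the two steps, the following is a minimal sketch of standard (unconstrained) EM for a two-component Gaussian mixture; the model choice, function names and synthetic data are assumptions made for the example, not part of the method description above:

<pre>
import numpy as np

def e_step(x, pi, mu, sigma):
    """E-step: set q(z) to the posterior p(z | x, theta), which maximizes
    F(q, theta) by driving KL(q || p(z|x)) to zero."""
    # Unnormalized joint p(x, z) per component (constants cancel on normalizing).
    joint = np.stack([
        pi_k * np.exp(-0.5 * ((x - mu_k) / sig_k) ** 2) / sig_k
        for pi_k, mu_k, sig_k in zip(pi, mu, sigma)
    ], axis=1)
    return joint / joint.sum(axis=1, keepdims=True)  # responsibilities q(z | x_i)

def m_step(x, q):
    """M-step: maximize E_q[log p(x, z | theta)], which has a closed form here."""
    n_k = q.sum(axis=0)
    pi = n_k / len(x)
    mu = (q * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((q * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    q = e_step(x, pi, mu, sigma)   # q^{t+1}     = argmax_q F(q, theta^t)
    pi, mu, sigma = m_step(x, q)   # theta^{t+1} = argmax_theta F(q^{t+1}, theta)
print(pi, mu, sigma)
</pre>

Posterior regularization modifies only the E-step of this loop, restricting <math>q</math> to a constrained set of distributions rather than allowing it to equal the exact posterior.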