Posterior Regularization for Expectation Maximization
Summary
This is a method to impose constraints on the posteriors in the Expectation Maximization algorithm, allowing finer-grained control over these posteriors.
Method Description
For a given set of observed data <math>x</math>, a set of latent variables <math>z</math> and a set of parameters <math>\theta</math>, the Expectation Maximization algorithm can be viewed as alternating between two maximization steps of the function <math>F(q,\theta)</math>, each maximizing over a different free variable, where <math>q(z|x)</math> is an auxiliary distribution over the latent variables.
The E-step is defined as:

<math>
q^{t+1} = argmax_{q}\ F(q,\theta^t) = argmin_{q}\ KL(q(z|x)\ ||\ p_{\theta^t}(z|x))
</math>
where <math>KL</math> is the Kullback-Leibler divergence, given by

<math>
KL(q\ ||\ p) = E_{q}\left[log\ \frac{q}{p}\right]
</math>
The M-step is defined as:

<math>
\theta^{t+1} = argmax_{\theta}\ F(q^{t+1},\theta) = argmax_{\theta}\ E_X\left[\sum_z q^{t+1}(z|x)\ log\ p_{\theta}(x,z)\right]
</math>
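As a concrete illustration of the two alternating steps, below is a minimal sketch of EM for a toy two-component one-dimensional Gaussian mixture. The model choice and all names (<code>e_step</code>, <code>m_step</code>, <code>em</code>) are assumptions for illustration only, not part of the method described on this page.

<pre>
# Minimal EM sketch for a toy two-component 1-D Gaussian mixture.
# The model and function names are illustrative assumptions only.
import numpy as np

def e_step(x, pi, mu, sigma):
    # E-step: q^{t+1}(z|x) = p_{theta^t}(z|x), the exact model posterior,
    # which attains argmin_q KL(q(z|x) || p_{theta^t}(z|x)).
    resp = np.stack([
        pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
        for k in range(len(pi))
    ], axis=1)
    return resp / resp.sum(axis=1, keepdims=True)

def m_step(x, q):
    # M-step: closed-form argmax_theta of E[sum_z q(z|x) log p_theta(x, z)].
    n_k = q.sum(axis=0)
    pi = n_k / len(x)
    mu = (q * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((q * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return pi, mu, sigma

def em(x, n_iter=50):
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])    # crude initialization
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        q = e_step(x, pi, mu, sigma)     # maximize F over q
        pi, mu, sigma = m_step(x, q)     # maximize F over theta
    return pi, mu, sigma
</pre>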
The goal of this method is to define a way to impose constraints on these posteriors.
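This revision stops short of defining those constraints. In the standard posterior regularization formulation from the literature (e.g., Ganchev et al., 2010), which this page presumably builds toward, the E-step is replaced by a KL projection onto a constraint set; the set <math>Q</math>, the feature functions <math>f</math> and the bounds <math>b</math> below follow that formulation and are not defined on this page:

<math>
q^{t+1} = argmin_{q \in Q}\ KL(q(z|x)\ ||\ p_{\theta^t}(z|x)), \qquad Q = \{q : E_{q}[f(x,z)] \leq b\}
</math>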