Posterior Regularization for Expectation Maximization
== Summary ==
This is a [[Category::method]] to impose constraints on posteriors in the [[AddressesProblem::Expectation Maximization]] algorithm, allowing finer-grained control over these posteriors.

== Method Description ==
For a given set of observed data <math>x</math>, a set of latent data <math>z</math>, and a set of parameters <math>\theta</math>, the [[Expectation Maximization]] algorithm can be viewed as alternating between two maximization steps, where the E-step is defined as:

<math>
q^{t+1} = \arg\max_{q} F(q,\theta^t) = \arg\max_{q} -D_{KL}(q \,||\, p_{\theta^t}(z|x))
</math>
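Since the KL divergence is non-negative and equals zero only when its two arguments coincide, the unconstrained E-step above is solved exactly by the model posterior (a standard EM identity, stated here for context, using only the symbols already defined above):

<math>
q^{t+1}(z) = p_{\theta^t}(z|x)
</math>

Posterior regularization modifies this step by restricting <math>q</math> to a constraint set of allowed posteriors, so the E-step instead returns the member of that set closest in KL divergence to <math>p_{\theta^t}(z|x)</math>.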