Posterior Regularization for Expectation Maximization


Summary

This is a method to impose constraints on the posteriors in the Expectation Maximization algorithm, allowing finer-grained control over these posteriors.

Method Description

For a given set of observed data x, a set of latent data z and a set of parameters \theta, the Expectation Maximization algorithm can be viewed as the alternation between two maximization steps of the function

F(q, \theta) = \log p(x|\theta) - KL(q(z|x) \,\|\, p(z|x, \theta)),

each step maximizing F over a different free variable.

The E-step is defined as:

q^{t+1} = \arg\max_{q} F(q, \theta^{t}) = \arg\min_{q} KL(q(z|x) \,\|\, p(z|x, \theta^{t}))

where KL is the Kullback-Leibler divergence, given by KL(q \,\|\, p) = \sum_{z} q(z|x) \log \frac{q(z|x)}{p(z|x, \theta)}, and q(z|x) is an arbitrary probability distribution over the latent variable z.

The M-step is defined as:

\theta^{t+1} = \arg\max_{\theta} F(q^{t+1}, \theta) = \arg\max_{\theta} E_{q^{t+1}(z|x)}[\log p(x, z|\theta)]
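
The alternation above can be made concrete with a small sketch. The following is a minimal illustration of the E-step/M-step loop for a one-dimensional mixture of two Gaussians; the synthetic data, variable names (pi_k, mu, sigma, q) and number of iterations are illustrative assumptions, not part of this page's formulation.

```python
# Minimal EM sketch for a 1-D mixture of two Gaussians (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic observed data x drawn from two Gaussian components.
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

# Initial parameters theta = (mixing weights, means, standard deviations).
pi_k = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

def gaussian(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: q(z|x) is the posterior over the latent component z,
    # i.e. the distribution minimizing KL(q || p(z|x, theta)).
    joint = pi_k * gaussian(x[:, None], mu, sigma)      # shape (n, 2)
    q = joint / joint.sum(axis=1, keepdims=True)

    # M-step: maximize E_q[log p(x, z | theta)] over theta (closed form here).
    n_k = q.sum(axis=0)
    pi_k = n_k / len(x)
    mu = (q * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((q * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print("mixing weights:", pi_k)
print("means:", mu)
print("std devs:", sigma)
```
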
The goal of this method is to define a way to impose constraints over the posteriors q(z|x).
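
As a hedged sketch of what a constrained E-step can look like, the code below projects the posteriors onto a set of distributions satisfying a single expectation constraint by minimizing KL(q || p(z|x, theta)) over that set, solving the one-dimensional dual by projected gradient ascent. The feature phi, the bound b, the learning rate and the helper name constrained_e_step are illustrative assumptions, not this page's formulation.

```python
# Sketch of a KL projection of posteriors onto an expectation constraint.
import numpy as np

def constrained_e_step(posteriors, phi, b, lr=0.1, n_steps=500):
    """Project per-instance posteriors onto {q : sum_i E_q[phi(z_i)] <= b}.

    posteriors: (n, K) array, rows are p(z_i | x_i, theta).
    phi:        (K,) array, feature value for each latent value z.
    b:          scalar bound on the expected total feature count.
    Returns the projected posteriors q and the dual variable lam.
    """
    lam = 0.0
    for _ in range(n_steps):
        # q_lam(z_i) is proportional to p(z_i | x_i) * exp(-lam * phi(z_i)).
        q = posteriors * np.exp(-lam * phi)
        q /= q.sum(axis=1, keepdims=True)
        # Projected gradient ascent on the dual: gradient is E_q[phi] - b.
        expected_phi = (q * phi).sum()
        lam = max(0.0, lam + lr * (expected_phi - b))
    return q, lam

# Toy usage: 5 examples, 2 latent values; require that, in expectation,
# at most 1.5 examples take latent value 0.
p = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4], [0.2, 0.8]])
phi = np.array([1.0, 0.0])          # counts assignments to latent value 0
q, lam = constrained_e_step(p, phi, b=1.5)
print("expected count before:", (p * phi).sum())
print("expected count after: ", (q * phi).sum())
```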