Expectation Regularization

This is a method introduced in G. S. Mann and A. McCallum, ICML 2007. It typically serves as a regularization term added to the likelihood function. In practice, humans often have insight into the label prior distribution, and this method introduces a way to take advantage of that prior knowledge.

Let's denote the human-provided prior as <math>\tilde{p}(y)</math> and the model's expected label distribution as <math>\hat{p}(y)</math>, the average of <math>p_{\theta}(y|x^{(u)})</math> over the unlabeled examples <math>x^{(u)}</math>. We minimize the distance between <math>\tilde{p}</math> and <math>\hat{p}</math>. KL-divergence is used here, so the regularization becomes

<math>\Delta(\tilde{p},\hat{p}) = D(\tilde{p}\|\hat{p}) = \sum_{y}\tilde{p}(y)\log\frac{\tilde{p}(y)}{\hat{p}(y)}</math>
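In code, this penalty can be computed directly from the model's per-example posteriors on an unlabeled set. Below is a minimal NumPy sketch, not from the paper; the names kl_regularizer, prior, and model_probs are illustrative.

 import numpy as np
 
 def kl_regularizer(prior, model_probs):
     """KL divergence D(prior || p_hat), where p_hat is the model's
     expected label distribution averaged over the unlabeled examples.
 
     prior       : (K,) human-provided label prior, sums to 1
     model_probs : (U, K) per-example posteriors p_theta(y | x_u)
     """
     p_hat = model_probs.mean(axis=0)   # expected label distribution
     eps = 1e-12                        # guard against log(0)
     return float(np.sum(prior * np.log((prior + eps) / (p_hat + eps))))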

For semi-supervised learning purposes, we can augment the objective function by adding this regularization term. For example, the new conditional likelihood of the data becomes

<math>J(\theta) = \sum_{n}\log p_{\theta}(y^{(n)}|x^{(n)}) - \lambda\,\Delta(\tilde{p},\hat{p})</math>
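As a hedged sketch of this objective, the following function combines the conditional log-likelihood on a labeled batch with the penalty on an unlabeled batch; augmented_objective and lam (the trade-off weight <math>\lambda</math>) are illustrative names, and kl_regularizer is the helper sketched above.

 import numpy as np
 
 def augmented_objective(log_probs, labels, unlabeled_probs, prior, lam=1.0):
     """Conditional log-likelihood on labeled data minus lambda times
     the expectation-regularization penalty on unlabeled data.
 
     log_probs       : (N, K) log p_theta(y | x_n) for labeled examples
     labels          : (N,) gold label indices y_n
     unlabeled_probs : (U, K) posteriors p_theta(y | x_u) for unlabeled data
     """
     ll = log_probs[np.arange(len(labels)), labels].sum()
     return ll - lam * kl_regularizer(prior, unlabeled_probs)

Training then maximizes this quantity over <math>\theta</math>, with <math>\lambda</math> controlling how strongly the model's expected label distribution is pulled toward the human-provided prior.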