== Original Paper ==
[http://www.seas.upenn.edu/~kuzman/publications/graca_nips_2007.pdf pdf]
 
== Summary ==
 
The [[Expectation Maximization]] algorithm is a method for finding maximum likelihood estimates of the parameters of a statistical model. During the E-step of this algorithm, posterior probabilities over the latent data are computed while the parameters are held fixed.
  
In many fields, prior knowledge about these posterior probabilities is available and could be used to improve the statistical model, yet it is not clear how to incorporate such knowledge into EM efficiently.
  
Posterior Regularization is a [[Category::method]] used to impose such constraints on posteriors in the [[AddressesProblem::Expectation Maximization]] algorithm, allowing finer-grained control over these posteriors.
  
 
== Method Description ==
 
For a given set of observed data <math>x</math>, a set of latent data <math>z</math> and a set of parameters <math>\theta</math>, the Expectation Maximization algorithm can be viewed as the alternation between two maximization steps of a function <math>F(q,\theta)</math>, each maximizing over a different free variable.

The E-step is defined as:

<math>
q^{t+1} = \arg\max_{q} F(q,\theta^t) = \arg\min_{q} KL(q(z|x) \| p_{\theta^t}(z|x))
</math>

where <math>KL</math> is the Kullback-Leibler divergence given by <math>KL(q \| p) = E_q\left[\log \frac{q(z|x)}{p(z|x)}\right]</math>, <math>q(z|x)</math> is an arbitrary probability distribution over the latent variable <math>z</math>, and <math>p_{\theta^t}(z|x)</math> is the posterior probability of <math>z</math> for the fixed parameters <math>\theta^t</math>.

The new <math>q^{t+1}</math> is then used in the M-step, which is defined as:

<math>
\theta^{t+1} = \arg\max_{\theta} F(q^{t+1},\theta) = \arg\max_{\theta} E_{q^{t+1}}\left[\log p_{\theta}(x,z)\right]
</math>
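As an illustration (not part of the original paper), this alternation can be sketched for a toy mixture of two biased coins; the data, initial values and iteration count below are assumptions made for the example:

<pre>
# Minimal EM sketch for a toy mixture of two biased coins (illustrative only).
import numpy as np

# Each row is one sequence of 10 coin flips; the coin identity z is hidden.
X = np.array([[1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 0, 0, 0, 1],
              [1, 0, 1, 1, 1, 0, 1, 1, 1, 0],
              [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]])
heads, n = X.sum(axis=1), X.shape[1]

theta = np.array([0.6, 0.4])   # initial head probabilities of the two coins
pi = np.array([0.5, 0.5])      # initial mixing proportions

for _ in range(50):
    # E-step: q(z|x) is the exact posterior p(z|x, theta) for the fixed theta,
    # i.e. the KL-minimising choice of q.
    log_joint = (heads[:, None] * np.log(theta)
                 + (n - heads)[:, None] * np.log(1 - theta)
                 + np.log(pi))
    q = np.exp(log_joint - log_joint.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)

    # M-step: maximise E_q[log p(x, z | theta)] in closed form.
    pi = q.mean(axis=0)
    theta = (q * heads[:, None]).sum(axis=0) / (q * n).sum(axis=0)

print(theta, pi)
</pre>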
  
The goal of this method is to define constraints over the posteriors, so that prior information can be imposed on them, by specifying a constraint set <math>Q_x</math> of allowed distributions over the latent variables <math>z</math>. The method does this by restricting <math>q(z|x)</math>. Thus, <math>Q_x</math> is defined as:
  
 
<math>
Q_x = \{ q(z|x):\exists_\xi \ E_q[f(x,z)] - b_x \le \xi; \left \| \xi \right \|_2^2 < \epsilon^2 \}
</math>
 
where <math>Q_x</math> is the set of valid distributions under which the feature expectations <math>E_q[f(x,z)]</math> are bounded by <math>b_x</math>, and the free variable <math>\epsilon</math> denotes the amount of violation allowed.
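For instance, with a small discrete latent space and <math>\epsilon = 0</math> (no slack), membership in <math>Q_x</math> reduces to checking the bounded feature expectations directly; the feature values and bound below are made-up numbers for illustration:

<pre>
# Hypothetical illustration of the constraint set Q_x for one example x with a
# three-valued latent variable z and eps = 0 (no slack); all numbers are made up.
import numpy as np

f = np.array([[0.0], [1.0], [2.0]])   # feature value f(x, z) for each latent value z
b_x = np.array([1.0])                 # bound on the expected feature value

def in_Qx(q):
    """True when E_q[f(x, z)] - b_x <= 0 holds for the distribution q over z."""
    return bool(np.all(q @ f - b_x <= 1e-9))

print(in_Qx(np.array([0.5, 0.5, 0.0])))   # E_q[f] = 0.5 -> True
print(in_Qx(np.array([0.0, 0.0, 1.0])))   # E_q[f] = 2.0 -> False
</pre>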
  
To apply the constraints, the E-step is redefined as:
  
 
<math>
q^{t+1} = \arg\min_{q \in Q_x} KL(q(z|x) \| p_{\theta^t}(z|x))
</math>
  
The new distribution <math>q</math> is then used in the M-step, which remains unchanged.

The key advantage of this method is that the base model remains unchanged, but during learning it is driven to obey the constraints that are set.
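A minimal sketch of this constrained E-step for a single example, assuming the exponentiated dual form <math>q(z) \propto p_{\theta}(z|x)\exp(-\lambda \cdot f(x,z))</math> with <math>\lambda \ge 0</math>, the <math>\epsilon = 0</math> case, and projected gradient ascent on the dual (the numbers and step size are illustrative choices, not the authors' implementation):

<pre>
# Illustrative sketch of the constrained E-step for one example: project the
# model posterior p(z|x, theta) onto Q_x by maximising the dual over lam >= 0
# with projected gradient ascent (eps = 0 case; toy numbers, not from the paper).
import numpy as np

p = np.array([0.7, 0.2, 0.1])         # model posterior p(z|x, theta) over 3 latent values
f = np.array([[0.0], [1.0], [2.0]])   # feature values f(x, z)
b_x = np.array([0.3])                 # bound on E_q[f(x, z)]

lam = np.zeros(1)                     # dual variables, kept non-negative
for _ in range(500):
    q = p * np.exp(-(f @ lam))        # q_lam(z) proportional to p(z) exp(-lam . f(x, z))
    q /= q.sum()
    grad = q @ f - b_x                # dual gradient: E_q[f(x, z)] - b_x
    lam = np.maximum(0.0, lam + 0.1 * grad)

q = p * np.exp(-(f @ lam))
q /= q.sum()
print(q, q @ f)                       # projected posterior; E_q[f] is driven down towards b_x

# This projected q is the distribution handed to the (unchanged) M-step.
</pre>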
  
 
== Applications ==
 
 
Posterior regularization has been used to improve [[Word Alignments]] in [http://www.seas.upenn.edu/~taskar/pubs/cl10.pdf Graca et al, 2010], by defining bijectivity and symmetry constraints over the alignment posteriors.
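As a hypothetical illustration (the alignment posterior below is toy data, not from the cited paper), a bijectivity constraint can be expressed as a bound on the expected number of source words aligned to each target word:

<pre>
# Toy sketch of a bijectivity-style constraint for word alignment: for each
# target position j, the expected number of source words aligned to j should
# be at most 1. The posterior matrix below is made up for illustration.
import numpy as np

# q[i, j] = posterior probability that source word i aligns to target word j
q = np.array([[0.7, 0.2, 0.1],
              [0.6, 0.3, 0.1],
              [0.1, 0.1, 0.8]])

expected_fertility = q.sum(axis=0)      # E_q[# source words aligned to each target word]
b = np.ones(q.shape[1])                 # bijectivity bound: at most 1 in expectation

print(expected_fertility)               # [1.4, 0.6, 1.0] -> first target word violates the bound
print(np.all(expected_fertility <= b))  # False: this posterior lies outside Q_x
</pre>
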
Improvements from Posterior Regularization have also been reported for [[Dependency Grammar Induction]] in low-resource languages in the work by [http://www.seas.upenn.edu/~kuzman/publications/ganchev_acl_2009.pdf Ganchev et al, 2009].

This method has also been used in other applications such as [[Document Classification]], [[Information Extraction]] and [[POS Induction]]. A tutorial describing the method and its applications can be found [http://sideinfo.wikkii.com/wiki/Main_Page here].
