Contrastive Estimation
Revision as of 16:56, 29 September 2011
This is a method proposed in Smith and Eisner (2005), ''Contrastive Estimation: Training Log-Linear Models on Unlabeled Data''.
The proposed approach estimates log-linear models (e.g., Conditional Random Fields) in an unsupervised fashion. The method focuses on the denominator of the log-linear model, exploiting the so-called implicit negative evidence in the probability mass.
Motivation
In the Smith and Eisner (2005) paper, the authors survey different estimation techniques (see the figure above) for probabilistic graphical models. For HMMs, one usually optimizes the joint likelihood; for log-linear models, various methods have been proposed to optimize conditional probabilities. There are also methods that directly maximize classification accuracy, the sum of conditional likelihoods, or expected local accuracy. However, none of these estimation techniques specifically exploits the implicit negative evidence in the denominator of the standard log-linear model in an unsupervised setting.
How it Works
Unlike the above methods, the contrastive estimation approach optimizes:
:<math>\prod_i p(x_i \mid N(x_i), \theta) = \prod_i \frac{\sum_{y} \exp(\theta \cdot f(x_i, y))}{\sum_{x' \in N(x_i)} \sum_{y} \exp(\theta \cdot f(x', y))}</math>
Here, the neighborhood function <math>N(x_i)</math> denotes a set of implicit negative examples together with <math>x_i</math> itself. The idea is to move probability mass from the neighborhood <math>N(x_i)</math> onto <math>x_i</math> itself, so that a well-chosen denominator in the log-linear model can not only improve task accuracy but also reduce the cost of computing the normalization term.
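As a minimal sketch of the contrastive idea (the feature function, label set, and neighborhood function below are toy assumptions for illustration, not the ones used in the paper), the objective sums each example's score over labels and normalizes only over its neighborhood rather than over the whole input space:

```python
import math

def ce_log_likelihood(theta, f, examples, neighborhood, Y):
    """Contrastive estimation objective: sum_i log p(x_i | N(x_i), theta).
    The normalizer runs only over the neighborhood N(x_i), not all inputs."""
    def score(x, y):
        return math.exp(sum(t * fi for t, fi in zip(theta, f(x, y))))
    total = 0.0
    for x in examples:
        num = sum(score(x, y) for y in Y)                              # mass on x_i
        den = sum(score(xp, y) for xp in neighborhood(x) for y in Y)   # mass on N(x_i)
        total += math.log(num / den)
    return total

# Hypothetical neighborhood: the example itself plus one perturbed negative
# (here, the reversed sequence), loosely in the spirit of swap-style neighborhoods.
def neighborhood(x):
    return [x, x[::-1]]

# Hypothetical toy feature function over (sequence, label) pairs.
def f(x, y):
    return [float(x[0] == y), float(len(x))]

examples = [(1, 0), (0, 1)]
Y = [0, 1]
theta = [0.5, 0.1]
ll = ce_log_likelihood(theta, f, examples, neighborhood, Y)
assert ll < 0.0  # each p(x | N(x)) lies in (0, 1], so the log-likelihood is non-positive
```

Since the neighborhood contains <math>x_i</math> itself, each per-example probability is at most 1, and maximizing this objective pushes mass away from the perturbed negatives.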
Problem Formulation and the Detailed Algorithm
Assume we have a log-linear model that is parameterized by <math>\theta</math>, the input example is <math>x</math>, and the output label is <math>y</math>. A standard log-linear model takes the form
:<math>p(x,y \mid \theta) \overset{\underset{\mathrm{def}}{}}{=} \frac{1}{Z(\theta)} \exp (\theta \cdot f(x,y))</math>
where <math>f(x,y)</math> is a feature vector and <math>Z(\theta)</math> is the normalizing constant (partition function).
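The definition above can be computed directly when the input and label spaces are small enough to enumerate for <math>Z(\theta)</math>. A minimal sketch (the feature function and toy spaces below are illustrative assumptions, not from the paper):

```python
import math

def log_linear_prob(theta, f, x, y, X, Y):
    """p(x, y | theta) = exp(theta . f(x, y)) / Z(theta), where Z(theta)
    sums exp-scores over all (x', y') pairs (tractable only for toy spaces)."""
    def score(xi, yi):
        return math.exp(sum(t * fi for t, fi in zip(theta, f(xi, yi))))
    Z = sum(score(xp, yp) for xp in X for yp in Y)  # partition function
    return score(x, y) / Z

# Hypothetical toy feature function: an agreement indicator and the raw input.
def f(x, y):
    return [1.0 if x == y else 0.0, float(x)]

X, Y = [0, 1], [0, 1]
theta = [1.5, -0.5]
probs = [log_linear_prob(theta, f, x, y, X, Y) for x in X for y in Y]
assert abs(sum(probs) - 1.0) < 1e-9  # a proper distribution over (x, y)
```

The exponential cost of enumerating <math>Z(\theta)</math> over all inputs is exactly what contrastive estimation's neighborhood-restricted denominator is designed to avoid.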