Contrastive Estimation

From Cohen Courses

Revision as of 16:22, 29 September 2011

This is the method proposed by Smith and Eisner (2005): ''Contrastive Estimation: Training Log-Linear Models on Unlabeled Data''.

The proposed approach estimates log-linear models (e.g., Conditional Random Fields) in an unsupervised fashion. The method focuses on the denominator of the log-linear model, exploiting the so-called implicit negative evidence in the probability mass.
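The role of the denominator can be sketched numerically. In contrastive estimation, the observed example is normalized not over the full (often intractable) output space but over a small neighborhood of perturbed variants, which serve as the implicit negative evidence. The bigram features, toy weights, and word-transposition neighborhood below are illustrative assumptions for this sketch, not details taken from the text above.

```python
import math

def ce_log_likelihood(theta, features, x, neighborhood):
    """Contrastive log-likelihood of x against its neighborhood N(x).

    Numerator: the log-linear score of the observed example.
    Denominator: log-sum-exp of scores over N(x) (which must contain x),
    standing in for the full partition function.
    """
    def score(example):
        # log-linear score: theta . f(example), missing features weigh 0
        return sum(theta.get(k, 0.0) * v for k, v in features(example).items())

    log_num = score(x)
    scores = [score(xp) for xp in neighborhood]
    m = max(scores)  # stabilize the log-sum-exp
    log_denom = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_num - log_denom

def bigram_features(sent):
    """Count adjacent word pairs, so reordered variants score differently."""
    words = sent.split()
    out = {}
    for a, b in zip(words, words[1:]):
        key = a + " " + b
        out[key] = out.get(key, 0.0) + 1.0
    return out

# Toy weights favoring the observed word order (hypothetical values).
theta = {"the dog": 1.0, "dog barks": 1.0}
x = "the dog barks"
# Neighborhood: x plus single-transposition variants (implicit negatives).
neighborhood = [x, "dog the barks", "the barks dog"]
ll = ce_log_likelihood(theta, bigram_features, x, neighborhood)
```

Raising `ll` by gradient ascent shifts probability mass within the neighborhood toward the observed sentence and away from its perturbed variants, which is the learning signal contrastive estimation extracts from unlabeled data.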

== Motivation ==

[[File:Ce survey.png]]

In their paper, Smith and Eisner (2005) survey different estimation techniques for probabilistic graphical models (see the figure above). For HMMs, one usually optimizes the joint likelihood. For log-linear models, various methods have been proposed to optimize the conditional probabilities. In addition, there are methods that directly maximize classification accuracy, the sum of conditional likelihoods, or expected local accuracy.

== Problem Formulation ==

== The Algorithm ==

== Some Reflections ==

== Related Papers ==