Contrastive Estimation

This is a [[Category::method]] proposed by [[RelatedPaper::Smith and Eisner 2005:Contrastive Estimation: Training Log-Linear Models on Unlabeled Data]].
The proposed approach deals with the estimation of log-linear models (e.g. [[AddressesProblem::Conditional Random Fields]]) in an unsupervised fashion. The method focuses on the denominator <math>\sum_{x',y'} p(x',y')</math> of the log-linear model by exploiting the so-called ''implicit negative evidence'' in the probability mass.
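
Concretely, the idea is to restrict the normalizer to a ''neighborhood'' of each observed example: probability mass is shifted onto an observed input <math>x_i</math> at the expense of slightly perturbed alternatives <math>x' \in N(x_i)</math>, which act as implicit negative evidence. As a sketch, with the latent structure <math>y</math> summed out, the unsupervised objective takes the form

<math>
\max_{\theta} \prod_{i} \frac{\sum_{y} p_{\theta}(x_i, y)}{\sum_{x' \in N(x_i)} \sum_{y'} p_{\theta}(x', y')}
</math>

where the choice of neighborhood function <math>N(\cdot)</math> (e.g. transposing adjacent words) determines what counts as negative evidence; see the paper for the neighborhoods actually proposed.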
  
 
== Motivation ==


== Problem Formulation ==

== The Algorithm ==

== Some Reflections ==

== Related Papers ==