Contrastive Estimation
 
This is a [[Category::method]] proposed by [[RelatedPaper::Smith and Eisner 2005:Contrastive Estimation: Training Log-Linear Models on Unlabeled Data]].
  
The proposed approach deals with the estimation of log-linear models (e.g. [[AddressesProblem::Conditional Random Fields]]) in an unsupervised fashion. The method focuses on the normalizing denominator <math>\sum_{x',y'} p(x',y')</math> of the log-linear model by exploiting so-called ''implicit negative evidence''. When applied to POS tagging, this technique outperforms EM and remains robust even when the tagging dictionary is of poor quality.
 
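The key idea, in slightly adapted notation from the paper, is that each observed example <math>x_i</math> is made more probable not relative to all possible inputs, but relative to a ''neighborhood'' <math>N(x_i)</math> of perturbed versions of <math>x_i</math>, which act as implicit negative evidence. For example, the paper's TRANS1 neighborhood contains the sentences obtained by transposing a pair of adjacent words. Writing <math>u_\theta(x, y) = \exp(\theta^\top f(x, y))</math> for the unnormalized score, the objective is

:<math>\max_{\theta}\; \sum_{i} \log \frac{\sum_{y} u_\theta(x_i, y)}{\sum_{x' \in N(x_i)} \sum_{y'} u_\theta(x', y')}</math>

Because the global partition function cancels in this ratio, the intractable sum over all <math>(x', y')</math> pairs is replaced by a tractable sum over the neighborhood.

Below is a minimal sketch of this objective in Python, assuming a toy feature set, brute-force enumeration of taggings, and a TRANS1-style neighborhood; all identifiers are illustrative assumptions, not from the paper:

<pre>
import numpy as np
from itertools import product

TAGS = ["D", "N", "V"]  # toy tag set; illustrative assumption

def features(words, tags):
    # Sparse counts of tag-word emission and tag-tag transition features.
    feats = {}
    for w, t in zip(words, tags):
        feats[("emit", t, w)] = feats.get(("emit", t, w), 0) + 1
    for t1, t2 in zip(tags, tags[1:]):
        feats[("trans", t1, t2)] = feats.get(("trans", t1, t2), 0) + 1
    return feats

def log_score(theta, words, tags):
    # theta . f(x, y): unnormalized log-score of a (sentence, tagging) pair.
    return sum(theta.get(f, 0.0) * v for f, v in features(words, tags).items())

def logsumexp(vals):
    m = max(vals)
    return m + np.log(sum(np.exp(v - m) for v in vals))

def log_sum_over_taggings(theta, words):
    # log sum_y u(x, y), brute-forced over all taggings (fine at toy scale).
    return logsumexp([log_score(theta, words, tags)
                      for tags in product(TAGS, repeat=len(words))])

def trans1_neighborhood(words):
    # The sentence itself plus each single adjacent-word transposition.
    # (Including the original makes the ratio a proper probability.)
    yield list(words)
    for i in range(len(words) - 1):
        w = list(words)
        w[i], w[i + 1] = w[i + 1], w[i]
        yield w

def ce_log_likelihood(theta, sentences):
    # sum_i log [ sum_y u(x_i, y) / sum_{x' in N(x_i), y'} u(x', y') ]
    total = 0.0
    for words in sentences:
        numerator = log_sum_over_taggings(theta, words)
        denominator = logsumexp([log_sum_over_taggings(theta, w)
                                 for w in trans1_neighborhood(words)])
        total += numerator - denominator
    return total

# With all-zero weights every neighbor is equally likely, so a 3-word
# sentence with two transposed neighbors gives log(1/3) ~= -1.0986.
print(ce_log_likelihood({}, [["the", "dog", "barks"]]))
</pre>

In practice <math>\theta</math> would be fit by gradient ascent on this quantity, with the sums over taggings computed by dynamic programming over a lattice rather than by enumeration.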
== Motivation ==
== Problem Formulation ==

== The Algorithm ==

== Some Reflections ==

== Related Papers ==