Contrastive Estimation
From Cohen Courses
Revision as of 16:03, 29 September 2011
This is a method proposed by Smith and Eisner 2005: Contrastive Estimation: Training Log-Linear Models on Unlabeled Data.
The proposed approach deals with the estimation of log-linear models (e.g. Conditional Random Fields) in an unsupervised fashion. The authors focus on the denominator <math>\sum_{x',y'} p(x',y')</math> of the log-linear model by exploiting so-called implicit negative evidence. When applied to POS tagging, this technique outperforms EM and remains robust when the tagging dictionary is of poor quality.
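The key idea is that the expensive sum in the denominator over all possible inputs and outputs can be restricted to a small neighborhood of "implicit negative" perturbations of the observed input. As a minimal illustrative sketch (not the paper's implementation — the scores, sentences, and neighborhood below are invented for illustration, and the hidden-label marginalization over <math>y</math> is omitted for brevity):

```python
import math

def contrastive_log_likelihood(score, x, neighborhood):
    """Log-probability of x relative to its neighborhood N(x):
        log p(x | N(x)) = score(x) - log sum_{x' in N(x)} exp(score(x')).
    `score` stands in for the log-linear model's unnormalized log-score;
    the full partition function over all inputs is never computed --
    only the small sum over the neighborhood."""
    log_z = math.log(sum(math.exp(score(xp)) for xp in neighborhood))
    return score(x) - log_z

# Toy example: the observed sentence should outscore its perturbed
# "negative" neighbors (here, word transpositions/substitutions).
score = {"dogs bark": 2.0, "bark dogs": 0.5, "dogs dogs": -1.0}.get
obs = "dogs bark"
neighbors = ["dogs bark", "bark dogs", "dogs dogs"]  # N(x) includes x itself
print(contrastive_log_likelihood(score, obs, neighbors))
```

Training would adjust the model's feature weights to increase this quantity for each observed sentence, pushing probability mass away from the perturbed neighbors.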