Entropy Gradient for Semi-Supervised Conditional Random Fields

This [[Category::method]] is used by [[RelatedPaper::Mann and McCallum, 2007]] for efficient computation of the entropy gradient used as a regularizer to train [[AddressesProblem::semi-supervised]] [[UsesMethod::Conditional Random Fields|conditional random fields]]. The method improves on the approach originally proposed by [[RelatedPaper::Jiao et al., 2006]] by computing the gradient over the unlabeled portion of the training data more efficiently.
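
Concretely, for a log-linear model <math>p_\theta(y \mid x) \propto \exp\left(\sum_k \theta_k f_k(x,y)\right)</math>, the entropy of the predicted label distribution is <math>H(p_\theta(y \mid x)) = -\sum_y p_\theta(y \mid x) \log p_\theta(y \mid x)</math>, and its gradient takes the covariance form sketched below. This follows from standard exponential-family algebra; the notation is a sketch and not taken verbatim from either paper.

<math>
\frac{\partial H}{\partial \theta_k}
= -\sum_y p_\theta(y \mid x)\left(f_k(x,y) - \mathbb{E}_{p_\theta}[f_k]\right)\log p_\theta(y \mid x)
= -\operatorname{Cov}_{p_\theta(y \mid x)}\!\left[f_k(x,y),\, \log p_\theta(y \mid x)\right]
</math>

For linear-chain [[UsesMethod::Conditional Random Fields|conditional random fields]], the sum over label sequences <math>y</math> is exponential in the sequence length, so the practical question addressed by [[RelatedPaper::Mann and McCallum, 2007]] is how to compute this gradient efficiently with dynamic programming over the label lattice.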

== Summary ==

Entropy regularization (ER) is a method applied to [[AddressesProblem::semi-supervised learning]] that augments a standard conditional likelihood objective function with an additional term that aims to minimize the predicted label entropy on unlabeled data. By insisting on peaked, confident predictions, ER guides the decision boundary away from dense regions of input space. Entropy regularization for semi-supervised learning was first proposed for classification tasks by [[RelatedPaper::Grandvalet and Bengio, 2004]].
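
As a sketch, with a labeled set <math>\{(x^{(i)}, y^{(i)})\}_{i=1}^{N}</math> and an unlabeled set <math>\{x^{(j)}\}_{j=N+1}^{M}</math>, the entropy-regularized objective can be written as below; the tradeoff weight <math>\lambda</math> and the index conventions are notational assumptions here, not quoted from the papers.

<math>
\max_\theta \; \sum_{i=1}^{N} \log p_\theta\!\left(y^{(i)} \mid x^{(i)}\right) \;-\; \lambda \sum_{j=N+1}^{M} H\!\left(p_\theta\!\left(y \mid x^{(j)}\right)\right)
</math>

Minimizing the entropy term rewards confident labelings of the unlabeled examples, which is the formal counterpart of steering the decision boundary away from dense regions of the input space.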
  
 
== General Definition ==


== Related Papers ==