Entropy Minimization for Semi-supervised Learning

This is a method introduced by Grandvalet and Bengio in Semi-supervised Learning by Entropy Minimization (NIPS 2005). Minimum entropy regularization can be applied to any model of the posterior distribution of class labels. For unlabeled examples to be informative under this technique, the classes must be well apart, separated by a low-density region.

The learning set is denoted $\mathcal{L}_n = \{(x_i, z_i)\}_{i=1}^{n}$, where the indicator vector $z_i \in \{0,1\}^{K}$ encodes the (possibly missing) class label: if $x_i$ is labeled as $\omega_k$, then $z_{ik} = 1$ and $z_{i\ell} = 0$ for $\ell \neq k$; if $x_i$ is unlabeled, then $z_{i\ell} = 1$ for $\ell = 1, \dots, K$.
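
As a concrete illustration of this encoding, here is a minimal sketch in Python (the function name encode_labels and the use of NumPy are illustrative choices, not part of the original write-up):

```python
import numpy as np

def encode_labels(labels, K):
    """Build the indicator matrix z: a one-hot row for a labeled
    example, a row of all ones for an unlabeled one (label = None)."""
    z = np.zeros((len(labels), K))
    for i, y in enumerate(labels):
        if y is None:       # unlabeled: z_il = 1 for l = 1, ..., K
            z[i, :] = 1.0
        else:               # labeled as class y: z_iy = 1, rest 0
            z[i, y] = 1.0
    return z

# Example: three points, the second one unlabeled
print(encode_labels([0, None, 2], K=3))
# [[1. 0. 0.]
#  [1. 1. 1.]
#  [0. 0. 1.]]
```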

The empirical conditional entropy of the class labels conditioned on the observed variables is

$H_{\mathrm{emp}}(Y \mid X, Z; \mathcal{L}_n) = -\frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} P(\omega_k \mid x_i, z_i) \log P(\omega_k \mid x_i, z_i)$

Assuming that labels are missing at random, the posterior can be written in terms of the model posteriors $f_k(x_i) = P(\omega_k \mid x_i)$:

$P(\omega_k \mid x_i, z_i) = \frac{z_{ik} f_k(x_i)}{\sum_{\ell=1}^{K} z_{i\ell} f_\ell(x_i)}$

so that for a labeled example the posterior concentrates on the observed class and contributes zero entropy, while for an unlabeled example it reduces to $f_k(x_i)$.
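
A short sketch of this entropy computation under the missing-at-random assumption (reusing the z matrix from encode_labels above; the function name and the eps smoothing constant are illustrative):

```python
import numpy as np

def conditional_entropy(f, z, eps=1e-12):
    """Empirical conditional entropy H_emp(Y | X, Z).

    f: (n, K) model posteriors f_k(x_i) = P(w_k | x_i).
    z: (n, K) 0/1 indicator matrix as defined above.
    """
    p = z * f                                 # z_ik * f_k(x_i)
    p = p / p.sum(axis=1, keepdims=True)      # normalize over classes
    # Labeled rows are one-hot (zero entropy); unlabeled rows equal f.
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))
```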

The parameters are then estimated by maximizing the conditional log-likelihood penalized by this entropy term:

$C(\theta, \lambda; \mathcal{L}_n) = L(\theta; \mathcal{L}_n) - \lambda H_{\mathrm{emp}}(Y \mid X, Z; \mathcal{L}_n)$

where $L(\theta; \mathcal{L}_n) = \sum_{i=1}^{n} \log \sum_{k=1}^{K} z_{ik} f_k(x_i; \theta)$ is the conditional log-likelihood of the observed labels and $\lambda \geq 0$ controls the strength of the entropy regularizer. Minimizing the entropy of the predictions on unlabeled data drives the decision boundary toward low-density regions, in line with the assumption above.
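
To make the criterion concrete, here is a minimal sketch of entropy-regularized multinomial logistic regression trained by plain gradient descent (the softmax model, learning rate, and all names are our own illustrative assumptions; the criterion itself admits any posterior model):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_entropy_regularized(X, z, lam=0.1, lr=0.5, steps=1000):
    """Minimize -L(theta) + lam * H_emp for a softmax (logistic) model.

    X: (n, d) features; z: (n, K) indicator matrix (one-hot rows for
    labeled points, all-ones rows for unlabeled points).
    """
    n, d = X.shape
    K = z.shape[1]
    W = np.zeros((d, K))
    labeled = z.sum(axis=1) == 1              # one-hot rows
    for _ in range(steps):
        f = softmax(X @ W)                    # f_k(x_i; theta)
        G = np.zeros_like(f)                  # gradient w.r.t. logits
        # Log-likelihood part: unlabeled rows contribute nothing since
        # sum_k z_ik f_k = 1; labeled rows give standard cross-entropy.
        G[labeled] = f[labeled] - z[labeled]
        # Entropy part on unlabeled rows: dH/ds_j = -f_j (log f_j + H)
        fu = f[~labeled]
        logf = np.log(fu + 1e-12)
        H = -np.sum(fu * logf, axis=1, keepdims=True)
        G[~labeled] = lam * (-fu * (logf + H))
        W -= lr * (X.T @ G) / n
    return W
```

Setting lam = 0 recovers plain supervised logistic regression; increasing lam makes the classifier more confident on the unlabeled points, which is the low-density-separation behavior the assumption above calls for.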