Entropy Minimization for Semi-supervised Learning

This is a method introduced by Grandvalet and Bengio in "Semi-supervised Learning by Entropy Minimization" (NIPS 2004).

Minimum entropy regularization can be applied to any model of the posterior distribution. For unlabeled examples to be informative under this technique, the key assumption is that classes are well separated, with a low-density region between them (the cluster assumption).

The learning set is denoted $\mathcal{L}_n = \{(x_i, z_i)\}_{i=1}^{n}$, where $z_i \in \{0,1\}^K$ is a vector of class-label indicators: if $x_i$ is labeled as $\omega_k$, then $z_{ik} = 1$ and $z_{i\ell} = 0$ for $\ell \neq k$; if $x_i$ is unlabeled, then $z_{i\ell} = 1$ for $\ell = 1, \ldots, K$.
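
As a concrete illustration, the indicator vectors can be built as follows. This is a minimal NumPy sketch, not from the source; the function name and the convention of marking unlabeled examples with -1 are assumptions made here.

```python
import numpy as np

def label_indicators(y, K):
    """Build the indicator vectors z_i described above.

    y: length-n array of class indices in {0, ..., K-1},
       with -1 marking unlabeled examples (an assumed convention).
    """
    z = np.zeros((len(y), K))
    for i, label in enumerate(y):
        if label < 0:           # unlabeled: z_il = 1 for l = 1, ..., K
            z[i, :] = 1.0
        else:                   # labeled as omega_k: one-hot at k
            z[i, label] = 1.0
    return z

# Example: label_indicators(np.array([0, 2, -1]), K=3)
# -> [[1, 0, 0], [0, 0, 1], [1, 1, 1]]
```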

The regularizer is the empirical conditional entropy of the class labels given the observed variables:

$$H_{\mathrm{emp}}(Y \mid X, Z; \mathcal{L}_n) = -\frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} P(\omega_k \mid x_i, z_i) \log P(\omega_k \mid x_i, z_i)$$

Assuming that labels are missing at random, we have

$$P(\omega_k \mid x_i, z_i) = \frac{z_{ik} \, f_k(x_i; \theta)}{\sum_{\ell=1}^{K} z_{i\ell} \, f_\ell(x_i; \theta)},$$

where $f_k(x_i; \theta)$ is the model of the posterior distribution $P(\omega_k \mid x_i)$. For a labeled example this posterior is deterministic, so only unlabeled examples contribute to the entropy above.
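
In code, this masked-and-renormalized posterior and the entropy above can be computed directly from an $n \times K$ array of model outputs. Continuing the NumPy sketch (the array f holds $f_k(x_i; \theta)$; eps is a numerical-stability constant added here, not part of the formula):

```python
def posterior_given_z(f, z):
    """P(omega_k | x_i, z_i): mask the model posteriors f_k(x_i) by the
    indicators z_i and renormalize each row."""
    masked = z * f
    return masked / masked.sum(axis=1, keepdims=True)

def empirical_conditional_entropy(f, z, eps=1e-12):
    """H_emp(Y | X, Z; L_n): average entropy of the masked posteriors.
    Labeled rows are one-hot after masking and contribute zero, so the
    penalty acts only on unlabeled examples."""
    p = posterior_given_z(f, z)
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))
```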

The maximum a posteriori estimate is then defined as the maximizer of the conditional log-likelihood penalized by the entropy term:

$$C(\theta, \lambda; \mathcal{L}_n) = L(\theta; \mathcal{L}_n) - \lambda H_{\mathrm{emp}}(Y \mid X, Z; \mathcal{L}_n),$$

where $L(\theta; \mathcal{L}_n) = \sum_{i=1}^{n} \log \Big( \sum_{k=1}^{K} z_{ik} \, f_k(x_i; \theta) \Big)$ is the conditional log-likelihood of the observed labels and $\lambda$ trades off likelihood against entropy.
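
The criterion itself then combines the log-likelihood of the observed indicators with the entropy penalty. A sketch continuing the functions above (lam plays the role of $\lambda$); it returns a value to be maximized, so a gradient-based optimizer would minimize its negative:

```python
def entropy_regularized_criterion(f, z, lam):
    """C(theta, lambda; L_n) = L(theta) - lambda * H_emp, to be maximized.
    For unlabeled rows sum_k z_ik f_k = 1, so log(1) = 0 and they add
    nothing to the likelihood term, as in the formula above."""
    log_likelihood = np.sum(np.log(np.sum(z * f, axis=1)))
    return log_likelihood - lam * empirical_conditional_entropy(f, z)
```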

Minimum entropy regularizers have also been used to encode learnability priors (M. Brand) and to learn weight-function parameters in the context of transduction in manifold learning (Zhu et al.).