Entropy Minimization for Semi-supervised Learning

Minimum entropy regularization can be applied to any model of the posterior distribution.
 
The learning set is denoted <math> L_{n} = \{x_{i}, z_{i}\}^{n}_{i=1} </math>, where <math> z_{i} \in \{0,1\}^{K} </math>: if <math> x_{i} </math> is labeled as <math> w_{k} </math>, then <math> z_{ik} = 1 </math> and <math> z_{il} = 0 </math> for <math> l \not= k </math>; if <math> x_{i} </math> is unlabeled, then <math> z_{il} = 1 </math> for <math> l = 1 \dots K </math>.
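A minimal sketch of this encoding in NumPy (the function name <code>encode_labels</code> and the array representation are illustrative assumptions, not part of the original formulation):

<syntaxhighlight lang="python">
import numpy as np

def encode_labels(labels, num_classes):
    """Build the z_i vectors: `labels` holds a class index (0..K-1)
    for labeled points and None for unlabeled ones."""
    z = np.zeros((len(labels), num_classes))
    for i, k in enumerate(labels):
        if k is None:
            z[i, :] = 1.0   # unlabeled: z_il = 1 for l = 1..K
        else:
            z[i, k] = 1.0   # labeled as class k: z_ik = 1, rest 0
    return z

# Example with K = 3 classes; the third point is unlabeled.
z = encode_labels([0, 2, None], num_classes=3)
# z -> [[1, 0, 0], [0, 0, 1], [1, 1, 1]]
</syntaxhighlight>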
The conditional entropy of the class labels conditioned on the observed variables is:

<math>
H(Y|X,Z; L_{n}) = -\frac{1}{n} \sum^{n}_{i=1} \sum^{K}_{k=1} P(Y=w_{k}|x_{i}, z_{i}) \log P(Y=w_{k}|x_{i},z_{i})
</math>
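As a sketch of how this quantity can be computed, the function below assumes an <math> n \times K </math> NumPy array of model posteriors <math> P(Y=w_{k}|x_{i},z_{i}) </math>; the name <code>conditional_entropy</code> and the <code>eps</code> clipping are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

def conditional_entropy(posteriors, eps=1e-12):
    """Empirical conditional entropy H(Y|X,Z; L_n) from an (n, K)
    array whose rows are P(Y = w_k | x_i, z_i)."""
    p = np.clip(posteriors, eps, 1.0)   # guard against log(0)
    return -np.mean(np.sum(p * np.log(p), axis=1))
</syntaxhighlight>

Note that for a labeled point the posterior given <math> z_{i} </math> concentrates on the observed class, so its entropy is zero; only the unlabeled points contribute to this term.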
