Entropy Minimization for Semi-supervised Learning


Minimum entropy regularization can be applied to any model of the posterior distribution.

The learning set is denoted <math>\mathcal{L}_{n} = \{X^{(i)}, Z^{(i)}\}^{n}_{i=1}</math>, where <math>Z^{(i)} \in \{0,1\}^K</math> encodes the label information: if <math>X^{(i)}</math> is labeled as class <math>\omega_{k}</math>, then <math>Z^{(i)}_{k} = 1</math> and <math>Z^{(i)}_{l} = 0</math> for <math>l \neq k</math>; if <math>X^{(i)}</math> is unlabeled, then <math>Z^{(i)}_{l} = 1</math> for <math>l = 1 \dots K</math>.
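As an illustration of this label encoding, here is a minimal NumPy sketch (the function name <code>make_label_matrix</code> and the convention of marking unlabeled examples with -1 are assumptions for illustration, not part of the original formulation):

<pre>
import numpy as np

def make_label_matrix(labels, K):
    """Build the n-by-K matrix Z: labels[i] is a class index in 0..K-1,
    or -1 if example i is unlabeled (assumed convention)."""
    n = len(labels)
    Z = np.zeros((n, K), dtype=int)
    for i, y in enumerate(labels):
        if y >= 0:
            Z[i, y] = 1   # labeled: one-hot row
        else:
            Z[i, :] = 1   # unlabeled: uninformative all-ones row
    return Z

# Example with K = 3 classes; the second example is unlabeled:
# make_label_matrix([0, -1, 2], K=3) ->
# [[1, 0, 0],
#  [1, 1, 1],
#  [0, 0, 1]]
</pre>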

The regularizer is the conditional entropy of the class labels conditioned on the observed variables.
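In the notation above, the empirical estimate of this entropy takes the following form (a reconstruction following Grandvalet and Bengio's formulation; the exact notation is an assumption of this sketch):

<math> H_{emp} = -\frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} P(\omega_k \mid X^{(i)}, Z^{(i)}) \log P(\omega_k \mid X^{(i)}, Z^{(i)}) </math>

For a labeled example the inner sum is zero, since the posterior defined below puts all mass on the observed class; only unlabeled examples contribute, and the penalty is smallest when the model's predictions on them are confident.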

The posterior distribution is defined as follows.
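Following the paper's construction (a sketch; writing <math>f_k(X^{(i)})</math> for the model's estimate of <math>P(\omega_k \mid X^{(i)})</math> is an assumption of this presentation), the posterior masks the model output by the label encoding and renormalizes:

<math> P(\omega_k \mid X^{(i)}, Z^{(i)}) = \frac{Z^{(i)}_{k} f_k(X^{(i)})}{\sum_{l=1}^{K} Z^{(i)}_{l} f_l(X^{(i)})} </math>

so a labeled example keeps only its observed class, and an unlabeled example keeps the model's full predictive distribution.

The two formulas combine into a short NumPy sketch (again, names are illustrative; <code>F</code> holds the model outputs <math>f_k(X^{(i)})</math> row by row):

<pre>
import numpy as np

def posterior(F, Z):
    """g[i, k] = P(omega_k | X^(i), Z^(i)): mask the model
    outputs by the label encoding Z and renormalize each row."""
    G = Z * F
    return G / G.sum(axis=1, keepdims=True)

def conditional_entropy(F, Z, eps=1e-12):
    """Empirical conditional entropy H_emp; eps guards log(0)."""
    G = posterior(F, Z)
    return -np.mean(np.sum(G * np.log(G + eps), axis=1))

# Model outputs for 3 examples and K = 3 classes; the second
# example is unlabeled, so only it contributes to the entropy.
F = np.array([[0.8, 0.1, 0.1],
              [0.4, 0.4, 0.2],
              [0.1, 0.2, 0.7]])
Z = np.array([[1, 0, 0],
              [1, 1, 1],
              [0, 0, 1]])
print(conditional_entropy(F, Z))  # lower = more confident predictions
</pre>

Minimizing this entropy alongside the usual conditional log-likelihood of the labeled data is what the paper calls minimum entropy regularization.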