Entropy Minimization for Semi-supervised Learning

Minimum entropy regularization can be applied to any model of the posterior distribution.

The learning set is denoted <math> \mathcal{L}_{n} = \{x_{i}, z_{i}\}^{n}_{i=1} </math>, where <math> z_{i} \in \{0,1\}^{K} </math>: if <math> x_{i} </math> is labeled as <math> \omega_{k} </math>, then <math> z_{ik} = 1 </math> and <math> z_{il} = 0 </math> for <math> l \neq k </math>; if <math> x_{i} </math> is unlabeled, then <math> z_{il} = 1 </math> for <math> l = 1, \ldots, K </math>.
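For concreteness, here is a small NumPy sketch of this dummy-variable encoding; the variable names and the choice <math> K = 3 </math> are illustrative only.

<pre>
import numpy as np

K = 3  # number of classes (illustrative)

# Labeled example: suppose x_i is labeled as class omega_2 (index 1),
# so z_ik = 1 for that class and z_il = 0 for every other l.
z_labeled = np.zeros(K, dtype=int)
z_labeled[1] = 1

# Unlabeled example: z_il = 1 for l = 1, ..., K.
z_unlabeled = np.ones(K, dtype=int)

print(z_labeled)    # [0 1 0]
print(z_unlabeled)  # [1 1 1]
</pre>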

The regularizer is the conditional entropy of the class labels conditioned on the observed variables; following Grandvalet and Bengio's formulation, its empirical estimate is:
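<math> H_{emp}(Y|X,Z; \mathcal{L}_{n}) = -\frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} P(\omega_{k}|x_{i}, z_{i}) \log P(\omega_{k}|x_{i}, z_{i}) </math>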

The posterior distribution is defined as
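<math> P(\omega_{k}|x_{i}, z_{i}) = \frac{z_{ik} f_{k}(x_{i})}{\sum_{l=1}^{K} z_{il} f_{l}(x_{i})} </math>

where <math> f_{k}(x_{i}) </math> is the model of the posterior <math> P(\omega_{k}|x_{i}) </math>. Note that for a labeled example this reduces to the one-hot vector <math> z_{i} </math>, so labeled points contribute zero entropy and the regularizer acts only on the unlabeled points.

As an illustration of how the pieces fit together, here is a minimal NumPy sketch of the entropy-regularized criterion <math> C(\theta, \lambda; \mathcal{L}_{n}) = L(\theta; \mathcal{L}_{n}) - \lambda H_{emp}(Y|X,Z; \mathcal{L}_{n}) </math>, evaluated for fixed model outputs; the function name, the example numbers, and the value of <math> \lambda </math> are illustrative, not taken from the original paper.

<pre>
import numpy as np

def entropy_regularized_criterion(f, z, lam=0.5, eps=1e-12):
    """Value of L - lambda * H_emp for fixed model outputs (sketch).

    f   : (n, K) array of model posteriors f_k(x_i)
    z   : (n, K) 0/1 array of dummy label variables as defined above
    lam : regularization weight lambda (illustrative value)
    """
    zf = z * f
    s = np.maximum(zf.sum(axis=1, keepdims=True), eps)
    # Posterior given observed labels: g_ik = z_ik f_k(x_i) / sum_l z_il f_l(x_i)
    g = zf / s
    # Log-likelihood term: sum_i log sum_k z_ik f_k(x_i)
    log_lik = np.log(s).sum()
    # Empirical conditional entropy H_emp(Y|X,Z); zero for labeled points
    H = -np.mean((g * np.log(np.maximum(g, eps))).sum(axis=1))
    return log_lik - lam * H

# Two labeled examples and one unlabeled example (illustrative numbers):
f = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.4, 0.4, 0.2]])
z = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 1]])
print(entropy_regularized_criterion(f, z))
</pre>

Maximizing this criterion in <math> \theta </math> pushes the model toward confident (low-entropy) predictions on the unlabeled data while still fitting the labeled data.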