Entropy Minimization for Semi-supervised Learning


Minimum entropy regularization can be applied to any model of the posterior distribution.
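
For instance, with any classifier that outputs class probabilities, the regularizer can simply be added to the usual supervised log-loss. Below is a minimal NumPy sketch of the criterion in its minimization form; the weight <code>lam</code> and the function names are illustrative choices, not fixed by the method.

<pre>
import numpy as np

def nll_labeled(probs, labels):
    """Average negative log-likelihood on the labeled examples.

    probs:  (n_l, K) array of predicted class probabilities.
    labels: (n_l,) array of integer class indices.
    """
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def prediction_entropy(probs):
    """Average entropy of the predictions on the unlabeled examples."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.mean(np.sum(p * np.log(p), axis=1))

def entropy_regularized_loss(probs_labeled, labels, probs_unlabeled, lam=0.1):
    """Supervised log-loss plus lam times the entropy of the unlabeled
    predictions; minimizing it favors confident (low-entropy) decisions
    on the unlabeled data while still fitting the labels."""
    return nll_labeled(probs_labeled, labels) + lam * prediction_entropy(probs_unlabeled)
</pre>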

The learning set is denoted <math>L_{n} = \{x_{i}, z_{i}\}^{n}_{i=1}</math>, where <math>z_{i} \in \{0,1\}^{K}</math> encodes the available label information: if <math>x_{i}</math> is labeled as <math>w_{k}</math>, then <math>z_{ik} = 1</math> and <math>z_{il} = 0</math> for <math>l \neq k</math>; if <math>x_{i}</math> is unlabeled, then <math>z_{il} = 1</math> for <math>l = 1, \ldots, K</math>.
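
As a concrete illustration, the indicator vectors can be built as in the following short NumPy sketch (the helper name <code>make_label_indicator</code> and the use of <code>None</code> for a missing label are choices made here for illustration only):

<pre>
import numpy as np

def make_label_indicator(label, K):
    """Return the indicator vector z for one example.

    label: integer class index in {0, ..., K-1}, or None if unlabeled.
    K:     number of classes.
    """
    if label is None:
        # Unlabeled: every class remains possible, so z_l = 1 for all l.
        return np.ones(K, dtype=int)
    z = np.zeros(K, dtype=int)
    z[label] = 1   # labeled as class k: z_k = 1 and z_l = 0 for l != k
    return z

# Example with K = 3 classes:
# make_label_indicator(1, 3)    -> array([0, 1, 0])
# make_label_indicator(None, 3) -> array([1, 1, 1])
</pre>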

The conditional entropy of the class labels conditioned on the observed variables is:

<math>
H(Y|X,Z; L_{n}) = -\frac{1}{n} \sum^{n}_{i=1} \sum^{K}_{k=1} P(Y^{(i)}=w_{k}|X^{(i)}, Z^{(i)}) \log P(Y^{(i)}=w_{k}|X^{(i)},Z^{(i)})
</math>
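
A direct NumPy sketch of this quantity, assuming the conditional posteriors are already available as an <math>n \times K</math> array (the function name <code>empirical_conditional_entropy</code> is illustrative), is:

<pre>
import numpy as np

def empirical_conditional_entropy(posteriors):
    """H(Y|X,Z; L_n) = -(1/n) * sum_i sum_k p_ik * log(p_ik).

    posteriors: (n, K) array whose i-th row holds
                P(Y^(i) = w_k | X^(i), Z^(i)) for k = 1, ..., K.
    """
    p = np.clip(posteriors, 1e-12, 1.0)            # guard against log(0)
    return -np.mean(np.sum(p * np.log(p), axis=1))

# Example: a confidently classified point contributes (near) zero entropy,
# while an uncertain one contributes a large share.
posteriors = np.array([[1.0, 0.0, 0.0],
                       [0.4, 0.4, 0.2]])
print(empirical_conditional_entropy(posteriors))   # about 0.53
</pre>

Under the posterior sketched below, labeled examples contribute essentially nothing to this sum, since their conditional distribution is concentrated on the observed class; the term is driven by the unlabeled examples.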

The posterior distribution <math>P(Y^{(i)}=w_{k}|X^{(i)},Z^{(i)})</math> entering this entropy is defined in terms of the model and the label indicators; a sketch of the usual form is given below.
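
A common way to write this posterior, sketched here under the assumption that <math>g_{k}(x_{i})</math> denotes the model's estimate of <math>P(Y=w_{k}|x_{i})</math> (the symbol <math>g_{k}</math> is introduced only for this sketch), is

<math>
P(Y^{(i)}=w_{k}|X^{(i)},Z^{(i)}) = \frac{z_{ik}\, g_{k}(x_{i})}{\sum^{K}_{l=1} z_{il}\, g_{l}(x_{i})}
</math>

With this form, a labeled example puts all of its posterior mass on the observed class, while an unlabeled example (all <math>z_{il}=1</math>) falls back to the model posterior <math>g_{k}(x_{i})</math>.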