Entropy Minimization for Semi-supervised Learning
Minimum entropy regularization can be applied to any model of the posterior distribution.
The learning set is denoted <math> \mathcal{L}_{n} = \{X^{(i)}, Z^{(i)}\}^{n}_{i=1} </math>, where <math> Z^{(i)} \in \{0,1\}^K </math>: if <math> X^{(i)} </math> is labeled as <math> \omega_{k} </math>, then <math> Z^{(i)}_{k} = 1 </math> and <math> Z^{(i)}_{l} = 0 </math> for <math> l \not= k </math>; if <math> X^{(i)} </math> is unlabeled, then <math> Z^{(i)}_{l} = 1 </math> for <math> l = 1 \dots K </math>.
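To make the encoding concrete, here is a minimal NumPy sketch of building the <math> Z </math> matrix (the helper name <code>encode_labels</code> and the use of <code>None</code> to mark unlabeled points are illustrative assumptions, not part of the original page):

<syntaxhighlight lang="python">
import numpy as np

def encode_labels(labels, K):
    """Build the Z matrix described above: a one-hot row for a point
    labeled with class k, an all-ones row for an unlabeled point.
    `labels` is a list of class indices (0..K-1) or None for unlabeled."""
    Z = np.zeros((len(labels), K))
    for i, y in enumerate(labels):
        if y is None:      # unlabeled: Z^(i)_l = 1 for l = 1..K
            Z[i, :] = 1.0
        else:              # labeled as class y: Z^(i)_y = 1, others 0
            Z[i, y] = 1.0
    return Z

# Example: three points, K = 3 classes; the second point is unlabeled.
print(encode_labels([0, None, 2], K=3))
# [[1. 0. 0.]
#  [1. 1. 1.]
#  [0. 0. 1.]]
</syntaxhighlight>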
The conditional entropy of the class labels conditioned on the observed variables is
<math> H_{emp}(\omega \mid X, Z; \mathcal{L}_{n}) = -\frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} P(\omega_{k} \mid X^{(i)}, Z^{(i)}) \log P(\omega_{k} \mid X^{(i)}, Z^{(i)}) </math>.
The posterior distribution is defined as
<math> P(\omega_{k} \mid X^{(i)}, Z^{(i)}) = \frac{Z^{(i)}_{k} \, P(\omega_{k} \mid X^{(i)})}{\sum_{l=1}^{K} Z^{(i)}_{l} \, P(\omega_{l} \mid X^{(i)})} </math>,
which collapses to the given label for labeled points and reduces to the model posterior <math> P(\omega_{k} \mid X^{(i)}) </math> for unlabeled points.
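Putting the two formulas together, a minimal NumPy sketch of the label-aware posterior and the empirical entropy regularizer might look as follows (<code>G</code>, <code>Z</code>, and the function names are assumptions for illustration; <code>G</code> holds any model's class posteriors <math> P(\omega_{k} \mid X^{(i)}) </math>, e.g. softmax outputs):

<syntaxhighlight lang="python">
import numpy as np

def label_aware_posterior(G, Z, eps=1e-12):
    """P(w_k | x_i, z_i) proportional to Z_ik * G_ik, normalized over k.
    G, Z: (n, K) arrays; G holds the model posteriors P(w_k | x_i)."""
    P = Z * G
    return P / (P.sum(axis=1, keepdims=True) + eps)

def empirical_conditional_entropy(G, Z, eps=1e-12):
    """H_emp = -(1/n) sum_i sum_k P_ik log P_ik, the regularizer above."""
    P = label_aware_posterior(G, Z, eps)
    return -np.mean(np.sum(P * np.log(P + eps), axis=1))
</syntaxhighlight>

Note that for a labeled point the posterior is a point mass on the given class, so its entropy is (essentially) zero; the regularizer therefore acts only on the unlabeled points, pushing the model toward confident predictions on them.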