Entropy Minimization for Semi-supervised Learning

Minimum entropy regularization can be applied to any model of the posterior distribution <math>P(Y|X)</math>.

The learning set is denoted <math>\mathcal{L}_{n} = \{X^{i}, Z^{i}\}^{n}_{i=1}</math>, where <math>Z^{i} \in \{0,1\}^{K}</math> is a partial-label indicator: if <math>X^{i}</math> is labeled as <math>\omega_{k}</math>, then <math>z_{ik} = 1</math> and <math>z_{il} = 0</math> for <math>l \neq k</math>; if <math>X^{i}</math> is unlabeled, then <math>z_{il} = 1</math> for <math>l = 1, \ldots, K</math>.
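As a concrete illustration of this encoding, here is a minimal sketch in NumPy (the function name, the array layout, and the use of <code>None</code> to mark unlabeled examples are assumptions for illustration, not part of the original formulation):

<source lang="python">
import numpy as np

def make_label_indicators(labels, K):
    """Build the indicator matrix Z for a partially labeled sample.

    labels : list of length n; labels[i] is the class index (0..K-1) of
             example i if it is labeled, or None if it is unlabeled.
    K      : number of classes.

    Returns an (n, K) 0/1 matrix: row i is one-hot if example i is
    labeled, and all ones if example i is unlabeled.
    """
    n = len(labels)
    Z = np.zeros((n, K))
    for i, y in enumerate(labels):
        if y is None:       # unlabeled: z_il = 1 for all l
            Z[i, :] = 1.0
        else:               # labeled as class y: one-hot indicator
            Z[i, y] = 1.0
    return Z

# Example: 3 examples, K = 2 classes; the third example is unlabeled.
Z = make_label_indicators([0, 1, None], K=2)
# Z == [[1, 0], [0, 1], [1, 1]]
</source>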

The conditional entropy of the class labels conditioned on the observed variables is:

<math>
H(Y|X,Z; \mathcal{L}_{n}) = -\frac{1}{n}\sum^{n}_{i=1}\sum^{K}_{k=1} P(Y^{i}=\omega_{k}|X^{i},Z^{i}) \log P(Y^{i}=\omega_{k}|X^{i},Z^{i})
</math>
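A minimal NumPy sketch of this quantity, assuming the per-example posteriors <math>P(Y^{i}=\omega_{k}|X^{i},Z^{i})</math> are stacked into an (n, K) array (the function name and the <code>eps</code> guard against <code>log(0)</code> are illustrative assumptions):

<source lang="python">
import numpy as np

def empirical_conditional_entropy(posteriors, eps=1e-12):
    """Empirical conditional entropy H(Y|X,Z; L_n).

    posteriors : (n, K) array; row i holds the posterior over the K
                 classes for example i. For a labeled example this row
                 is deterministic (one-hot), so its entropy is zero.
    Returns the average over the n examples of the per-row entropy.
    """
    p = np.clip(posteriors, eps, 1.0)
    return -np.mean(np.sum(p * np.log(p), axis=1))
</source>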

Maximizing the posterior distribution over <math>\theta</math> (the MAP estimate) is equivalent to maximizing the criterion

<math>
\begin{align}
C(\theta, \lambda; \mathcal{L}_{n}) & = L(\theta; \mathcal{L}_{n}) - \lambda H(Y|X,Z; \mathcal{L}_{n}) \\
& = \sum^{n}_{i=1} \log\Big(\sum^{K}_{k=1} z_{ik}P(Y^{i}=\omega_{k}|X^{i})\Big) - \lambda H(Y|X,Z; \mathcal{L}_{n})
\end{align}
</math>

where <math>L(\theta; \mathcal{L}_{n})</math> is the conditional log-likelihood of the partially observed labels and <math>\lambda \geq 0</math> controls the strength of the entropy regularizer.
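Putting the two terms together, the sketch below evaluates the regularized criterion for fixed model posteriors (the function name and the indicator-matrix convention follow the sketches above and are assumptions; in practice <math>C</math> would be maximized with respect to <math>\theta</math>, e.g. by gradient ascent on a parametric model):

<source lang="python">
import numpy as np

def entropy_regularized_criterion(posteriors, Z, lam, eps=1e-12):
    """C(theta, lambda; L_n) = L(theta; L_n) - lambda * H(Y|X,Z; L_n).

    posteriors : (n, K) array of model posteriors P(Y^i = w_k | X^i).
    Z          : (n, K) 0/1 indicator matrix (one-hot rows for labeled
                 examples, all-ones rows for unlabeled examples).
    lam        : regularization weight lambda >= 0.
    """
    n = posteriors.shape[0]
    p = np.clip(posteriors, eps, 1.0)
    # Conditional log-likelihood of the observed (partial) labels:
    # sum_i log( sum_k z_ik * P(Y^i = w_k | X^i) ).
    # Unlabeled rows contribute log(1) = 0 since their posteriors sum to 1.
    log_likelihood = np.sum(np.log(np.sum(Z * p, axis=1)))
    # Given z_i, a labeled example has a deterministic posterior, so only
    # unlabeled rows (all-ones indicator) contribute to the entropy term.
    unlabeled = Z.sum(axis=1) > 1
    row_entropy = -np.sum(p * np.log(p), axis=1)
    entropy = np.sum(row_entropy[unlabeled]) / n
    return log_likelihood - lam * entropy

# Toy usage: posteriors from some model for 3 examples, 2 classes.
P = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
Z = np.array([[1, 0], [0, 1], [1, 1]])   # third example is unlabeled
print(entropy_regularized_criterion(P, Z, lam=0.5))
</source>

With <math>\lambda = 0</math> the criterion reduces to the supervised conditional log-likelihood; increasing <math>\lambda</math> pushes the model toward confident (low-entropy) predictions on the unlabeled examples.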