Expectation Regularization
This is a method introduced in G. S. Mann and A. McCallum, ICML 2007. It often serves as a regularization term added to the likelihood function. In practice, humans often have insight into the label prior distribution, and this method introduces a way to take advantage of that prior knowledge.
Let's denote the human-provided prior as <math> \tilde{p} </math> and the model's expected label distribution as <math> \hat{p} </math>. We minimize the distance between <math> \tilde{p} </math> and <math> \hat{p} </math>. The KL-divergence is used here, so the regularization term becomes

<math>
D(\tilde{p}||\hat{p})=\sum_{y} \tilde{p}(y) \log \frac{\tilde{p}(y)}{\hat{p}(y)}
</math>
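As a concrete illustration, here is a minimal sketch in Python (using numpy; the function name, variable names, and the example numbers are hypothetical, not from the paper) of how this KL-based regularizer could be computed, assuming <math> \hat{p} </math> has already been obtained as the model's average predicted label distribution over the unlabeled data:

<pre>
import numpy as np

def expectation_regularizer(p_tilde, p_hat, eps=1e-12):
    # KL(p_tilde || p_hat) = sum_y p_tilde(y) * log(p_tilde(y) / p_hat(y))
    # eps guards against log(0) / division by zero for empty classes.
    p_tilde = np.asarray(p_tilde, dtype=float)
    p_hat = np.asarray(p_hat, dtype=float)
    return float(np.sum(p_tilde * np.log((p_tilde + eps) / (p_hat + eps))))

# Hypothetical numbers: the human prior says labels are split 70/30,
# but the model's average prediction on unlabeled data is 50/50.
p_tilde = [0.7, 0.3]  # human-provided label prior
p_hat = [0.5, 0.5]    # model's expected label distribution
print(expectation_regularizer(p_tilde, p_hat))  # ~0.082
</pre>

During training, this value would be added (with a weight) to the negative log-likelihood objective, so that gradient steps pull the model's expected label distribution toward the human-provided prior.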