Softmax-Margin CRFs: Training Log-Linear Models with Cost Functions
Online: [1]
Citation
Kevin Gimpel and Noah A. Smith. Softmax-margin CRFs: Training log-linear models with cost functions. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 733–736, Los Angeles, California, USA, June 2010.
Summary
The authors want to incorporate a cost function (as used in structured SVMs) into standard conditional log-likelihood models. They introduce the softmax-margin objective function, which achieves the best of both worlds. On an NER task, it performs significantly better than a standard conditional log-likelihood model, a max-margin model, and the perceptron, but is statistically indistinguishable from MIRA, risk, and JRB (Jensen risk bound; defined in the paper).
Brief Description of the Softmax-Margin Objective Function
Consider the objective functions for the following four methods.
Conditional log-likelihood: <math>\min_\theta \sum_{i=1}^n -\boldsymbol{\theta}^T\boldsymbol{f}(x^{(i)},y^{(i)}) + \log \sum_{y \in \mathcal{Y}(x^{(i)})} \exp \{ \boldsymbol{\theta}^T \boldsymbol{f}(x^{(i)},y) \}</math>

Max-margin: <math>\min_\theta \sum_{i=1}^n -\boldsymbol{\theta}^T\boldsymbol{f}(x^{(i)},y^{(i)}) + \max_{y \in \mathcal{Y}(x^{(i)})} (\boldsymbol{\theta}^T \boldsymbol{f}(x^{(i)},y) + cost(y^{(i)}, y))</math>
Risk: <math>\min_\theta \sum_{i=1}^n \sum_{y \in \mathcal{Y}(x^{(i)})} p_{\boldsymbol{\theta}}(y \mid x^{(i)}) \, cost(y^{(i)}, y)</math>, where <math>p_{\boldsymbol{\theta}}(y \mid x) \propto \exp\{\boldsymbol{\theta}^T\boldsymbol{f}(x,y)\}</math>

Softmax-margin: <math>\min_\theta \sum_{i=1}^n -\boldsymbol{\theta}^T\boldsymbol{f}(x^{(i)},y^{(i)}) + \log \sum_{y \in \mathcal{Y}(x^{(i)})} \exp \{ \boldsymbol{\theta}^T \boldsymbol{f}(x^{(i)},y) + cost(y^{(i)}, y) \}</math>

Softmax-margin thus keeps the conditional log-likelihood objective but moves the cost term inside the exp (equivalently, it replaces the max of max-margin with a softmax); setting <math>cost \equiv 0</math> recovers conditional log-likelihood exactly.
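To make the comparison concrete, below is a minimal sketch (not from the paper) of the four per-example objective values, assuming the output space <math>\mathcal{Y}(x)</math> is small enough to enumerate and that feature vectors and costs are precomputed; the function and variable names here (<tt>objectives</tt>, <tt>f_gold</tt>, <tt>f_all</tt>, <tt>costs</tt>) are illustrative. In an actual CRF these sums and maxes would be computed with dynamic programming over the label lattice rather than by enumeration.

<pre>
import numpy as np

def log_sum_exp(a):
    # numerically stable log(sum(exp(a)))
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def objectives(theta, f_gold, f_all, costs):
    """Per-example values of the four objectives (toy, enumerated Y(x)).

    theta  : parameter vector, shape (d,)
    f_gold : features f(x, y_gold) of the true output, shape (d,)
    f_all  : features f(x, y) for every y in Y(x), shape (|Y|, d)
    costs  : cost(y_gold, y) for every y in Y(x), shape (|Y|,)
    """
    gold_score = theta @ f_gold
    scores = f_all @ theta                       # theta^T f(x, y) for all y

    cll = -gold_score + log_sum_exp(scores)
    max_margin = -gold_score + np.max(scores + costs)
    p = np.exp(scores - log_sum_exp(scores))     # p_theta(y | x)
    risk = p @ costs
    softmax_margin = -gold_score + log_sum_exp(scores + costs)
    return cll, max_margin, risk, softmax_margin

# toy usage: three candidate outputs, with cost(y_gold, y_gold) = 0
theta  = np.array([0.5, -1.0])
f_gold = np.array([1.0, 0.0])
f_all  = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
costs  = np.array([0.0, 2.0, 1.0])
print(objectives(theta, f_gold, f_all, costs))
</pre>

Because log-sum-exp upper-bounds max and the costs are nonnegative, the softmax-margin value is never smaller than either the max-margin or the conditional log-likelihood value, consistent with the bounding relationships noted in the paper.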