Martins et al 2010
Revision as of 21:12, 1 October 2011
=== Citation and Online Link ===
A. F. T. Martins, K. Gimpel, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, M. A. T. Figueiredo, 2010. Aggressive Online Learning of Structured Classifiers. Technical report CMU-ML-10-109.
=== Summary ===
This paper generalizes the loss functions of CRFs, structured SVMs, the structured perceptron, and Softmax-margin CRFs into a single parameterized loss function, and then derives an online learning algorithm for that more general loss. For the hinge loss, the learning algorithm reduces to MIRA.
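The unification can be illustrated with a small numeric sketch. This is not the paper's code: the exact form of the generalized loss below (a log-sum-exp over cost-augmented margins, with an inverse temperature beta and a cost scale gamma) is an assumption made for illustration, but it shows how different parameter settings recover the CRF log-loss, an approximate structured hinge loss, and an approximate perceptron loss on a toy three-label problem:

```python
import math

def general_loss(scores, gold, cost, beta, gamma):
    """Hypothetical generalized loss for illustration:
    (1/beta) * log sum_y exp(beta * (s(y) + gamma*cost(y) - s(gold))).
    beta is an inverse temperature; gamma scales the cost term."""
    margins = [scores[y] + gamma * cost[y] - scores[gold] for y in range(len(scores))]
    m = max(margins)  # shift for numerically stable log-sum-exp
    return m + math.log(sum(math.exp(beta * (d - m)) for d in margins)) / beta

scores = [2.0, 1.0, -0.5]  # model scores for 3 toy labels
cost = [0.0, 1.0, 1.0]     # Hamming-style cost relative to the gold label
gold = 0

crf_loss = general_loss(scores, gold, cost, beta=1.0, gamma=0.0)    # CRF log-loss
hinge_loss = general_loss(scores, gold, cost, beta=1e3, gamma=1.0)  # ~ structured hinge
perc_loss = general_loss(scores, gold, cost, beta=1e3, gamma=0.0)   # ~ perceptron loss
```

With beta=1 and gamma=0 the expression is exactly log Z minus the gold score (the CRF negative log-likelihood); as beta grows, the log-sum-exp approaches a max, giving the cost-augmented hinge loss (gamma=1) or the perceptron loss (gamma=0).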
=== Method ===
The general loss function they use is:

Different choices of these parameters correspond to various well-known loss functions:

[[file:Martins et al 2010 Parameter Choices.png]]

The function minimized is the empirical risk with a regularizer:

[[file:Martins et al 2010 Learning Problem.png]] [[file:Martins et al Relarizer.png]]
[[file:Martins et al Regularize Coeff.png]]

The algorithm proposed in the paper is called Dual Coordinate Ascent (DCA):

[[file:Martins et al 2010 DCA.png]]

The parameters are updated using Algorithm 2:

[[file:Martins et al 2010 Alg2.png]]
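Since the paper's algorithm reduces to MIRA in the hinge-loss case, the flavor of the online update can be sketched with a minimal MIRA-style (passive-aggressive) multiclass update. The feature vectors, cost, and aggressiveness cap C below are illustrative assumptions, not the paper's exact Algorithm 2:

```python
def mira_update(w, feats_gold, feats_pred, cost, C=1.0):
    """One MIRA-style update: take the smallest step that repairs the
    cost-augmented margin violation, with the step size capped by C."""
    delta = [g - p for g, p in zip(feats_gold, feats_pred)]
    margin = sum(wi * di for wi, di in zip(w, delta))  # current margin on this example
    loss = max(0.0, cost - margin)                     # cost-augmented hinge loss
    sq_norm = sum(d * d for d in delta)
    if loss == 0.0 or sq_norm == 0.0:
        return w  # passive: no margin violation (or no feature difference)
    tau = min(C, loss / sq_norm)                       # closed-form dual step size
    return [wi + tau * di for wi, di in zip(w, delta)]

w = [0.0, 0.0]
w = mira_update(w, feats_gold=[1.0, 0.0], feats_pred=[0.0, 1.0], cost=1.0)
# w is now [0.5, -0.5]: the smallest step that meets the cost-augmented margin
```

After this single update the margin exactly equals the cost, so repeating the update on the same example leaves the weights unchanged.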
=== Experimental Result ===
=== Related Papers ===
* [[MIRA]]
* [[CRF]]
* [[Softmax-margin CRFs]]
In progress by User:Jmflanig