Martins et al 2010
Latest revision as of 02:08, 11 October 2011

=== Citation and Online Link ===

A. F. T. Martins, K. Gimpel, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, M. A. T. Figueiredo, 2010. Aggressive Online Learning of Structured Classifiers. Technical report CMU-ML-10-109.

=== Summary ===

This [[Category::paper]] generalizes the loss functions of CRFs, structured SVMs, the structured perceptron, and softmax-margin CRFs into a single loss function, and then derives an online learning algorithm that can learn with that more general loss function. For the hinge loss, the learning algorithm reduces to MIRA.

=== Method ===

The general loss function is:

[[file:Martins et al 2010 Loss Function.png]]

Different choices of these two parameters correspond to various well-known loss functions:

[[file:Martins et al 2010 Parameter Choices.png]]
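As a hedged sketch of such a loss family over an enumerable label set (the parameter names `beta` and `gamma` are illustrative, not necessarily the paper's notation): a cost-augmented log-sum-exp loss with an inverse temperature recovers the well-known special cases in the appropriate limits — CRF log-loss (beta=1, gamma=0), softmax-margin (beta=1, gamma=1), structured hinge (beta large, gamma=1), and perceptron loss (beta large, gamma=0).

```python
import math

def generalized_loss(scores, gold, cost, beta, gamma):
    """Sketch of a generalized structured loss (parameterization assumed).

    scores[y] = model score w . f(x, y) for each candidate output y;
    cost[y]   = task cost of predicting y when `gold` is correct;
    beta      = inverse temperature; gamma = cost weight.
    """
    # Numerically stable log-sum-exp of beta * (score + gamma * cost).
    z = max(beta * (scores[y] + gamma * cost[y]) for y in scores)
    lse = z + math.log(sum(
        math.exp(beta * (scores[y] + gamma * cost[y]) - z) for y in scores))
    # Normalize by beta and subtract the gold score.
    return lse / beta - scores[gold]
```

With two candidates scored 1.0 (gold) and 0.5 at unit cost, large beta and gamma=1 give the hinge loss 0.5, while gamma=0 gives the perceptron loss 0.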

The function minimized is the loss function with a regularizer:

[[file:Martins et al 2010 Learning Problem.png]]  [[file:Martins et al Relarizer.png]]

[[file:Martins et al Regularize Coeff.png]]
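The images above give the paper's exact objective and regularizer; as a minimal sketch, assuming an L2 regularizer (a common choice, not necessarily the paper's), the quantity being minimized has the shape "average loss plus regularization term":

```python
def regularized_objective(w, examples, loss_fn, lam):
    """Sketch of a regularized learning objective (L2 assumed).

    w        = weight vector (list of floats);
    examples = list of (x, y) pairs;
    loss_fn  = per-example loss, e.g. a generalized structured loss;
    lam      = regularization coefficient.
    """
    data_term = sum(loss_fn(w, x, y) for x, y in examples) / len(examples)
    reg_term = 0.5 * lam * sum(wi * wi for wi in w)  # (lam/2) * ||w||^2
    return data_term + reg_term
```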

The online learning algorithm proposed to minimize this function is called Dual Coordinate Ascent (DCA):

[[file:Martins et al 2010 DCA.png]]

The parameters can be updated using Algorithm 2:

[[file:Martins et al 2010 Alg2.png]]
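The summary notes that for the hinge loss the algorithm reduces to MIRA. A minimal sketch of that special case's closed-form online update (the variable names and the aggressiveness cap `C` are illustrative): the step size is the margin violation divided by the squared norm of the feature difference, clipped at `C`.

```python
def mira_update(w, feats_gold, feats_pred, cost, C):
    """MIRA-style (passive-aggressive) update for the hinge-loss case.

    w          = current weights; feats_gold = f(x, y) for the gold output;
    feats_pred = f(x, y_hat) for the cost-augmented prediction;
    cost       = task cost of y_hat; C = aggressiveness cap on the step.
    """
    delta = [g - p for g, p in zip(feats_gold, feats_pred)]
    sq_norm = sum(d * d for d in delta)
    # Margin violation: cost minus the current score margin w . delta.
    violation = cost - sum(wi * di for wi, di in zip(w, delta))
    if violation <= 0 or sq_norm == 0:
        return w  # margin already satisfied: leave weights unchanged
    tau = min(C, violation / sq_norm)
    return [wi + tau * di for wi, di in zip(w, delta)]
```

Starting from zero weights with gold features [1, 0], predicted features [0, 1], and unit cost, the update moves to [0.5, -0.5], at which point the score margin exactly equals the cost.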

=== Experimental Results ===

=== Related Papers ===

* MIRA
* CRF
* Softmax-margin CRFs

In progress by User:Jmflanig