Revision as of 21:22, 1 October 2011

=== Citation and Online Link ===

A. F. T. Martins, K. Gimpel, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, M. A. T. Figueiredo. 2010. Aggressive Online Learning of Structured Classifiers. Technical report CMU-ML-10-109.

=== Summary ===

This paper generalizes the loss functions of CRFs, structured SVMs, the structured perceptron, and softmax-margin CRFs into a single loss function, and then derives an online learning algorithm for that more general loss. For the hinge loss, the learning algorithm reduces to MIRA.

=== Method ===

The general loss function is:

[[file:Martins et al 2010 Loss Function.png]]

Different choices of the two parameters correspond to various well-known loss functions:

[[file:Martins et al 2010 Parameter Choices.png]]
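The exact loss family is defined in the image above; as a rough sketch of the idea, assume it is parameterized by two scalars, here called <code>beta</code> and <code>gamma</code> (an assumed parameterization, not copied from the paper's figure), where the annealing parameter interpolates between a log-loss and a hinge-style max, and the other scales a cost term:

```python
import math

def generalized_loss(scores, gold, cost, beta, gamma):
    """Hypothetical sketch of a two-parameter loss family (assumed form):
    a softmax over cost-augmented score differences, annealed by beta.
    scores: dict mapping each candidate output y' to its model score
    gold:   the gold output y
    cost:   dict mapping y' to its cost relative to the gold output
    """
    # cost-augmented margin terms relative to the gold score
    margins = [scores[y] - scores[gold] + gamma * cost[y] for y in scores]
    # beta -> infinity recovers a max (hinge-style); beta = 1 gives a log-loss
    return (1.0 / beta) * math.log(sum(math.exp(beta * m) for m in margins))

scores = {"a": 2.0, "b": 1.0}
cost = {"a": 0.0, "b": 1.0}
# beta=1, gamma=0 behaves like a CRF log-loss; large beta with gamma=1
# approaches the structured hinge loss (cost-augmented max minus gold score)
crf_like = generalized_loss(scores, "a", cost, beta=1.0, gamma=0.0)
hinge_like = generalized_loss(scores, "a", cost, beta=50.0, gamma=1.0)
```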

The function minimized is the loss function with a regularizer:

[[file:Martins et al 2010 Learning Problem.png]]  [[file:Martins et al Relarizer.png]]

[[file:Martins et al Regularize Coeff.png]]
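The objective has the usual shape of average training loss plus a penalty weighted by the regularization coefficient; a minimal sketch, assuming a squared-L2 regularizer (a standard choice for illustration, with the paper's exact form given in the images above):

```python
def regularized_risk(w, examples, loss_fn, lam):
    """Empirical risk plus a squared-L2 regularizer (assumed form).
    w:        weight vector as a list of floats
    examples: list of (x, y) training pairs
    loss_fn:  loss_fn(w, x, y) -> float, e.g. the generalized loss
    lam:      regularization coefficient trading fit against ||w||^2
    """
    risk = sum(loss_fn(w, x, y) for x, y in examples) / len(examples)
    reg = 0.5 * lam * sum(wi * wi for wi in w)
    return risk + reg
```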

The online learning algorithm proposed to minimize this function is called Dual Coordinate Ascent (DCA):

[[file:Martins et al 2010 DCA.png]]
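The general DCA update is given in the image above; since the algorithm reduces to MIRA for the hinge loss, the flavor of one step can be sketched with the classic MIRA update (a known special case, not the paper's general algorithm):

```python
def mira_update(w, feat_gold, feat_pred, loss, C):
    """One MIRA-style update: move w just far enough to correct the
    current mistake, with the dual step size clipped at aggressiveness C.
    feat_gold / feat_pred: feature vectors of gold and predicted outputs
    loss: hinge loss suffered on this example (margin violation)
    """
    delta = [g - p for g, p in zip(feat_gold, feat_pred)]
    sq_norm = sum(d * d for d in delta)
    if sq_norm == 0.0 or loss <= 0.0:
        return w  # no margin violation, or no feature difference to exploit
    tau = min(C, loss / sq_norm)  # closed-form dual step, clipped at C
    return [wi + tau * di for wi, di in zip(w, delta)]
```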

The parameters can be updated using Algorithm 2:

[[file:Martins et al 2010 Alg2.png]]

=== Experimental Results ===

=== Related Papers ===

* MIRA
* CRF
* Softmax-margin CRFs

In progress by User:Jmflanig