Globerson et al. ICML 2007. Exponentiated Gradient Algorithms for Log Linear Structured Prediction
Exponentiated gradient algorithms for log-linear structured prediction, by A. Globerson, T. Y. Koo, X. Carreras, and M. Collins. In Proceedings of the 24th International Conference on Machine Learning, 2007.

This paper is available online [1].
Summary
This paper proposes an exponentiated gradient (EG) algorithm as a fast and efficient method for training conditional log-linear models such as CRFs.
The conditional log-likelihood of a CRF is commonly optimized via conjugate-gradient or L-BFGS algorithms (Sha & Pereira, 2003), which typically require at least one pass through the entire dataset before updating the weight vector. The authors' approach here is an online algorithm based on exponentiated gradient updates (Kivinen & Warmuth, 1997).
Brief description of the method
Consider a supervised learning setting with objects <math>x \in \mathcal{X}</math> and corresponding labels <math>y \in \mathcal{Y}</math>, which may be trees, sequences, or other high-dimensional structures. Also, assume we are given a function <math>\phi</math> that maps pairs <math>(x, y)</math> to feature vectors <math>\phi(x, y) \in \mathbb{R}^d</math>. Given a parameter vector <math>\mathbf{w} \in \mathbb{R}^d</math>, a conditional log-linear model defines a distribution over labels as:

<math>p(y \mid x; \mathbf{w}) = \frac{1}{Z_x} e^{\mathbf{w} \cdot \phi(x, y)}</math>

where <math>Z_x = \sum_{y'} e^{\mathbf{w} \cdot \phi(x, y')}</math> is a partition function.
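To make the model concrete, here is a minimal Python sketch (an illustration, not code from the paper) that computes this distribution for one input; the `feats` array, holding <math>\phi(x, y)</math> for each candidate label, is an assumed input:

<pre>
import numpy as np

def log_linear_probs(w, feats):
    """Return p(y | x; w) for every candidate label y.

    feats: array of shape (num_labels, d), where row y holds phi(x, y).
    """
    scores = feats @ w          # w . phi(x, y) for each label y
    scores -= scores.max()      # subtract the max for numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()    # the normalizer is the partition function Z_x
</pre>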
The problem of learning <math>\mathbf{w}</math> from the training data <math>\{(x_i, y_i)\}_{i=1}^n</math> is thus finding the <math>\mathbf{w}^*</math> which maximizes the regularized log-likelihood:

<math>\mathbf{w}^* = \arg\max_{\mathbf{w}} \sum_{i=1}^n \log p(y_i \mid x_i; \mathbf{w}) - \frac{C}{2}\|\mathbf{w}\|^2</math>
where <math>C > 0</math> is the regularization parameter. The above problem has a convex dual which is derived in Lebanon and Lafferty (2001). With dual variables <math>\alpha_{i,y}</math> for every example <math>i</math> and label <math>y</math>, and <math>\psi_{i,y} = \phi(x_i, y_i) - \phi(x_i, y)</math>, we define:

<math>Q(\bar{\alpha}) = \sum_{i,y} \alpha_{i,y} \log \alpha_{i,y} + \frac{C}{2}\|\mathbf{w}(\bar{\alpha})\|^2</math>

where

<math>\mathbf{w}(\bar{\alpha}) = \frac{1}{C} \sum_{i,y} \alpha_{i,y} \psi_{i,y}</math>

The dual problem is thus

<math>\bar{\alpha}^* = \arg\min_{\bar{\alpha}} Q(\bar{\alpha}) \quad \text{subject to } \alpha_{i,y} \ge 0, \ \textstyle\sum_y \alpha_{i,y} = 1</math>

and the primal solution is recovered as <math>\mathbf{w}^* = \mathbf{w}(\bar{\alpha}^*)</math>.
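The dual quantities can be sketched in the same style. This is again an unofficial illustration, assuming a dense array layout with `feats[i, y]` holding <math>\phi(x_i, y)</math> and `gold[i]` the index of the correct label <math>y_i</math>:

<pre>
import numpy as np

def dual_weights(alpha, feats, gold, C):
    """w(alpha) = (1/C) * sum_{i,y} alpha[i,y] * psi[i,y].

    alpha: (n, num_labels), each row a distribution over labels.
    feats: (n, num_labels, d), feats[i, y] = phi(x_i, y).
    gold:  (n,) integer index of the correct label y_i.
    """
    n = alpha.shape[0]
    # psi[i, y] = phi(x_i, y_i) - phi(x_i, y)
    psi = feats[np.arange(n), gold][:, None, :] - feats
    return np.einsum('iy,iyd->d', alpha, psi) / C

def dual_objective(alpha, feats, gold, C):
    """Q(alpha) = sum_{i,y} alpha[i,y] log alpha[i,y] + (C/2) ||w(alpha)||^2."""
    w = dual_weights(alpha, feats, gold, C)
    neg_entropy = np.sum(alpha * np.log(alpha + 1e-300))  # guard against log(0)
    return neg_entropy + 0.5 * C * np.dot(w, w)
</pre>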
EG Algorithm
Given a set of distributions <math>\bar{\alpha} = \{\alpha_1, \ldots, \alpha_n\}</math>, the update equations are

<math>\alpha'_{i,y} = \frac{\alpha_{i,y}\, e^{-\eta \nabla_{i,y}}}{\sum_{\hat{y}} \alpha_{i,\hat{y}}\, e^{-\eta \nabla_{i,\hat{y}}}}</math>

where <math>\eta > 0</math> is a learning rate and

<math>\nabla_{i,y} = \frac{\partial Q(\bar{\alpha})}{\partial \alpha_{i,y}} = 1 + \log \alpha_{i,y} + \mathbf{w}(\bar{\alpha}) \cdot \psi_{i,y}</math>
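A minimal sketch of one EG update follows, reusing the hypothetical `dual_weights` helper from the previous block; passing all row indices in `idxs` gives the batch variant below, a single index the online variant:

<pre>
import numpy as np

def eg_update(alpha, feats, gold, C, eta, idxs):
    """One EG step on the dual distributions indexed by idxs.

    Assumes alpha is strictly positive (e.g. initialized uniform).
    Rows not listed in idxs are returned unchanged.
    """
    n = alpha.shape[0]
    w = dual_weights(alpha, feats, gold, C)
    psi = feats[np.arange(n), gold][:, None, :] - feats
    grad = 1.0 + np.log(alpha) + psi @ w   # gradient of Q w.r.t. alpha[i, y]
    new_alpha = alpha.copy()
    scaled = alpha[idxs] * np.exp(-eta * grad[idxs])
    new_alpha[idxs] = scaled / scaled.sum(axis=1, keepdims=True)
    return new_alpha
</pre>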
Batch learning
At each iteration, <math>\bar{\alpha}</math> is updated simultaneously using all (or a subset of) the available training instances.
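For example, a batch training loop might look like this (assuming the sketches above and pre-built `feats`, `gold`, `C`, `eta`):

<pre>
import numpy as np

n, num_labels = feats.shape[0], feats.shape[1]
alpha = np.full((n, num_labels), 1.0 / num_labels)  # start from uniform distributions

# batch EG: every alpha_i is updated in each iteration
for t in range(100):
    alpha = eg_update(alpha, feats, gold, C, eta, idxs=np.arange(n))
</pre>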
Online learning
At each iteration, we choose a single training instance <math>i</math> and update only the corresponding <math>\alpha'_i</math>.
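Under the same assumptions, the online variant touches a single randomly chosen row of <math>\bar{\alpha}</math> per iteration:

<pre>
rng = np.random.default_rng(0)

# online EG: a single randomly chosen alpha_i is updated per iteration
for t in range(100 * n):
    i = rng.integers(n)
    alpha = eg_update(alpha, feats, gold, C, eta, idxs=np.array([i]))
</pre>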
Convergence rate of batch algorithm
To get within <math>\epsilon</math> of the optimum parameters, we need <math>O(\frac{1}{\eta\epsilon})</math> iterations.
Experimental Results

The authors compared the performance of the EG algorithm to conjugate-gradient and L-BFGS methods.
Multiclass classification
The authors used a subset of the MNIST handwritten digit dataset for multiclass classification.

[[File:multiclass.png]]
Related papers
The EG updates at the core of this paper were introduced in Kivinen & Warmuth (1997), Exponentiated gradient versus gradient descent for linear predictors. Sha & Pereira (2003), Shallow parsing with conditional random fields, describes the conjugate-gradient and L-BFGS training of CRFs used as baselines here, and Lebanon & Lafferty (2001), Boosting and maximum likelihood for exponential models, derives the convex dual that the EG algorithm optimizes.

An interesting follow-up is the journal version, Collins et al., Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks, JMLR 2008, which extends these results to max-margin structured prediction.