# Globerson et al., ICML 2007: Exponentiated Gradient Algorithms for Log-Linear Structured Prediction

Exponentiated gradient algorithms for log-linear structured prediction, by A. Globerson, T. Koo, X. Carreras, and M. Collins. In Proceedings of the 24th International Conference on Machine Learning (ICML), 2007.

This paper is available online [1].

## Summary

This paper describes an exponentiated gradient (EG) algorithm for training conditional log-linear models, which underlie several key structured prediction tasks such as named entity recognition, part-of-speech tagging, and parsing. The authors propose a fast, efficient algorithm for optimizing such models, including CRFs.

The common practice for optimizing the conditional log-likelihood of a CRF is to use conjugate-gradient or L-BFGS algorithms (Sha & Pereira, 2003), which typically require at least one pass through the entire dataset before updating the weight vector. The EG algorithm described in the paper is online: the weight vector can be updated as more training data is seen. This is a useful property when the size of the training data is not known in advance.

## Brief description of the method

Consider a supervised learning setting with objects ${\displaystyle x\in {\mathcal {X}}}$ and corresponding labels ${\displaystyle y\in {\mathcal {Y}}}$, which may be trees, sequences, or other high-dimensional structures. Assume also that we are given a function ${\displaystyle \phi (x,y)}$ that maps ${\displaystyle (x,y)}$ pairs to feature vectors in ${\displaystyle {\mathcal {R}}^{d}}$. Given a parameter vector ${\displaystyle \mathbf {w} \in {\mathcal {R}}^{d}}$, a conditional log-linear model defines a distribution over labels as:

${\displaystyle p(y|x;\mathbf {w} )={\frac {1}{Z_{x}}}\exp \left(\mathbf {w} \cdot \phi (x,y)\right)}$

where ${\displaystyle Z_{x}=\sum _{y}\exp \left(\mathbf {w} \cdot \phi (x,y)\right)}$ is the partition function.
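As a concrete sketch of this definition (plain Python; `conditional_prob` and its arguments are illustrative names, not from the paper), the distribution can be computed by scoring each candidate label and normalizing:

```python
import math

def conditional_prob(w, phi, x, labels):
    """p(y|x; w) = exp(w . phi(x, y)) / Z_x, where Z_x sums over all labels.

    w is a weight vector (list of floats); phi(x, y) returns a feature vector.
    """
    def score(y):
        return sum(wj * fj for wj, fj in zip(w, phi(x, y)))

    scores = {y: score(y) for y in labels}
    # subtract the max score before exponentiating for numerical stability
    m = max(scores.values())
    exps = {y: math.exp(s - m) for y, s in scores.items()}
    Z = sum(exps.values())
    return {y: e / Z for y, e in exps.items()}
```

Subtracting the maximum score avoids overflow; for structured label sets such as parse trees, ${\displaystyle Z_{x}}$ is computed with dynamic programming rather than by explicit enumeration as done here.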

The problem of learning ${\displaystyle \mathbf {w} }$ from the training data is thus finding ${\displaystyle \mathbf {w} }$ which maximizes the regularized log-likelihood:

${\displaystyle \mathbf {w} ^{*}=\arg \max _{w}\sum _{i}\log p(y_{i}|x_{i};\mathbf {w} )-{\frac {C}{2}}\lVert \mathbf {w} \rVert ^{2}}$

where ${\displaystyle C}$ is the regularization parameter. The above problem has a convex dual, derived in Lebanon and Lafferty (NIPS 2001). With dual variables ${\displaystyle \alpha _{i,y}}$ for each training example ${\displaystyle i}$ and label ${\displaystyle y}$, and ${\displaystyle \mathbf {\alpha } =[\mathbf {\alpha } _{1},\mathbf {\alpha } _{2},\cdots ,\mathbf {\alpha } _{n}]}$, we define:

${\displaystyle Q(\mathbf {\alpha } )=\sum _{i}\sum _{y}\alpha _{i,y}\log \alpha _{i,y}+{\frac {1}{2C}}\lVert \mathbf {w} (\alpha )\rVert ^{2}}$

where ${\displaystyle \mathbf {w} (\alpha )=\sum _{i}\sum _{y}\alpha _{i,y}\left(\phi (x_{i},y_{i})-\phi (x_{i},y)\right)}$

The dual problem is then:

${\displaystyle \alpha ^{*}=\arg \min _{\alpha \in \Delta ^{n}}Q(\alpha )}$

where ${\displaystyle \Delta ^{n}}$ denotes the set of ${\displaystyle n}$ distributions over labels, one per training example.
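The dual objective can be sketched directly from these definitions (plain Python over explicit label enumerations; all names are illustrative assumptions, not the paper's code):

```python
import math

def dual_w(alpha, phis, gold):
    """w(alpha) = sum_i sum_y alpha_{i,y} * (phi(x_i, y_i) - phi(x_i, y)).

    phis[i][y] is the feature vector phi(x_i, y); gold[i] is the label y_i.
    """
    d = len(phis[0][gold[0]])
    w = [0.0] * d
    for i, a_i in enumerate(alpha):
        g = phis[i][gold[i]]
        for y, a in a_i.items():
            f = phis[i][y]
            for j in range(d):
                w[j] += a * (g[j] - f[j])
    return w

def dual_objective(alpha, phis, gold, C):
    """Q(alpha) = sum_i sum_y alpha log alpha + (1/2C) ||w(alpha)||^2."""
    ent = sum(a * math.log(a) for a_i in alpha for a in a_i.values() if a > 0)
    w = dual_w(alpha, phis, gold)
    return ent + sum(wj * wj for wj in w) / (2.0 * C)
```

For structured outputs the sums over ${\displaystyle y}$ would again be handled with dynamic programming; the explicit enumeration here only works for small label sets.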

## EG Algorithm

Given a set of distributions ${\displaystyle \alpha \in \Delta ^{n}}$, the EG algorithm performs the updates

${\displaystyle \alpha _{i,y}^{'}={\frac {1}{Z_{i}}}\alpha _{i,y}\exp(-\eta \nabla _{i,y})}$

where

${\displaystyle Z_{i}=\sum _{\hat {y}}\alpha _{i,{\hat {y}}}\exp(-\eta \nabla _{i,{\hat {y}}})}$

and

${\displaystyle \nabla _{i,y}={\frac {\partial Q(\alpha )}{\partial \alpha _{i,y}}}=1+\log \alpha _{i,y}+{\frac {1}{C}}\mathbf {w} (\alpha )\cdot \left(\phi (x_{i},y_{i})-\phi (x_{i},y)\right)}$
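A single EG step on one distribution ${\displaystyle \alpha _{i}}$ is a multiplicative update followed by renormalization. A minimal sketch (illustrative names, gradients supplied by the caller):

```python
import math

def eg_update(alpha_i, grad_i, eta):
    """One exponentiated-gradient step on the distribution alpha_i:
    alpha'_{i,y} is proportional to alpha_{i,y} * exp(-eta * grad_{i,y}),
    renormalized so the result is again a distribution.
    """
    unnorm = {y: alpha_i[y] * math.exp(-eta * grad_i[y]) for y in alpha_i}
    Z = sum(unnorm.values())
    return {y: v / Z for y, v in unnorm.items()}
```

Because the update is multiplicative and renormalized, each ${\displaystyle \alpha _{i}}$ remains a valid distribution automatically, with no projection step needed.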

### Batch learning

At each iteration, ${\displaystyle \alpha '}$ is updated simultaneously for all (or a subset of) the available training instances.

### Online learning

At each iteration, we choose a single training instance ${\displaystyle i}$ and update only the corresponding ${\displaystyle \alpha _{i}}$.
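Putting the pieces together, a minimal online EG loop might look as follows (plain Python; the data layout and helper names are assumptions for illustration, and ${\displaystyle \mathbf {w} (\alpha )}$ is recomputed from scratch for clarity where an efficient implementation would maintain it incrementally):

```python
import math
import random

def online_eg(phis, gold, C, eta, epochs, seed=0):
    """Online EG sketch: repeatedly pick one instance i and update alpha_i.

    phis[i][y] is the feature vector phi(x_i, y); gold[i] is the label y_i.
    """
    rng = random.Random(seed)
    n = len(phis)
    labels = [list(p.keys()) for p in phis]
    # start from uniform distributions alpha_i
    alpha = [{y: 1.0 / len(ls) for y in ls} for ls in labels]
    d = len(phis[0][gold[0]])

    def w_of_alpha():
        # w(alpha) = sum_i sum_y alpha_{i,y} (phi(x_i, y_i) - phi(x_i, y))
        w = [0.0] * d
        for i in range(n):
            g = phis[i][gold[i]]
            for y, a in alpha[i].items():
                f = phis[i][y]
                for j in range(d):
                    w[j] += a * (g[j] - f[j])
        return w

    for _ in range(epochs * n):
        i = rng.randrange(n)
        w = w_of_alpha()
        g = phis[i][gold[i]]
        grad = {}
        for y in labels[i]:
            diff = [g[j] - phis[i][y][j] for j in range(d)]
            # gradient: 1 + log alpha_{i,y} + (1/C) w(alpha) . diff
            grad[y] = 1.0 + math.log(alpha[i][y]) \
                + sum(wj * dj for wj, dj in zip(w, diff)) / C
        # multiplicative EG step on alpha_i, then renormalize
        unnorm = {y: alpha[i][y] * math.exp(-eta * grad[y]) for y in labels[i]}
        Z = sum(unnorm.values())
        alpha[i] = {y: v / Z for y, v in unnorm.items()}
    return alpha
```

The primal weights can then be recovered as ${\displaystyle \mathbf {w} (\alpha )/C}$ at the optimum; only ${\displaystyle \alpha _{i}}$ changes per step, which is what makes the algorithm online.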

### Convergence rate of batch algorithm

To get within ${\displaystyle \epsilon }$ of the optimal value of the dual objective, the batch algorithm requires ${\displaystyle O({\frac {1}{\eta \epsilon }})}$ iterations.

## Experimental Result

The authors compared the performance of the EG algorithm to conjugate-gradient and L-BFGS methods.

### Multiclass classification

The authors used a subset of the MNIST handwritten digit classification dataset.

In the reported results, the EG algorithm converges considerably faster than the other methods.

### Structured learning (dependency parsing)

The authors used the Slovene data from the CoNLL-X Shared Task on multilingual dependency parsing.

In these experiments, the EG algorithm converges faster both in the objective function and in accuracy measures.

## Related Papers

Bartlett et al. (NIPS 2004) used the EG algorithm for large-margin structured classification.