Structured SVMs
Revision as of 15:53, 2 November 2011
Being edited by Rui Correia
The Method and When to Use it
Structured (or Structural) Support Vector Machines (SSVM), as the name suggests, are a machine learning model that generalizes the Support Vector Machine (SVM) classifier, allowing the training of a classifier for structured output.
In general, SSVMs perform supervised learning by approximating a mapping <math>g: X \rightarrow Y</math> from a set of labeled training examples, where <math>Y</math> is a space of complex structured objects, like trees, sequences, or sets, instead of simple univariate predictions (as in the SVM case).
Thus, training an SSVM classifier consists of showing it pairs of correct sample and output label pairs, which are used to learn to predict the corresponding output label for new sample instances.
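As an illustrative sketch (not from the article; the feature map and toy dimensions are invented), prediction with a structured model reduces to an argmax of a linear score over candidate outputs, here for a toy sequence-labeling task:

```python
from itertools import product

import numpy as np

N_LABELS, N_FEATS = 3, 4  # toy sizes, invented for illustration

def joint_features(x, y):
    """Hypothetical joint feature map Psi(x, y): x is a (T, N_FEATS)
    array of token features, y a length-T list of integer labels.
    Psi stacks per-label emission sums and label-transition counts."""
    psi = np.zeros(N_LABELS * N_FEATS + N_LABELS * N_LABELS)
    for t, label in enumerate(y):
        psi[label * N_FEATS:(label + 1) * N_FEATS] += x[t]
    for t in range(1, len(y)):
        psi[N_LABELS * N_FEATS + y[t - 1] * N_LABELS + y[t]] += 1.0
    return psi

def predict(w, x):
    """g(x) = argmax_y w' Psi(x, y), by brute force over all label
    sequences (a real implementation would use Viterbi decoding)."""
    best_y, best_score = None, -np.inf
    for y in product(range(N_LABELS), repeat=len(x)):
        score = w @ joint_features(x, list(y))
        if score > best_score:
            best_y, best_score = list(y), score
    return best_y
```

For example, a weight vector that rewards only label 2's emission block makes `predict` label every all-ones token as 2; brute-force enumeration is exponential in the sequence length, which is why structured models pair the linear score with an efficient decoder.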
In NLP one can find a great variety of problems that rely on complex outputs, such as parsing and Markov models for part-of-speech tagging.
The Algorithm
For a set of training instances <math>(x_n, y_n) \in X \times Y, n=1,\ldots,l</math>, the SSVM minimizes the risk function:

<math>
\min_{w} ||w||^2 + C \sum^l_{n=1} \max_{y \in Y} (\Delta(y_n,y) + w'\Psi(x_n, y) - w' \Psi(x_n,y_n))
</math>

where <math>\Delta(y_n, y)</math> is the loss incurred by predicting <math>y</math> in place of the correct label <math>y_n</math>, and <math>\Psi(x, y)</math> is a joint feature function over inputs and outputs. Since the regularized risk function above is non-differentiable, it is often reformulated as a quadratic program by introducing one slack variable <math>\xi_n</math> for each sample, each representing the value of the maximum. The standard structured SVM primal formulation is given as follows.
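The primal quadratic program referred to here is not present in this revision; reconstructed in the standard margin-rescaling form (as in Tsochantaridis et al., 2004), it reads:

<math>
\min_{w, \xi} ||w||^2 + C \sum^l_{n=1} \xi_n
</math>

<math>
\mbox{s.t. } w'\Psi(x_n, y_n) - w'\Psi(x_n, y) \geq \Delta(y_n, y) - \xi_n, \quad \forall n, \; \forall y \in Y
</math>

Taking the tightest constraint for each <math>n</math> gives <math>\xi_n = \max_{y \in Y}(\Delta(y_n,y) + w'\Psi(x_n,y) - w'\Psi(x_n,y_n))</math>, which recovers the inner maximum of the risk function above.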
Slightly different version of the loss function:
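As a concrete (hypothetical) illustration of the inner maximum in the risk function, here is the per-example structured hinge loss for the simplest structured case, plain multiclass classification, where <math>\Psi(x,y)</math> places <math>x</math> in the block of coordinates for class <math>y</math> and <math>\Delta</math> is the 0/1 loss:

```python
import numpy as np

def psi(x, y, n_labels):
    # Joint feature map for multiclass: copy x into the block for class y
    out = np.zeros(n_labels * len(x))
    out[y * len(x):(y + 1) * len(x)] = x
    return out

def structured_hinge(w, x, y_true, n_labels):
    """max_y [Delta(y_true, y) + w' psi(x, y)] - w' psi(x, y_true),
    the per-example term of the SSVM risk, with 0/1 loss as Delta."""
    base = w @ psi(x, y_true, n_labels)
    augmented = [
        (0.0 if y == y_true else 1.0) + w @ psi(x, y, n_labels)
        for y in range(n_labels)
    ]
    return max(augmented) - base
```

The loss is always non-negative (choosing <math>y = y_n</math> makes the bracket equal the subtracted score) and reaches zero only when the correct class beats every other class by a margin of at least 1.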
Related Papers
- I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support Vector Learning for Interdependent and Structured Output Spaces, ICML, 2004.
- Optimization Algorithms
- Taskar et al. (2003): SMO based on factored dual
- Bartlett et al. (2004) and Collins et al. (2008): exponentiated gradient
- Tsochantaridis et al. (2005): cutting planes (based on dual)
- Taskar et al. (2005): dual extragradient
- Ratliff et al. (2006): (stochastic) subgradient descent
- Crammer et al. (2006): online “passive‐aggressive” algorithms