Chun-Nam John Yu and Thorsten Joachims, Learning structural SVMs with latent variables, 2009
Citation
Chun-Nam John Yu and Thorsten Joachims. Learning structural SVMs with latent variables. In Proceedings of the 26th International Conference on Machine Learning, Montréal, Québec, Canada, 2009.
Online version
Summary
In this paper the authors discuss the use of latent variables in the structural SVM. The paper identifies a formulation for which there exists an efficient algorithm to find a local optimum using concave-convex (CCCP) optimization techniques. The paper argues that this is the first time latent variables are being used in large-margin classifiers. Experiments were then performed in various domains of computational biology, IR, and NLP to demonstrate the generality of the proposed method.
Method Used
This paper extends the structural SVM formulation of Tsochantaridis et al. to include a latent variable.
Consider a set of structured input-output pairs
<math>S = \{(x_1, y_1), \ldots, (x_n, y_n)\} \in (\mathcal{X} \times \mathcal{Y})^n</math>
The prediction rule is
<math>f_w(x) = \operatorname{argmax}_{y \in \mathcal{Y}} \; w \cdot G(x, y)</math>
where <math>G</math> is the joint feature vector that describes the relation between the input and the output. This paper introduces an extra latent variable <math>h</math>, so the prediction rule becomes
<math>f_w(x) = \operatorname{argmax}_{(y,h) \in \mathcal{Y} \times \mathcal{H}} \; w \cdot G(x, y, h)</math>
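As a concrete illustration (not from the paper), here is a minimal Python sketch of this prediction rule. It assumes small, finite label and latent spaces that can be enumerated and a user-supplied joint feature map; the names <code>predict</code> and <code>G</code> are hypothetical, and real applications would use a problem-specific argmax solver rather than brute force.
<pre>
import numpy as np

# Minimal sketch of the latent prediction rule: argmax over (y, h) of
# w . G(x, y, h), by brute-force enumeration of finite Y and H.
def predict(w, x, Y, H, G):
    best_score, best_pair = -np.inf, None
    for y in Y:
        for h in H:
            score = np.dot(w, G(x, y, h))
            if score > best_score:
                best_score, best_pair = score, (y, h)
    y_hat, h_hat = best_pair
    # The prediction is the label y_hat; the latent variable h_hat is
    # maximized over jointly but is not itself part of the output.
    return y_hat
</pre>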
Similarly, extending the loss function to include the latent variable gives:
<math>\Delta\big( (y_i, h_i^*(w)),\, (\hat{y}_i(w), \hat{h}_i(w)) \big)</math>
where
<math>(\hat{y}_i(w), \hat{h}_i(w)) = \operatorname{argmax}_{(y,h) \in \mathcal{Y} \times \mathcal{H}} \; w \cdot G(x_i, y, h)</math>
is the pair given by the prediction rule, and
<math>h_i^*(w) = \operatorname{argmax}_{h \in \mathcal{H}} \; w \cdot G(x_i, y_i, h)</math>
is the latent variable that best explains the labeled training pair <math>(x_i, y_i)</math> under the current model. The loss thus compares the pair given by the prediction rule against the label together with the latent variable that explains it.
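For a finite latent space, the second argmax above can be written down directly. The sketch below (hypothetical names, same assumptions as the previous snippet) computes the explanatory latent variable <math>h_i^*(w)</math> for one labeled example; the predicted pair is computed exactly as in the prediction sketch above.
<pre>
import numpy as np

# Hedged sketch: h* = argmax over h in H of w . G(x, y, h), i.e. the
# latent variable that best explains the labeled pair (x, y).
def explanatory_latent(w, x, y, H, G):
    return max(H, key=lambda h: np.dot(w, G(x, y, h)))
</pre>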
As in the case of the structural SVM, we can derive an upper bound on this loss by maximizing over <math>y</math> and <math>h</math>. The paper further assumes that, for the tasks under consideration, the loss function does not depend on the imputed latent variable <math>h_i^*(w)</math>. The final objective function then comes out as a difference of two convex functions:
<math>\min_w \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \max_{(\hat{y},\hat{h}) \in \mathcal{Y} \times \mathcal{H}} \big[ w \cdot G(x_i, \hat{y}, \hat{h}) + \Delta(y_i, \hat{y}, \hat{h}) \big] \;-\; C \sum_{i=1}^{n} \max_{h \in \mathcal{H}} \; w \cdot G(x_i, y_i, h)</math>
which the concave-convex procedure (CCCP) can drive to a local optimum.
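To make the optimization concrete, here is a hedged Python sketch of the resulting alternation (the CCCP-style algorithm the paper describes): impute the latent variables under the current model, then solve the standard structural SVM that results. The helper <code>solve_structural_svm</code> and all other names are illustrative assumptions, not the authors' code.
<pre>
import numpy as np

# Sketch of CCCP-style training for the latent structural SVM.
def latent_ssvm_train(X, Y_labels, H, G, solve_structural_svm,
                      w_init, n_iters=20):
    w = w_init
    for _ in range(n_iters):
        # Concave part: impute h_i* = argmax_h w . G(x_i, y_i, h)
        # for every training example under the current w.
        H_star = [max(H, key=lambda h: np.dot(w, G(x, y, h)))
                  for x, y in zip(X, Y_labels)]
        # Convex part: with the latent variables fixed, the problem
        # reduces to a standard structural SVM over the augmented
        # examples (x_i, y_i, h_i*), solved by any off-the-shelf
        # solver (hypothetical helper here).
        w = solve_structural_svm(X, Y_labels, H_star, G)
    return w
</pre>
Each iteration can only decrease the objective above, so the procedure converges to a local rather than a global optimum.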