Chun-Nam John Yu, Joachims, Learning structural SVMs with latent variables 2009


Citation

Chun-Nam John Yu and Thorsten Joachims. Learning structural SVMs with latent variables. In Proceedings of the 26th International Conference on Machine Learning, Montréal, Québec, Canada, 2009.

Online version

[1]

Summary

In this paper the authors discuss the use of latent variables in structural SVMs. The paper identifies a formulation for which there exists an efficient algorithm to find a local optimum using convex-concave optimization techniques. The paper argues that this is the first time latent variables have been used in large-margin classifiers. Experiments were then performed in several domains of computational biology, IR, and NLP to demonstrate the generality of the proposed method.
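The convex-concave optimization the paper relies on alternates between completing the latent variables under the current model and retraining on the completed data. Below is a minimal Python sketch of that alternation, not the paper's implementation: the callables joint_feature, argmax_h, and train_structural_svm are hypothetical placeholders standing in for a real feature map, inference routine, and inner structural SVM solver.

<source lang="python">
import numpy as np

def cccp_latent_ssvm(examples, argmax_h, train_structural_svm, dim, n_iters=10):
    """Alternating (CCCP-style) training sketch for a latent structural SVM.

    examples: list of (x, y) pairs.
    argmax_h(w, x, y): returns the best latent completion h for a labeled
        pair under the current weights (hypothetical inference routine).
    train_structural_svm(completed): trains a standard structural SVM on
        fully observed (x, y, h) triples and returns new weights
        (hypothetical inner solver).
    """
    w = np.zeros(dim)
    for _ in range(n_iters):
        # Step 1: impute the latent variables under the current w,
        # i.e. pick h_i maximizing w . G(x_i, y_i, h).
        completed = [(x, y, argmax_h(w, x, y)) for x, y in examples]
        # Step 2: retrain w on the completed, fully observed problem.
        w = train_structural_svm(completed)
    return w
</source>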

Method Used

This paper extends the structural SVM formulation of Tsochantaridis et al. to include a latent variable.

Consider a set of structured input-output pairs S.

Let

<math> S = \{(x_1,y_1),\ldots,(x_n,y_n)\} \in (X \times Y)^n </math>.

The prediction rule will be

<math> f_w(x) = \arg\max_{y \in Y} \, [w \cdot G(x,y)] </math>

where G is the joint feature vector that describes the relation between input and output. This paper introduces an extra latent variable h, so the prediction rule changes to

<math> f_w(x) = \arg\max_{(y,h) \in Y \times H} \, [w \cdot G(x,y,h)] </math>
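Concretely, the latent prediction rule is a joint argmax over outputs and latent completions, with only the output y returned. Here is a minimal sketch assuming a small enumerable set of (y, h) candidates and a hypothetical feature function G; real structured problems would use a combinatorial inference routine instead of brute-force enumeration.

<source lang="python">
import numpy as np

def predict(w, x, candidates, G):
    """Latent structural SVM prediction: argmax over (y, h) of w . G(x, y, h).

    candidates: iterable of (y, h) pairs to score (assumed enumerable here).
    G(x, y, h): joint feature vector; a hypothetical placeholder,
        not the paper's implementation.
    """
    best_score, best_y = -np.inf, None
    for y, h in candidates:
        score = np.dot(w, G(x, y, h))
        if score > best_score:
            best_score, best_y = score, y
    # h serves only as an auxiliary completion; the prediction is y alone.
    return best_y
</source>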
