{{MyCiteconference | booktitle = Proc. EMNLP| coauthors = Klein, D. and Collins, M. and Koller, D. and Manning, C.| date = 2004| first = Ben| last = Taskar| pages = 305-312| title = Max-margin parsing| url = http://acl.ldc.upenn.edu/acl2004/emnlp/pdf/Taskar.pdf }}
 
This [[Category::Paper]] is available online [http://acl.ldc.upenn.edu/acl2004/emnlp/pdf/Taskar.pdf].
  
 
== Summary ==
 
  
This paper presents a novel approach to [[AddressesProblem::Parsing]] that maximizes separating margins using [[UsesMethod::Support Vector Machines]]. The authors show how the parsing problem can be reformulated as a discriminative task, which allows an arbitrary number of features to be used. This formulation also lets them incorporate a loss function that directly penalizes incorrect parse trees.
 
 
 
  
 
== Brief description of the method ==
 
  
Instead of a probabilistic interpretation of parse trees, we seek a weight vector <math>\mathbf{w}</math> such that
  
<math>y_i=\arg\max_{y\in\mathbf{G}(x_i)} \langle\mathbf{w}, \Phi(x_i,y)\rangle</math>
  
for all sentences <math>x_i</math> in the training data, where <math>y_i</math> is the correct parse tree and <math>\mathbf{G}(x_i)</math> is the set of possible parses of <math>x_i</math>.
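
To make this decoding rule concrete, here is a minimal sketch, assuming hypothetical <code>candidate_parses</code> and <code>phi</code> helpers (an enumeration of <math>\mathbf{G}(x)</math> and the feature map, neither of which comes from the paper):

<pre>
# A minimal sketch of the decoding rule above. `candidate_parses` and `phi`
# are hypothetical stand-ins for an enumeration of G(x) and the feature map.
import numpy as np

def decode(w, x, candidate_parses, phi):
    """Return the y in G(x) that maximizes <w, Phi(x, y)>."""
    best_y, best_score = None, float("-inf")
    for y in candidate_parses(x):            # G(x): candidate parse trees of x
        score = float(np.dot(w, phi(x, y)))  # linear score <w, Phi(x, y)>
        if score > best_score:
            best_y, best_score = y, score
    return best_y
</pre>

In practice the argmax is computed with dynamic programming rather than by explicit enumeration; the sketch only illustrates the scoring rule.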
  
Formulating it as an optimization problem,
  
<math>\max_{\gamma,\mathbf{w}}\ \gamma \quad \text{s.t.}\quad \langle\mathbf{w}, \Phi_{i,y_i}-\Phi_{i,y}\rangle \geq \gamma L_{i,y}\quad \forall i,\ \forall y\in\mathbf{G}(x_i); \quad \lVert\mathbf{w}\rVert^2 \leq 1</math>
  
As in SVMs, we can derive the dual of the above program:
  
<math>\max\ C\sum_{i,y} \alpha_{i,y}L_{i,y}-\dfrac{1}{2} \left\lVert C\sum_{i,y} (I_{i,y}-\alpha_{i,y})\Phi_{i,y}\right\rVert^2</math>
 
  
s.t. <math>\sum_y \alpha_{i,y} =1,\ \forall i; \quad \alpha_{i,y}\geq 0,\ \forall i,y</math>
where <math>I_{i,y}</math> indicates whether <math>y</math> is the true parse of sentence <math>i</math>, and <math>\Phi_{i,y}</math> is shorthand for <math>\Phi(x_i,y)</math>.
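
As an illustration only (not the paper's implementation), the dual objective for a single sentence can be evaluated as follows, assuming an explicit enumeration of <math>K</math> candidate parses with loss vector <code>L</code>, feature matrix <code>Phi</code>, and gold-parse indicator <code>I</code>:

<pre>
# Toy evaluation of the dual objective for one sentence. alpha, L, I are
# length-K arrays over K candidate parses; Phi is a K x d feature matrix.
# All names and shapes are illustrative assumptions, not the paper's code.
import numpy as np

def dual_objective(alpha, L, Phi, I, C):
    """C * sum_y alpha_y * L_y - 0.5 * || C * sum_y (I_y - alpha_y) * Phi_y ||^2"""
    assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)  # alpha lies on the simplex
    w = C * ((I - alpha)[:, None] * Phi).sum(axis=0)            # the vector inside the norm
    return C * np.dot(alpha, L) - 0.5 * np.dot(w, w)
</pre>

Maximizing this over the simplex-constrained <math>\alpha</math> would normally be done with a QP solver; the snippet only checks the constraints and evaluates the objective.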
  
For each sentence, the number of possible parse trees is exponential in the sentence length, so the dual variables cannot be enumerated directly. However, we can exploit local substructure, as in the [[UsesMethod::CYK Parsing | chart parsing dynamic programming algorithm]], and factor trees into parts of the form <math>\langle A,s,e,i\rangle</math> (a constituent) and <math>\langle A\rightarrow B\ C,s,m,e,i\rangle</math> (a rule production), where <math>s,m,e,i</math> are the start, split, and end points and the sentence number, respectively.
  
The feature vector therefore decomposes over parts:
  
<math>\Phi(x,y)=\sum_{r\in R(x,y)}\phi(x,r)</math>
  
where <math>R(x,y)</math> is the set of parts that make up the parse <math>y</math> of sentence <math>x</math>. <math>\phi</math> can be any function that maps a part to a feature vector. The loss function can be decomposed into a sum over parts in the same way; in the paper, the loss used is the number of constituent errors made in a parse.
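
A minimal sketch of this part-wise decomposition, representing a parse as a set of constituent parts <math>(A,s,e)</math> and using a hypothetical <code>part_feature</code> map (the actual feature templates are described in the paper):

<pre>
# Sketch of the part-wise decomposition. A parse is represented as a set of
# constituent parts (A, s, e); `part_feature` is a hypothetical map from a
# single part to a feature vector of dimension dim.
import numpy as np

def tree_features(x, parts, part_feature, dim):
    """Phi(x, y) = sum over parts r in R(x, y) of phi(x, r)."""
    return sum((part_feature(x, r) for r in parts), np.zeros(dim))

def constituent_loss(parts, gold_parts):
    """A loss that decomposes over parts: count constituents absent from the gold parse."""
    return len(set(parts) - set(gold_parts))
</pre>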
  
By working with parts, the factored dual objective can be expressed with a polynomial number of variables; in fact, the number of part variables is cubic in the length of the sentence.
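
A rough count of the part variables for a single sentence of length <math>n</math>, assuming (hypothetically) a grammar with <code>num_nt</code> nonterminals and <code>num_rules</code> binary rules:

<pre>
# Rough count of part variables for one sentence of length n. The grammar
# sizes num_nt and num_rules are hypothetical parameters used to show the
# asymptotics: O(n^2) constituent parts and O(n^3) production parts.
def num_parts(n, num_nt, num_rules):
    spans = n * (n + 1) // 2                 # constituent spans (s, e)
    splits = sum(e - s - 1                   # production triples (s, m, e)
                 for s in range(n)
                 for e in range(s + 2, n + 1))
    return num_nt * spans + num_rules * splits
</pre>

This count grows cubically with <math>n</math>, in contrast to the exponential number of whole parse trees.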
  
== Results ==
  
Experiments on the [[UsesDataset::Penn Treebank]] dataset show that the model with lexical features achieves a 0.43 F1 improvement over the Collins 1999 parser.
  
== Related Papers ==
 
 
[[RelatedPaper::McDonald_et_al,_ACL_2005:_Non-projective_dependency_parsing_using_spanning_tree_algorithms]] Margin learning for dependency parsing.
 
 
[[RelatedPaper::Tsochantaridis,_Joachims_,_Support_vector_machine_learning_for_interdependent_and_structured_output_spaces_2004]] Using SVMs for structured output spaces.
