Koo and Collins ACL 2010

Citation

Koo, T. and Collins, M. 2010. Efficient Third-Order Dependency Parsers. In Proceedings of ACL, pp. 1-11. Association for Computational Linguistics.

Online version

ACL

Summary

This paper presents a higher-order dependency parser that can evaluate substructures containing three dependencies, using both sibling-style and grandchild-style interactions. The algorithms presented require only <math>O(n^4)</math> time and were evaluated on the Penn Treebank and the Prague Dependency Treebank. The implementation code was publicly released [http://groups.csail.mit.edu/nlp/dpo3/].

Dependency parsing is defined as a search for the highest-scoring analysis of a sentence <math>x</math>:

<math>y^*(x) = \underset{y \in Y(x)}{\operatorname{argmax}} \; \textrm{Score}(x,y)</math>

where <math>Y(x)</math> is the set of all trees compatible with <math>x</math> and <math>\textrm{Score}(x,y)</math> evaluates the event that tree <math>y</math> is the analysis of <math>x</math>. Directly solving this equation is infeasible because the number of possible trees grows exponentially with the length of the sentence. A common strategy is to factor each dependency tree into small parts, which can be scored individually; then:

<math>\textrm{Score}(x,y) = \sum_{p \in y}\textrm{ScorePart}(x,p)</math>
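
As an illustration of this factorization, the following is a minimal Python sketch (not the paper's implementation) that scores a dependency tree as a sum of part scores. The head-array representation, the dictionary-based part scorers, and the restriction to arc and grandchild parts are assumptions made for the example; the paper's third-order models also use sibling-style parts and learned feature weights rather than fixed scores.

from typing import Dict, List, Tuple

def score_tree(heads: List[int],
               arc_score: Dict[Tuple[int, int], float],
               grand_score: Dict[Tuple[int, int, int], float]) -> float:
    # Score(x, y) = sum over parts p in y of ScorePart(x, p).
    # heads[m] is the head of word m; index 0 is the artificial root.
    total = 0.0
    for m in range(1, len(heads)):
        h = heads[m]
        total += arc_score.get((h, m), 0.0)       # first-order arc part (h, m)
        if h != 0:                                # grandchild part (g, h, m)
            g = heads[h]
            total += grand_score.get((g, h, m), 0.0)
    return total

# Toy usage: word 2 heads words 1 and 3 and attaches to the root.
heads = [0, 2, 0, 2]
arcs = {(2, 1): 1.5, (0, 2): 2.0, (2, 3): 0.5}
grands = {(0, 2, 1): 0.3, (0, 2, 3): 0.2}
print(score_tree(heads, arcs, grands))            # 4.5

Because each part is scored in isolation, the argmax over trees can be carried out by dynamic programming over parts rather than by enumerating whole trees, which is what keeps the parsers in the paper within <math>O(n^4)</math> time.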

Experimental results

Bla bla.

Related papers

Bla bla.