Koo and Collins ACL 2010
Revision as of 21:23, 25 November 2011
Citation
Koo, T. and Collins, M. 2010. Efficient Third-Order Dependency Parsers. In Proceedings of ACL, pp. 1-11. Association for Computational Linguistics.
Online version
Summary
This paper presents a higher-order dependency parser that can evaluate substructures containing three dependencies, using both sibling-style and grandchild-style interactions. The algorithms presented require only <math>O(n^4)</math> time and were evaluated on the Penn Treebank and the Prague Dependency Treebank. The implementation code was publicly released [1].
Dependency parsing is defined as a search for the highest-scoring analysis of <math>x</math>:

<math>y^*(x) = \textrm{argmax}_{y \in Y(x)} \; \textrm{Score}(x,y)</math>
where <math>Y(x)</math> is the set of all trees compatible with <math>x</math> and <math>\textrm{Score}(x,y)</math> evaluates the event that tree <math>y</math> is the analysis of sentence <math>x</math>. Solving this equation directly is infeasible because the number of possible trees grows exponentially with the length of the sentence. A common strategy is to factor each dependency tree into small parts, which can be scored individually; then:

<math>\textrm{Score}(x,y) = \sum_{p \in y} \textrm{ScorePart}(x,p)</math>
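The factorization above can be sketched in a few lines of Python. This is a minimal illustration of the arc-factored (first-order) case, where each part is a single head-modifier arc; the paper's models use richer third-order parts, and real parsers replace the brute-force search below with dynamic programming. All function names and the toy score table are hypothetical, not from the paper's released code.

```python
from itertools import product

def score_tree(tree, part_scores):
    # Score(x, y) = sum of ScorePart(x, p) over the parts p of tree y.
    # Here `tree` maps each modifier index to its head (0 is the root),
    # and each part is one (head, modifier) arc.
    return sum(part_scores.get((head, mod), 0.0)
               for mod, head in tree.items())

def is_tree(heads):
    # A head assignment is a tree iff every word reaches the root
    # without revisiting a node (i.e., there are no cycles).
    for node in heads:
        seen, cur = set(), node
        while cur != 0:
            if cur in seen:
                return False
            seen.add(cur)
            cur = heads[cur]
    return True

def best_tree(n, part_scores):
    # Brute-force argmax over all head assignments for n words.
    # Exponential in n -- for illustration only.
    best, best_score = None, float("-inf")
    for heads in product(range(n + 1), repeat=n):
        tree = dict(zip(range(1, n + 1), heads))
        if not is_tree(tree):
            continue
        s = score_tree(tree, part_scores)
        if s > best_score:
            best, best_score = tree, s
    return best, best_score
```

For example, with toy arc scores {(0, 1): 2.0, (1, 2): 1.5, (0, 2): 1.0} over a two-word sentence, the argmax is the tree where word 1 attaches to the root and word 2 attaches to word 1.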
Experimental results
Bla bla.
Related papers
Bla bla.