McDonald et al, ACL 2005: Non-Projective Dependency Parsing Using Spanning Tree Algorithms


Citation

R. McDonald, F. Pereira, K. Ribarov, J. Hajič. Non-projective dependency parsing using spanning tree algorithms, Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 523-530, Vancouver, October 2005.

Online Version

PDF version

Summary

This paper addresses the problem of non-projective dependency parsing.

Given a sentence $x = x_1 \cdots x_n$, we can construct a directed graph $G_x = (V_x, E_x)$. The vertex set $V_x$ contains one vertex for each word in the sentence, plus a dummy vertex $x_0$ for the "root". The edge set $E_x$ contains all directed edges of the form $(i, j)$, where $0 \le i \le n$, $1 \le j \le n$, and $i \ne j$. Each edge has a score of the form $s(i, j) = \mathbf{w} \cdot \mathbf{f}(i, j)$, where $\mathbf{f}(i, j)$ is a feature vector depending on the words $x_i$ and $x_j$, and $\mathbf{w}$ is a weight vector.
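As a concrete illustration, here is a minimal sketch of how the dense score matrix for $G_x$ could be built. The function name edge_scores and the feature_fn callback are illustrative assumptions, not from the paper:

```python
import numpy as np

def edge_scores(words, feature_fn, w):
    """Dense score matrix for G_x: s[i, j] = w . f(i, j).

    words      the sentence x_1 .. x_n (index 0 is the dummy root)
    feature_fn callback returning the feature vector f(i, j) for the
               edge i -> j; assumed to close over the sentence
    w          the learned weight vector

    The names here are illustrative, not from the paper.
    """
    n = len(words)
    s = np.full((n + 1, n + 1), -np.inf)  # -inf marks non-edges
    for i in range(n + 1):         # heads: the root (0) and every word
        for j in range(1, n + 1):  # dependents: every word, never the root
            if i != j:
                s[i, j] = w @ feature_fn(i, j)
    return s
```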

A dependency parse tree $y$ is a subgraph of $G_x$ which covers all the vertices in $V_x$, and in which each vertex has exactly one predecessor (except for the root vertex, which has no predecessor). A projective dependency parse tree has the additional constraint that each of its subtrees covers a contiguous region of the sentence. In either case, the score of a dependency tree is factored as the sum of the scores of its edges:

$$s(x, y) = \sum_{(i, j) \in y} s(i, j) = \sum_{(i, j) \in y} \mathbf{w} \cdot \mathbf{f}(i, j)$$

The (projective) decoding problem is to find the max-scoring (projective) dependency parse tree $y$ for a given sentence $x$, assuming that the weight vector $\mathbf{w}$ is known. The learning problem is to find a weight vector $\mathbf{w}$ under which the correct parse tree of each training sentence outscores all alternative trees by a margin given by a loss function.
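Written out, with $dt(x)$ denoting the set of candidate dependency trees for $x$ (a notational assumption), decoding is the search problem

$$y^* = \operatorname*{argmax}_{y \in dt(x)} s(x, y) = \operatorname*{argmax}_{y \in dt(x)} \sum_{(i, j) \in y} \mathbf{w} \cdot \mathbf{f}(i, j)$$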

The Decoding Problem

The non-projective decoding problem is equivalent to finding the maximum spanning tree in a directed graph (also called the maximum arborescence). This can be solved using the Chu-Liu-Edmonds algorithm [1].

A naive implementation of the Chu-Liu-Edmonds algorithm has a time complexity of $O(n^3)$. In 1977, Robert Tarjan gave an implementation with $O(m \log n)$ complexity for sparse graphs and $O(n^2)$ complexity for dense graphs [2], the latter of which is used by this paper. In 1986, Gabow, Galil, Spencer, and Tarjan gave an even faster implementation with a complexity of $O(n \log n + m)$ [3].
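The following is a minimal sketch of the naive recursive (contract-and-expand) variant of Chu-Liu-Edmonds, assuming a dense score dict keyed by (head, dependent) pairs; it is written for clarity, not for the $O(n^2)$ bound of Tarjan's implementation:

```python
def find_cycle(parent):
    """Return the set of nodes on a cycle in the parent map, or None."""
    for start in parent:
        seen, v = {start}, start
        while v in parent:
            v = parent[v]
            if v == start:
                return seen          # walked back to the start: a cycle
            if v in seen:
                break                # ran into a cycle start is not on
            seen.add(v)
    return None

def chu_liu_edmonds(nodes, score, root=0):
    """Maximum spanning arborescence rooted at `root` (naive recursion).

    nodes: all vertex ids, including the root.
    score: dict mapping (head, dependent) -> edge score (dense).
    Returns a dict mapping every non-root vertex to its head.
    """
    # 1. Greedily give every non-root vertex its best incoming edge.
    parent = {v: max((u for u in nodes if u != v and (u, v) in score),
                     key=lambda u: score[(u, v)])
              for v in nodes if v != root}
    cycle = find_cycle(parent)
    if cycle is None:
        return parent                # the greedy choice is already a tree

    # 2. Contract the cycle into a fresh super-node c, adjusting each
    #    entering edge's score by the cycle edge it would break.
    c = max(nodes) + 1
    new_nodes = [v for v in nodes if v not in cycle] + [c]
    new_score, enter, leave = {}, {}, {}
    for (u, v), s in score.items():
        if u in cycle and v not in cycle:
            if new_score.get((c, v), float("-inf")) < s:
                new_score[(c, v)] = s
                leave[v] = u         # best cycle vertex to head v
        elif u not in cycle and v in cycle:
            adj = s - score[(parent[v], v)]
            if new_score.get((u, c), float("-inf")) < adj:
                new_score[(u, c)] = adj
                enter[u] = v         # cycle vertex that u would head
        elif u not in cycle and v not in cycle:
            new_score[(u, v)] = s

    # 3. Solve the contracted problem, then expand the cycle: keep every
    #    cycle edge except the one displaced by the chosen entering edge.
    sub = chu_liu_edmonds(new_nodes, new_score, root)
    result = {v: p for v, p in sub.items() if v != c and p != c}
    for v in cycle:
        result[v] = parent[v]
    result[enter[sub[c]]] = sub[c]   # entering edge replaces one cycle edge
    for v, p in sub.items():
        if p == c:
            result[v] = leave[v]     # edges out of c map back to cycle vertices
    return result
```

Running this on the score matrix from the sketch above (converted to a dict over (head, dependent) pairs) returns the head of every word in the highest-scoring non-projective parse.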

A major advantage of the maximum spanning tree solution over previous approaches is its uniformity and simplicity. Previous algorithms for non-projective dependency parsing were modifications of the Eisner algorithm (a dynamic programming algorithm with $O(n^3)$ complexity), and often involved approximations. In contrast, the maximum spanning tree solution searches the entire space of dependency parse trees exactly, and it reveals that, with edge-factored scores, non-projective dependency parsing is actually easier than projective dependency parsing ($O(n^2)$ vs. $O(n^3)$).

The Learning Problem

An online large-margin learning algorithm, called MIRA, is used to train the weight vector $\mathbf{w}$. The algorithm makes multiple passes through the training corpus, and for each training example $(x_t, y_t)$ it updates the weight vector so that the score of $y_t$ and the score of any other parse tree $y'$ are separated by a margin of at least the loss $L(y_t, y')$, defined as the number of words that have different parents in the two trees. The weight vectors after each update are averaged to yield the final weight vector.

[Figure: the MIRA learning algorithm]
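The update in step 4 of the figure is a quadratic program; reconstructed in LaTeX from the paper's formulation (with $dt(x_t)$ the set of dependency trees for $x_t$, and scores computed under the new weight vector):

$$\mathbf{w}^{(i+1)} = \operatorname*{argmin}_{\mathbf{w}} \left\| \mathbf{w} - \mathbf{w}^{(i)} \right\| \quad \text{s.t.} \quad s(x_t, y_t) - s(x_t, y') \ge L(y_t, y') \quad \forall y' \in dt(x_t)$$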

Step 4 in the MIRA algorithm is an optimization problem with exponentially many constraints. In order to make the optimization tractable, the paper proposes two solutions. The first is called single-best MIRA, where only one constraint corresponding to the max-scoring parse tree is considered:

[Figure: the single-best MIRA update]
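Reconstructed from the paper, the single constraint uses the current best-scoring tree:

$$\min \left\| \mathbf{w}^{(i+1)} - \mathbf{w}^{(i)} \right\| \quad \text{s.t.} \quad s(x_t, y_t) - s(x_t, y') \ge L(y_t, y'), \quad \text{where } y' = \operatorname*{argmax}_{y} s(x_t, y)$$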

The second is called factored MIRA, where the constraints are factored down to the edges:

[Figure: the factored MIRA update]
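Reconstructed from the paper, each correct edge must outscore every incorrect edge with the same dependent by at least 1:

$$\min \left\| \mathbf{w}^{(i+1)} - \mathbf{w}^{(i)} \right\| \quad \text{s.t.} \quad s(l, j) - s(k, j) \ge 1 \quad \forall (l, j) \in y_t,\ (k, j) \notin y_t$$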

resulting in $O(n^2)$ constraints.

Both variations are easy to implement. The factored MIRA is more restrictive than the original problem, and may therefore rule out the optimal parse tree.

Experiments

Dataset

The experiments use two datasets: the Prague Dependency Treebank and the Penn Treebank. The former is a corpus of dependency parse trees for Czech, a language in which non-projectivity is far more common than in English. The entire corpus is referred to as Czech-A, and its subset of sentences with non-projective parse trees (23% of all sentences) is referred to as Czech-B. The Penn Treebank is used to evaluate the performance of the non-projective dependency parsing algorithm on a predominantly projective language (English).

Criteria

  • Accuracy: Percentage of words whose parent is correctly identified.
  • Complete: Percentage of sentences whose dependency parse tree is reconstructed exactly (see the sketch below).
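A minimal sketch of how these two numbers could be computed, assuming parses are stored as lists of parent indices; the helper name evaluate is hypothetical, not from the paper's experimental code:

```python
def evaluate(gold_corpus, pred_corpus):
    """Unlabeled accuracy and complete-match rate.

    Each corpus is a list of sentences; each sentence is a list of
    parent indices, one per word. A hypothetical helper, not from
    the paper's experimental code.
    """
    words = correct = complete = 0
    for gold, pred in zip(gold_corpus, pred_corpus):
        matches = sum(g == p for g, p in zip(gold, pred))
        words += len(gold)
        correct += matches
        complete += (matches == len(gold))
    return correct / words, complete / len(gold_corpus)
```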

Involved Systems

Previous systems:

  • COLL1999: The projective lexicalized phrase-structure parser of Collins et al. (1999).
  • N&N2005: The pseudo-projective parser of Nivre and Nilsson (2005).
  • McD2005: The projective parser of McDonald et al. (2005), which uses the Eisner algorithm for both training and testing. This system uses 5-best MIRA.

Proposed systems:

  • Single-best MIRA
  • Factored MIRA

Results

[Figure: accuracy and complete-match results for all systems on Czech-A, Czech-B, and English]

The proposed systems perform better on a highly non-projective language (Czech), but less well on a predominantly projective language (English).