Citation
Chiang, D. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the ACL, pp. 263–270, Ann Arbor. Association for Computational Linguistics.
Online version
Information Sciences Institute, University of Southern California
Summary
This paper introduces a hierarchical phrase-based model for statistical machine translation. It describes a training method that extracts synchronous context-free grammar (synchronous CFG) rules from parallel text, as well as an efficient decoding algorithm for applying the model. Using BLEU as a metric, the model is shown to outperform previous state-of-the-art phrase-based systems.
Rule features
According to the proposed hierarchical model, the weight of a synchronous CFG rule $X \to \langle \gamma, \alpha \rangle$ is determined by a log-linear model:

$w(X \to \langle \gamma, \alpha \rangle) = \prod_i \phi_i(X \to \langle \gamma, \alpha \rangle)^{\lambda_i}$

where the $\phi_i$ are features defined on rules and the $\lambda_i$ are their weights.
The features used in the experiments are analogous to those of conventional phrase-based systems:
- $P(\gamma \mid \alpha)$, from the noisy-channel model
- $P(\alpha \mid \gamma)$, proven to be a helpful feature
- the lexical weights $P_w(\gamma \mid \alpha)$ and $P_w(\alpha \mid \gamma)$, which estimate how well the words in $\alpha$ translate the words in $\gamma$
- a phrase penalty $\exp(1)$, to allow the model to assign preferences for longer or shorter derivations
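As a rough illustration of how such a log-linear rule weight can be computed, here is a minimal Python sketch; the feature names and values are hypothetical, not taken from the paper, and a real system would estimate the $\phi_i$ from data and tune the $\lambda_i$ on a development set:

<pre>
import math

# Hypothetical feature values phi_i for one rule and weights lambda_i.
# Real phis come from relative-frequency estimates on the training
# data; real lambdas are tuned on a development set.
features = {"p_f_given_e": 0.30, "p_e_given_f": 0.20,
            "lex_f_given_e": 0.25, "lex_e_given_f": 0.15,
            "phrase_penalty": math.e}
weights = {"p_f_given_e": 1.0, "p_e_given_f": 1.0,
           "lex_f_given_e": 0.5, "lex_e_given_f": 0.5,
           "phrase_penalty": -0.5}

def rule_weight(features, weights):
    """w(rule) = prod_i phi_i ** lambda_i, computed in log space."""
    return math.exp(sum(lam * math.log(features[name])
                        for name, lam in weights.items()))

print(rule_weight(features, weights))
</pre>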
Training
The training process starts with a word-aligned corpus and produces "initial phrase pairs" using conventional phrase-based methods (from Koehn et al., 2003 and Och and Ney, 2004). It then forms all possible differences of phrase pairs, defining the set of rules to be extracted as the smallest set satisfying the following:
1. If $\langle f, e \rangle$ is an initial phrase pair, then $X \to \langle f, e \rangle$ is a rule.
2. If $X \to \langle \gamma, \alpha \rangle$ is a rule and $\langle f, e \rangle$ is an initial phrase pair such that $\gamma = \gamma_1 f \gamma_2$ and $\alpha = \alpha_1 e \alpha_2$, then $X \to \langle \gamma_1 X_k \gamma_2, \alpha_1 X_k \alpha_2 \rangle$ is a rule, where $X_k$ is a new co-indexed nonterminal.
This procedure generates too many rules, making training and decoding very slow and creating spurious ambiguity. The grammar is therefore filtered according to principles designed to balance grammar size and performance on a development set:

- keep only the smallest initial phrase pairs containing the same set of alignment points;
- limit initial phrases to a length of 10, and rules to 5 (terminals plus nonterminals) on the source-language right-hand side;
- discard rules with more than 2 nonterminals, as well as rules with adjacent nonterminals on the source-language right-hand side;
- keep only rules with at least one pair of aligned words.
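Under some simplifying assumptions (phrase pairs as token tuples, substring matching instead of alignment spans, and nonterminal tokens spelled "X1", "X2", ... that never collide with real words), the following Python sketch illustrates the subtraction step together with the two-nonterminal limit from the filtering above; all names are illustrative, not from the paper:

<pre>
def subtract(rule_src, rule_tgt, sub_src, sub_tgt, k):
    """Replace one occurrence of the initial pair (sub_src, sub_tgt)
    inside the rule with a fresh co-indexed nonterminal X_k."""
    def find(seq, sub):
        for i in range(len(seq) - len(sub) + 1):
            if seq[i:i + len(sub)] == sub:
                return i
        return -1

    i, j = find(rule_src, sub_src), find(rule_tgt, sub_tgt)
    if i < 0 or j < 0:
        return None
    nt = ("X%d" % k,)
    return (rule_src[:i] + nt + rule_src[i + len(sub_src):],
            rule_tgt[:j] + nt + rule_tgt[j + len(sub_tgt):])

def extract_rules(initial_pairs, max_nts=2):
    """Rule 1: every initial phrase pair is itself a rule.  Rule 2:
    repeatedly subtract embedded initial pairs, adding a new co-indexed
    nonterminal each time, up to max_nts nonterminals per rule."""
    pairs = [(tuple(f), tuple(e)) for f, e in initial_pairs]
    rules = set(pairs)
    frontier = set(rules)
    while frontier:
        new = set()
        for src, tgt in frontier:
            k = sum(1 for t in src if t.startswith("X")) + 1
            if k > max_nts:
                continue
            for f, e in pairs:
                if (f, e) == (src, tgt):
                    continue  # do not subtract a rule from itself
                r = subtract(src, tgt, f, e, k)
                if r is not None and r not in rules:
                    new.add(r)
        rules |= new
        frontier = new
    return rules

# Example: the pair <ne va pas, does not go> with the embedded pair
# <va, go> yields the rule X -> <ne X1 pas, does not X1>.
pairs = [("ne va pas".split(), "does not go".split()),
         ("va".split(), "go".split())]
for src, tgt in sorted(extract_rules(pairs)):
    print("X -> <%s, %s>" % (" ".join(src), " ".join(tgt)))
</pre>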
Regarding the rule weights: since the training process can extract many rules from a single initial phrase pair, it distributes weight equally among initial phrase pairs, and then distributes each pair's weight equally among the rules extracted from it.
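A minimal sketch of this weighting heuristic, with hypothetical phrase pairs and rule identifiers (not the paper's actual bookkeeping):

<pre>
from collections import defaultdict

# Hypothetical example: each initial phrase pair maps to the rules
# extracted from it; rule names are illustrative only.
rules_by_pair = {
    ("ne va pas", "does not go"): ["r1", "r2", "r3"],
    ("va", "go"): ["r2"],
}

# Each initial phrase pair receives one unit of weight, which is then
# split equally among the rules extracted from it.
rule_count = defaultdict(float)
for pair, rules in rules_by_pair.items():
    for r in rules:
        rule_count[r] += 1.0 / len(rules)

# Features such as P(gamma | alpha) can then be estimated by
# relative frequency over these fractional counts.
for r, c in sorted(rule_count.items()):
    print(r, round(c, 3))
</pre>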
Decoding
The decoder uses a CKY parser with beam search, together with a postprocessor for mapping source-language derivations to target-language derivations. Given a sentence $f$ in the source language, it finds the best derivation (or the N best derivations) that generates $\langle f, e \rangle$ for some target sentence $e$.
The search space is pruned in several ways: an item that has a score worse than $\beta$ times the best score in the same cell is discarded, and an item that is worse than the $b$-th best item in the same cell is also discarded. The values of $\beta$ and $b$ are chosen to balance speed and performance on a development set.
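A minimal sketch of this per-cell beam pruning, assuming chart items are (score, hypothesis) pairs with higher, probability-like scores being better; the representation is illustrative, not the paper's:

<pre>
def prune_cell(items, beta=0.1, b=40):
    """Keep an item only if its score is at least beta times the best
    score in the cell, and keep only the b best items overall."""
    items = sorted(items, key=lambda x: x[0], reverse=True)
    if not items:
        return items
    best = items[0][0]
    return [(s, h) for s, h in items[:b] if s >= beta * best]

# Usage on a hypothetical chart cell: h3 falls below 0.1 * 0.9 and is
# pruned by the beta threshold; b caps the number of survivors.
cell = [(0.9, "h1"), (0.5, "h2"), (0.04, "h3")]
print(prune_cell(cell, beta=0.1, b=2))
</pre>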
Experimental results
The model was tested on Mandarin-to-English translation, using the FBIS corpus to train the translation model, the 2002 NIST MT evaluation data as the development set, and the 2003 NIST test set as the test set. Three systems were compared: the baseline (Pharaoh, the then state-of-the-art phrase-based system), the hierarchical model, and an "enhanced" hierarchical model that adds a constituent feature. The hierarchical model achieved a relative improvement of about 7.5% in BLEU over the baseline.
The paper also presents a selection of extracted rules, with their ranks after filtering for the development set.
Related papers
A follow-up paper, Chiang 2007, expands on this work with a detailed description of the decoding algorithm ("cube pruning") and further experimental results.
Enhanced versions of this model have been described in several papers, such as Watanabe et al., EMNLP 2007, "Online Large-Margin Training for Statistical Machine Translation", and "A Discriminative Latent Variable Model for SMT", which incorporate new sets of features into the hierarchical model.