== Citation ==

Chiang, D. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the ACL, pp. 263–270, Ann Arbor. Association for Computational Linguistics.

== Online version ==

Information Sciences Institute, University of Southern California
 
== Summary ==
 
This [[Category::paper]] introduces a [[UsesMethod::Hierarchical phrase-based translation|hierarchical phrase-based]] model for statistical [[AddressesProblem::Machine Translation|machine translation]], in which phrases may contain subphrases. The model is ''formally'' syntax-based, because it is built on [[UsesMethod::Synchronous Context-Free Grammars]] (synchronous CFG), but not ''linguistically'' syntax-based, because the grammar is learned from a parallel text without any linguistic annotations or assumptions. The paper describes a training method for extracting the synchronous CFG rules from parallel text and an efficient decoding algorithm for applying them. Using BLEU as a metric, the model is shown to outperform a previous state-of-the-art phrase-based system.
  
=== Motivation ===
  
The hierarchical model is motivated by the inability of conventional phrase-based models to learn reorderings of phrases (they learn only local reorderings of words). For example, consider the following Mandarin sentence:

 Aozhou    shi yu   Bei   Han   you  bangjiao             de   shaoshu guojia    zhiyi
 Australia is  with North Korea have diplomatic relations that few     countries one of

(Australia is one of the few countries that have diplomatic relations with North Korea)

The typical output of a conventional phrase-based system would be:

 Australia is diplomatic relations with North Korea is one of the few countries

because it can perform the local reorderings of "diplomatic ... Korea" and "one ... countries" but fails to perform the inversion of the two groups.
 
  
To solve this problem, the paper proposes hierarchical phrase pairs, which consist of both words and subphrases. These pairs are formally defined as productions of a synchronous CFG. In the previous example, the following productions are sufficient to translate the sentence correctly:

[[File:f1.png]]
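
For readers who cannot see the figure, the hierarchical productions it lists are roughly of the following form (a reconstruction from the paper, so the exact notation and indices may differ), used together with ordinary phrase pairs for the remaining words:

* <math> X \rightarrow \left \langle \, \mbox{yu} \, X_{1} \, \mbox{you} \, X_{2} \, , \, \mbox{have} \, X_{2} \, \mbox{with} \, X_{1} \, \right \rangle </math>
* <math> X \rightarrow \left \langle \, X_{1} \, \mbox{de} \, X_{2} \, , \, \mbox{the} \, X_{2} \, \mbox{that} \, X_{1} \, \right \rangle </math>
* <math> X \rightarrow \left \langle \, X_{1} \, \mbox{zhiyi} \, , \, \mbox{one of} \, X_{1} \, \right \rangle </math>
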
=== The synchronous CFG model ===

Based on the definition of synchronous CFGs, the basic elements of the model are weighted rewrite rules with aligned pairs of right-hand sides, of the form <math> X \rightarrow \left \langle \gamma , \alpha \right \rangle </math>, where <math>X</math> is a non-terminal and <math>\gamma</math> and <math>\alpha</math> are strings of terminals and non-terminals (one in the source language and the other in the target language). The weight of each rule is determined by a log-linear model:

[[File:f2.png]]
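
Since the formula is only available as an image, here is the standard log-linear form that such a rule weight takes (a reconstruction, so the paper's exact notation may differ):

<math> w\left( X \rightarrow \left \langle \gamma , \alpha \right \rangle \right) = \prod_{i} \phi_{i}\left( X \rightarrow \left \langle \gamma , \alpha \right \rangle \right)^{\lambda_{i}} </math>

where the <math>\, \lambda_{i} \,</math> are feature weights tuned on held-out data.
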
The <math>\, \phi_{i} \,</math> features used in this paper were taken from analogous features of conventional phrase-based systems:

* <math>\, P(\gamma | \alpha) \,</math>, from the noisy-channel model
* <math>\, P(\alpha | \gamma) \,</math>, which has proven to be a helpful feature
* lexical weights, which estimate how well the words in <math>\, \alpha \,</math> translate the words in <math>\, \gamma \,</math>
* a phrase penalty, which allows the model to assign preferences for longer or shorter derivations
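
As a concrete illustration (not code from the paper), assuming the rule weight is the usual product of features raised to their <math>\, \lambda_{i} \,</math> exponents, it can be computed in log space as follows; all feature names, values, and weights below are made up:

<pre>
import math

def rule_log_weight(features, lambdas):
    # log of prod_i phi_i ** lambda_i for one synchronous CFG rule;
    # `features` and `lambdas` map (hypothetical) feature names to values
    return sum(lambdas[name] * math.log(value) for name, value in features.items())

# made-up feature values for one rule X -> <gamma, alpha>
features = {"p_gamma_given_alpha": 0.25,
            "p_alpha_given_gamma": 0.40,
            "lexical_weight": 0.10,
            "phrase_penalty": math.exp(1)}   # constant penalty feature, as in phrase-based systems
lambdas  = {"p_gamma_given_alpha": 1.0,
            "p_alpha_given_gamma": 1.0,
            "lexical_weight": 0.5,
            "phrase_penalty": -0.5}
print(rule_log_weight(features, lambdas))
</pre>
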
Additionally, the model uses two special "glue" rules, which enable the model to build only partial translations with hierarchical phrases and then combine them serially:

[[File:f3.png]]
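
Reconstructed from the paper (the figure is not reproduced here), the two glue rules are of the form

<math> S \rightarrow \left \langle \, S_{1} \, X_{2} \, , \, S_{1} \, X_{2} \, \right \rangle </math>

<math> S \rightarrow \left \langle \, X_{1} \, , \, X_{1} \, \right \rangle </math>

so a sentence is covered by a left-branching chain of <math>S</math> nodes whose children are hierarchical phrases.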
 
The weight of the first glue rule is <math>\exp(-\lambda_{g})</math>, which controls the model's preference for hierarchical phrases over the serial combination of phrases, and the weight of the second is always <math>1</math>. The following partial derivation of a synchronous CFG shows how the glue rules and the standard rules are combined:

[[File:f4.png]]
=== Training ===
The training process starts with a word-aligned corpus and produces "initial phrase pairs" using conventional phrase-based methods (from [[RelatedPaper::Koehn et. al. 2003]] and [[RelatedPaper::Och and Ney 2004]]). Then, it extracts a set of rules from these phrase pairs, formally defined as the smallest set satisfying the following:
  
1. If <math> \left \langle f , e \right \rangle </math> is an initial phrase pair, then <math> X \rightarrow \left \langle f , e \right \rangle </math> is a rule.
  
2. If <math>r = X \rightarrow \left \langle \gamma , \alpha \right \rangle </math> is a rule and <math> \left \langle f , e \right \rangle </math> is an initial phrase pair such that <math>\, \gamma = \gamma_{1} \, f \, \gamma_{2} \,</math> and <math>\, \alpha = \alpha_{1} \, e \, \alpha_{2} \,</math>, then <math> X \rightarrow \left \langle \, \gamma_{1} \, X_{k} \, \gamma_{2} \, , \, \alpha_{1} \, X_{k} \, \alpha_{2} \, \right \rangle </math> is a rule.
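
The following sketch (Python, purely illustrative; it matches sub-phrases by string identity and skips the alignment-consistency bookkeeping a real extractor needs) mirrors the two schemas above:

<pre>
from typing import List, Tuple

Rule = Tuple[Tuple[str, ...], Tuple[str, ...]]    # (source side, target side)

def subtract(seq, sub, k):
    # all ways of replacing one contiguous occurrence of `sub` in `seq`
    # with the co-indexed nonterminal X_k
    out = []
    n, m = len(seq), len(sub)
    for i in range(n - m + 1):
        if seq[i:i + m] == sub:
            out.append(seq[:i] + ("X_%d" % k,) + seq[i + m:])
    return out

def extract_rules(initial_pairs: List[Rule], max_nonterminals: int = 2) -> List[Rule]:
    rules = list(initial_pairs)                   # schema 1: every initial pair is a rule
    agenda = [(rule, 1) for rule in rules]
    while agenda:
        (gamma, alpha), k = agenda.pop()
        if k > max_nonterminals:
            continue
        for f, e in initial_pairs:                # schema 2: carve a smaller pair out of both sides
            for gamma2 in subtract(gamma, f, k):
                for alpha2 in subtract(alpha, e, k):
                    rule = (gamma2, alpha2)
                    if rule not in rules:
                        rules.append(rule)
                        agenda.append((rule, k + 1))
    return rules
</pre>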
  
This procedure generates too many rules, making training and decoding very slow and creating spurious ambiguity. Therefore, the grammar is filtered according to a set of principles designed to balance grammar size and performance on a development set:
  
* keep only the smallest initial phrase pairs containing the same set of alignment points
* limit initial phrases to a length of 10
* limit rules to a length of 5 (terminals plus non-terminals) on the source-language right-hand side
* discard rules with more than 2 non-terminals
* discard rules with adjacent non-terminals on the source-language right-hand side
* keep only rules with at least one pair of aligned words
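
Read as code, the rule-level constraints above amount to a predicate like the following sketch (illustrative only; the rule representation and names are mine, not the paper's):

<pre>
def is_nonterminal(symbol):
    # hypothetical convention: nonterminals are written X_1, X_2, ...
    return symbol.startswith("X_")

def keep_rule(source_rhs, target_rhs, aligned_word_pairs):
    # source_rhs / target_rhs: lists of terminal and nonterminal symbols;
    # aligned_word_pairs: word alignments internal to the rule
    nonterminals = [s for s in source_rhs if is_nonterminal(s)]
    if len(source_rhs) > 5:                              # source-side length limit
        return False
    if len(nonterminals) > 2:                            # at most two nonterminals
        return False
    if any(is_nonterminal(a) and is_nonterminal(b)       # no adjacent nonterminals
           for a, b in zip(source_rhs, source_rhs[1:])): # on the source side
        return False
    if not aligned_word_pairs:                           # at least one aligned word pair
        return False
    return True
</pre>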
  
Regarding the rule weights: since the training process extracts many rules from a single initial phrase pair, each initial phrase pair is given an equal weight, and that weight is then distributed equally among the rules extracted from it. The rule probabilities are estimated as relative frequencies over these fractional counts.
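
A small sketch of how such fractional counts can be turned into, say, the <math>\, P(\alpha | \gamma) \,</math> estimate (my reading of the heuristic, not code from the paper):

<pre>
from collections import defaultdict

def estimate_p_alpha_given_gamma(rules_per_pair):
    # rules_per_pair: one list of (gamma, alpha) rules per initial phrase pair,
    # where gamma and alpha are tuples of symbols (hashable)
    count = defaultdict(float)         # fractional count of each (gamma, alpha)
    gamma_count = defaultdict(float)   # fractional count of each gamma
    for rules in rules_per_pair:
        share = 1.0 / len(rules)       # the pair's unit weight, split evenly over its rules
        for gamma, alpha in rules:
            count[(gamma, alpha)] += share
            gamma_count[gamma] += share
    return {rule: c / gamma_count[rule[0]] for rule, c in count.items()}
</pre>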
  
=== Decoding ===
  
The decoder uses a [[UsesMethod::CYK Parsing|CKY parser]] with beam search, together with a postprocessor that maps source-language derivations to target-language derivations. Given a source-language sentence <math>\, f \,</math>, it finds the best derivation (or the N best derivations) that generates <math> \left \langle f , e \right \rangle </math> for some target sentence <math>\, e \,</math>.
  
The search space is pruned in several ways: an item whose score is worse than <math>\beta</math> times the best score in the same cell is discarded, and an item that is worse than the <math>b</math>-th best item in the same cell is also discarded. The values of <math>\beta</math> and <math>b</math> are chosen to balance speed and performance on a development set.
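
A minimal sketch of these two pruning tests applied to one chart cell (illustrative only; the scores are assumed to be probabilities, i.e. non-negative with higher being better, and the values of <math>\beta</math> and <math>b</math> below are made up rather than taken from the paper):

<pre>
def prune_cell(items, beta=1e-4, b=40):
    # items: list of (score, item); keep items within a factor beta of the best
    # score in the cell, and at most the b best of those
    if not items:
        return []
    items = sorted(items, key=lambda x: x[0], reverse=True)
    best_score = items[0][0]
    kept = [(score, item) for score, item in items if score >= beta * best_score]
    return kept[:b]
</pre>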
  
 
== Experimental results ==
 
The model was tested on Mandarin-to-English translation, using the [[UsesDataset::FBIS corpus]] to train the translation model, the 2002 [[UsesDataset::NIST MT]] evaluation data as the development set, and the 2003 NIST test set as the test set. The first step of training produced a grammar of 24M rules, which was filtered down to 2.2M rules using the development set. The following figure shows a selection of extracted rules and their ranks:

[[File:f7.png]]
Three systems were compared: the baseline (Pharaoh, the state-of-the-art phrase-based system at the time), the hierarchical model, and an "enhanced" hierarchical model with an additional constituent feature. The following table shows the results:

[[File:f6.png]]
The paper concludes that hierarchical phrase pairs significantly improve translation accuracy compared to conventional phrase-based systems. They also make it easier to incorporate syntactic information; however, the experiment with the added constituent feature did not yield a statistically significant gain.
  
 
== Related papers ==
 
A follow-up paper, [[RelatedPaper::Chiang 2007]], expands on this one with a detailed description of the decoding algorithm ("cube pruning") and additional experimental results.

Enhanced versions of this model are described in several later papers, such as [[RelatedPaper::Watanabe et al., EMNLP 2007. Online Large-Margin Training for Statistical Machine Translation]] and [[RelatedPaper::A Discriminative Latent Variable Model for SMT]], which incorporate new sets of features into the hierarchical model.
