== Citation ==

Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 127–133. [http://www.aclweb.org/anthology-new/N/N03/N03-1017.pdf]

== Summary ==

In this [[Category::paper]] the authors propose a new framework that aims at explaining and understanding why phrase-based models in [[AddressesProblem::Machine Translation]] outperform word-based models.

Within this framework (a phrase-based translation model and decoding algorithm), the authors carry out experiments that explore three different methods for learning phrase translations (based on word alignments, on syntactic information, and on "pure" phrase alignments). Additionally, the authors explore the impact of phrase length, lexical weighting, and different language pairs on the overall BLEU score.

The results confirm the previously established hypothesis that phrase translation achieves better performance than word-based methods, adding that three-word phrases are sufficient to outperform the traditional methods. Moreover, the authors conclude that lexical weighting of phrase translations boosts results, while syntactic restrictions, on the other hand, hinder them.
+ | |||
+ | == Evaluation Framework == | ||
+ | |||
The phrase translation model used in the proposed framework is based on the noisy channel model. The best English output sentence <math>e_{best}</math> given a foreign input sentence <math>f</math> is given by:

<math>
e_{best} = \arg \max_e p(e|f) = \arg\max_e p(f|e) p_{LM}(e) \omega^{length(e)}
</math>

where:
*<math>p(f|e)</math> is the translation model (see below);
*<math>p_{LM}(e)</math> is a trigram language model;
*and <math>\omega</math> is a factor that calibrates the output length (<math>\omega > 1</math> biases towards longer output).
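
As a minimal illustration of this decision rule (not the authors' implementation), the following Python sketch scores candidates in log-space; <code>translation_logprob</code>, <code>lm_logprob</code> and the candidate set are hypothetical stand-ins for <math>p(f|e)</math>, <math>p_{LM}(e)</math> and the decoder's search space:

<pre>
import math

OMEGA = 1.1  # word-count calibration factor; omega > 1 biases towards longer output (value illustrative)

def noisy_channel_score(e_words, f_words, translation_logprob, lm_logprob):
    """Log-domain score: log p(f|e) + log p_LM(e) + length(e) * log(omega)."""
    return (translation_logprob(f_words, e_words)
            + lm_logprob(e_words)
            + len(e_words) * math.log(OMEGA))

def best_translation(candidates, f_words, translation_logprob, lm_logprob):
    # arg max_e of the score above over a (given) candidate set
    return max(candidates,
               key=lambda e: noisy_channel_score(e, f_words,
                                                 translation_logprob, lm_logprob))
</pre>

In practice the candidate space is far too large to enumerate; the beam-search decoder sketched further below searches it incrementally.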
+ | |||
+ | |||
+ | The translation model <math>p(f|e)</math> can be decomposed into: | ||
+ | |||
+ | <math>p(\bar{f}^{I}_1| \bar{e}^{I}_1) = \prod^I_{i=1} \phi(\bar{f}_i|\bar{e}_i) d(a_i - b_{i-1})</math> | ||
+ | |||
+ | where: | ||
+ | *<math>\bar{f}^I_1</math> is a sequence of <math>I</math> segmented from the input sentence <math>f</math>; | ||
+ | *<math>\phi(\bar{f}_i|\bar{e}_i)</math> is a probability distribution that models the phrase translation; | ||
+ | * and, <math>d</math> is a relative distortion probability distribution between the start position of the foreign phrase that was translated into the <math>i</math>th English phrase (<math>a_i</math>) and the end position of the foreign phrase translated into the <math>(i-1)</math>th English phrase (<math>b - 1</math>). | ||
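
A rough sketch of this decomposition (an illustration, not the paper's code), assuming a hypothetical phrase table <code>phi</code> and the simple exponential distortion model <math>d(x) = \alpha^{|x-1|}</math> used in the paper:

<pre>
import math

ALPHA = 0.5  # base of the exponential distortion model (illustrative value)

def phrase_model_logprob(phrase_pairs, phi, alpha=ALPHA):
    """Log p(f|e) for one segmentation into phrase pairs.

    phrase_pairs: list of (f_phrase, e_phrase, a_i, b_prev) where a_i is the
    start position of the foreign phrase producing the i-th English phrase and
    b_prev the end position of the foreign phrase producing the (i-1)-th one.
    phi: dict mapping (f_phrase, e_phrase) -> phrase translation probability.
    """
    logp = 0.0
    for f_phrase, e_phrase, a_i, b_prev in phrase_pairs:
        logp += math.log(phi[(f_phrase, e_phrase)])      # phi(f_i | e_i)
        logp += abs(a_i - b_prev - 1) * math.log(alpha)  # d(a_i - b_{i-1}) = alpha^{|a_i - b_{i-1} - 1|}
    return logp
</pre>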
+ | |||
+ | The decoder that was adopted in the framework employs a [[UsesMethod::Beam Search]] algorithm. | ||
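
The paper's decoder supports reordering, hypothesis recombination, and future-cost estimation; as a heavily simplified, monotone illustration only (all names here are hypothetical), a stack-based beam decoder might look like:

<pre>
import math
from collections import namedtuple

Hypothesis = namedtuple("Hypothesis", ["logprob", "output"])

def beam_decode(f_words, phrase_table, lm_logprob, beam_size=10, max_phrase_len=3):
    """Monotone beam search: stacks[i] holds hypotheses covering the first
    i foreign words; each is extended by translating the next foreign phrase."""
    n = len(f_words)
    stacks = [[] for _ in range(n + 1)]
    stacks[0].append(Hypothesis(0.0, ()))
    for i in range(n):
        # prune to the beam_size best hypotheses before expanding
        stacks[i].sort(key=lambda h: -h.logprob)
        for hyp in stacks[i][:beam_size]:
            for j in range(i + 1, min(i + max_phrase_len, n) + 1):
                f_phrase = tuple(f_words[i:j])
                # phrase_table: dict from foreign phrase to [(e_phrase, phi), ...]
                for e_phrase, phi in phrase_table.get(f_phrase, []):
                    score = (hyp.logprob + math.log(phi)
                             + lm_logprob(hyp.output, e_phrase))
                    stacks[j].append(Hypothesis(score, hyp.output + e_phrase))
    finals = stacks[n]
    return max(finals, key=lambda h: h.logprob).output if finals else None
</pre>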
+ | |||
+ | == Methods for Learning Phrase Translation == | ||
+ | |||
+ | In this work the authors compare three methods to build phrase translation probability tables. The first one builds the phrase alignments using word alignment information, i.e., all the phrase pairs that are considered must be consistent with the word alignments. | ||
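
A minimal sketch of this consistency criterion (an illustration, not the authors' code; it omits the paper's treatment of unaligned words at phrase boundaries):

<pre>
def extract_phrase_pairs(f_len, alignment, max_len=3):
    """Enumerate phrase pairs consistent with a word alignment.

    alignment: set of (i, j) pairs linking foreign position i to English
    position j. A pair of spans is consistent iff no word inside either
    span is aligned to a word outside the other span.
    """
    pairs = []
    for i1 in range(f_len):
        for i2 in range(i1, min(i1 + max_len, f_len)):
            # English positions aligned to the foreign span [i1, i2]
            e_points = [j for (i, j) in alignment if i1 <= i <= i2]
            if not e_points:
                continue
            j1, j2 = min(e_points), max(e_points)
            if j2 - j1 + 1 > max_len:
                continue
            # consistency: nothing in [j1, j2] may align outside [i1, i2]
            if all(i1 <= i <= i2 for (i, j) in alignment if j1 <= j <= j2):
                pairs.append(((i1, i2), (j1, j2)))
    return pairs

if __name__ == "__main__":
    # toy example: f = "das Haus", e = "the house", alignment {(0,0), (1,1)}
    print(extract_phrase_pairs(2, {(0, 0), (1, 1)}))
    # -> [((0, 0), (0, 0)), ((0, 1), (0, 1)), ((1, 1), (1, 1))]
</pre>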
+ | |||
+ | The second method explored act as a filter to the previous set of alignments, restricting possible phrases to syntactically correct ones. | ||
+ | |||
+ | Finally, the last method takes the [[RelatedPaper::Marcus and Wong, EMNLP 2002]] approach, learning phrase-level alignments directly from the parallell corpora. | ||
+ | |||
+ | == Experimental Results == | ||
+ | The authors used the [[UsesDataset::EUROPARL]] for the pair German-English. | ||
+ | |||
+ | The first result reported compares the three methods described in the previous section. The next figure plots the BLEU scores against the size of the corpus size for each of the three approaches: based on word alignments (AP), syntactic restrictions (Syn) and "pure" phrase alignments (Joint). The results obtained from the IBM Model 4 are also plotted. | ||
+ | |||
+ | [[File:Koehncoremethods.png|200px]] | ||
+ | |||
+ | |||
+ | The second result concerns the limit of sentence length that should be considered when learning them. The next figure shows the results from comparing several lengths, showing that length 3 is enough, achieving similar BLEU scores than higher values. | ||
+ | |||
+ | [[File:Koehnphraselen.png|200px]] | ||
+ | |||
+ | |||
+ | The last result is presented in the table below. In the first place, the authors prove that lexical weighting always improves the results, i.e., taking in consideration how well, in a phrase translation pair, its words translate to each other. Lastly, the authors showed that their approach achieve better BLEU scores for several language pairs, when compared with the IBM Model 4. | ||
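
For reference, the lexical weight of a phrase pair <math>(\bar{f}, \bar{e})</math> with word alignment <math>a</math> is defined in the paper as follows, where <math>w(f_i|e_j)</math> is a word translation probability estimated from the aligned corpus (unaligned foreign words are taken to align to a NULL token):

<math>
p_w(\bar{f}|\bar{e}, a) = \prod^n_{i=1} \frac{1}{|\{j | (i,j) \in a\}|} \sum_{\forall (i,j) \in a} w(f_i|e_j)
</math>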
+ | |||
+ | [[File:Koehnlangpairs.png|300px]] |