Vogel et al, COLING 1996
Citation
Vogel, S., Ney, H., & Tillmann, C. (1996). HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, COLING '96, pp. 836–841, Stroudsburg, PA, USA. Association for Computational Linguistics.
Online version

pdf: http://dl.acm.org/ft_gateway.cfm?id=993313&type=pdf&CFID=44190696&CFTOKEN=85108373
Summary
This is a highly influential paper on Word Alignments. It extends IBM Model 1 and IBM Model 2, which model the lexical translation probabilities and the absolute distortion probabilities respectively, by also modeling relative distortion.
The relative distortion is modeled by applying a first-order Hidden Markov Model, where each alignment probability depends on the position of the previous alignment rather than on absolute word positions.
Results indicate that modeling the relative distortion can improve the overall quality of the Word Alignments.
Model
IBM Model 2 attempts to model the absolute distortion of words in sentence pairs. However, alignments have a strong tendency to maintain the local neighborhood after translation, which is captured more naturally by relative rather than absolute positions.

This model uses a first-order Hidden Markov Model to restructure the alignment model <math>Pr(t,a \mid s)</math> used in IBM Model 2 to include first-order alignment dependencies. It defines the probability of a source sentence <math>s_1^I</math>, with length <math>I</math>, being translated to a target sentence <math>t_1^J</math>, with length <math>J</math>, with the alignment <math>a_1^J</math> (where <math>a_j = i</math> means that <math>t_j</math> is aligned to <math>s_i</math>) as:

<math>Pr(t_1^J, a_1^J \mid s_1^I) = \prod_{j=1}^{J} p(a_j \mid a_{j-1}, I) \cdot tr(t_j \mid s_{a_j})</math>
Where the alignment probability is calculated as:

<math>p(i \mid i', I) = \frac{c(i - i')}{\sum_{i''=1}^{I} c(i'' - i')}</math>

In this formulation, the distortion probability does not depend on the absolute word positions but only on the jump width <math>(i - i')</math>, where <math>c(\cdot)</math> is a table of non-negative jump-width parameters estimated during training.
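To make this concrete, here is a minimal sketch of scoring one fixed alignment under this model. This is not the authors' code: the function names, the toy probability tables `tr`, `jump_count`, and `initial`, and the treatment of the first alignment (which has no predecessor) are all assumptions, and all needed counts are assumed positive.

```python
import math

def transition_prob(i, i_prev, I, jump_count):
    """p(i | i', I): the jump-width count c(i - i') normalized over all
    landing positions 1..I, exactly as in the formula above."""
    total = sum(jump_count[k - i_prev] for k in range(1, I + 1))
    return jump_count[i - i_prev] / total

def alignment_log_prob(src, tgt, alignment, tr, jump_count, initial):
    """log Pr(t, a | s) = sum_j [ log p(a_j | a_{j-1}, I) + log tr(t_j | s_{a_j}) ].
    alignment[j] is the 1-based source position aligned to target word j;
    `initial` is an assumed start distribution for the first alignment."""
    I = len(src)
    logp = 0.0
    prev = None
    for j, t_word in enumerate(tgt):
        i = alignment[j]
        logp += math.log(initial[i] if prev is None
                         else transition_prob(i, prev, I, jump_count))
        logp += math.log(tr[(t_word, src[i - 1])])  # lexical translation term
        prev = i
    return logp
```

The normalization inside transition_prob mirrors the jump-width formula above: only the relative distance between consecutive alignment positions matters.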
Viterbi Alignment
The alignment probability for a given sentence pair is given by:

<math>Pr(a_1^J \mid t_1^J, s_1^I) = \frac{Pr(t_1^J, a_1^J \mid s_1^I)}{\sum_{a_1^J} Pr(t_1^J, a_1^J \mid s_1^I)}</math>

The Viterbi alignment is the alignment with the highest <math>Pr(a \mid t, s)</math> and is widely used in statistical Machine Translation systems. In previous alignment models, the Viterbi alignment could be determined in polynomial time by maximizing the alignment probability for each target word independently, thanks to the independence assumptions those models make. Finding the optimal alignment for the HMM-based model is more complex, due to the first-order dependencies between alignments. It can still be calculated in polynomial time, with complexity <math>O(I^2 J)</math>, using a dynamic programming algorithm, similar to Viterbi Decoding, proposed in this work. This algorithm defines the partial alignment probability <math>Q(i,j)</math> as:

<math>Q(i,j) = tr(t_j \mid s_i) \cdot \max_{i'} \left[ p(i \mid i', I) \cdot Q(i', j-1) \right]</math>

<math>Q(i,j)</math> can be seen as the probability of the best alignment of the partial target sentence <math>t_1, \ldots, t_j</math> in which <math>t_j</math> is aligned to <math>s_i</math>. This recursion is valid because each word alignment depends only on the immediately preceding alignment.
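A compact sketch of this dynamic program follows. It is hypothetical code, reusing `transition_prob` and the same assumed tables from the sketch above, and it assumes all probabilities involved are nonzero.

```python
import math

def viterbi_alignment(src, tgt, tr, jump_count, initial):
    """Q(i, j) recursion from above, in O(I^2 * J) time; returns the
    Viterbi alignment as 1-based source positions, one per target word."""
    I, J = len(src), len(tgt)
    Q = [[float("-inf")] * J for _ in range(I)]   # Q[i][j], in log space
    back = [[0] * J for _ in range(I)]            # argmax predecessors

    for i in range(I):                            # j = 0: initial alignment
        Q[i][0] = math.log(initial[i + 1]) + math.log(tr[(tgt[0], src[i])])

    for j in range(1, J):
        for i in range(I):
            # maximize over the previous alignment position i'
            best, arg = max(
                (Q[ip][j - 1]
                 + math.log(transition_prob(i + 1, ip + 1, I, jump_count)), ip)
                for ip in range(I))
            Q[i][j] = best + math.log(tr[(tgt[j], src[i])])
            back[i][j] = arg

    # trace back from the best final position to recover the alignment
    i = max(range(I), key=lambda k: Q[k][J - 1])
    alignment = [0] * J
    for j in range(J - 1, -1, -1):
        alignment[j] = i + 1
        i = back[i][j]
    return alignment
```

The two nested loops over source positions give the <math>I^2</math> factor, and the loop over target words gives the <math>J</math> factor, matching the stated <math>O(I^2 J)</math> complexity.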
Corpora
Tests were performed using the following corpora:
| Corpus | Language Pair | Words | Vocabulary | Description |
|---|---|---|---|---|
| Avalanche Bulletins | French-German | French: 62849, German: 44805 | French: 1993, German: 2265 | Avalanche bulletins published by the Swiss Federal Institute for Snow and Avalanche Research |
| Verbmobil | German-English | German: 150279, English: 154727 | German: 4017, English: 2443 | Spontaneous spoken dialog in the domain of appointment scheduling |
| EuTrans | Spanish-English | Spanish: 13768, English: 15888 | Spanish: 2008, English: 1830 | Typical phrases in the tourism and travel domain |
Training
This work compares the HMM-based alignment model with IBM Model 2. Training for both models starts with 10 EM iterations of IBM Model 1, to obtain an initial distribution for the lexical translation probabilities <math>tr(t_j \mid s_{a_j})</math>; this is used to initialize both IBM Model 2 and the HMM-based model. Next, 5 EM iterations are run for each of IBM Model 2 and the HMM-based model.
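For illustration, here is a minimal sketch of the IBM Model 1 EM iteration used for this initialization. It is the standard textbook form rather than the paper's implementation; the names are hypothetical and the NULL/empty word is omitted for brevity.

```python
from collections import defaultdict

def model1_em(corpus, iterations=10):
    """Estimate lexical translation probabilities tr(t | s) with IBM Model 1 EM.
    `corpus` is a list of (source_words, target_words) sentence pairs."""
    src_vocab = {w for s, _ in corpus for w in s}
    tr = defaultdict(lambda: 1.0 / len(src_vocab))  # uniform initialization

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(t, s)
        total = defaultdict(float)   # expected counts c(s)
        for src, tgt in corpus:
            for t in tgt:
                # E-step: each target word distributes its mass over all
                # source words in proportion to the current tr estimates
                norm = sum(tr[(t, s)] for s in src)
                for s in src:
                    p = tr[(t, s)] / norm
                    count[(t, s)] += p
                    total[s] += p
        # M-step: renormalize the expected counts
        tr = defaultdict(float,
                         {(t, s): c / total[s] for (t, s), c in count.items()})
    return tr
```

The returned tr table would then seed both IBM Model 2 and the HMM-based model before their own 5 EM iterations.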
Experimental Results
The quality of the alignments produced by each model is measured in terms of the translation, alignment and total perplexity:
| Avalanche Bulletins | Translation | Alignment | Total |
|---|---|---|---|
| IBM Model 2 | 3.18 | 10.05 | 32.00 |
| HMM Model | 3.45 | 5.84 | 20.18 |

| EuTrans Corpus | Translation | Alignment | Total |
|---|---|---|---|
| IBM Model 2 | 2.44 | 4.00 | 9.78 |
| HMM Model | 2.46 | 3.93 | 9.69 |

| Verbmobil Corpus | Translation | Alignment | Total |
|---|---|---|---|
| IBM Model 2 | 4.70 | 6.54 | 30.71 |
| HMM Model | 4.86 | 5.42 | 26.50 |
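As a note on reading these numbers (the paper's exact normalization is not restated here, so the definition below is an assumption): perplexity is conventionally the exponential of the negative average log-likelihood,

<math>\mathrm{PP} = \exp\left(-\frac{1}{N} \sum_{n} \log Pr_n \right)</math>

and since <math>Pr(t, a \mid s)</math> factors into lexicon and alignment terms, the total perplexity is approximately the product of the translation and alignment perplexities, which the tables bear out: e.g. <math>3.18 \times 10.05 \approx 32.00</math> for IBM Model 2 on the Avalanche Bulletins.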
From these results, it is concluded that IBM Model 2 gives slightly better translation perplexities, while the HMM-based model gives clearly better alignment perplexities and better total perplexities. This is explained by the fact that in some cases the relative distortion used in the HMM-based model is more accurate than the absolute distortion used in IBM Model 2, while in other cases the reverse holds.
Related Papers
This work defines a relative reordering model for Word Alignments. Similar work is done in IBM Model 4, which uses a more robust reordering model. However, the HMM-based model has the advantage that the inference needed to obtain the Viterbi alignment can be performed exactly in polynomial time, while IBM Model 4 can only search a subspace of all possible alignments, using Hill Climbing.