Vogel et al, COLING 1996


Citation

Vogel, S., Ney, H., & Tillmann, C. (1996). HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, COLING '96, pp. 836–841, Stroudsburg, PA, USA. Association for Computational Linguistics.

Online version

ACM

Summary

Word Alignments map the word correspondence between two parallel sentences in different languages.

This is a highly influential work on Word Alignments. It extends IBM Models 1 and 2, which model lexical translation probabilities and absolute distortion probabilities, by also modeling relative distortion.

The relative distortion is modeled by applying a first-order HMM, where each alignment probability depends on the alignment of the previous word.

Previous work

IBM Model 1 defines the probability of a source sentence $\mathbf{s} = s_1 \ldots s_I$, with length $I$, being translated to a target sentence $\mathbf{t} = t_1 \ldots t_J$, with length $J$, with the alignment $a = a_1 \ldots a_J$ as:

$p(\mathbf{t}, a \mid \mathbf{s}) = \frac{\epsilon}{(I+1)^{J}} \prod_{j=1}^{J} p_t(t_j \mid s_{a_j})$

Where the alignment $a$ is a function that maps each target word $t_j$ to a source word $s_{a_j}$, by their indexes. These alignments can be viewed as an object for indicating the corresponding words in a parallel text. We can see that the sentence translation probability $p(\mathbf{t}, a \mid \mathbf{s})$ is decomposed into the product of the lexical translation probabilities $p_t(t_j \mid s_{a_j})$ of each word $t_j$ in the target with the word $s_{a_j}$ that it is aligned to in the source. Additionally, target words that are not aligned with any source word are aligned with the null token $s_0$, with a lexical translation probability given by $p_t(t_j \mid s_0)$. These are referred to as null insertions. The normalizing factor $\frac{\epsilon}{(I+1)^{J}}$ ensures that $p(\mathbf{t}, a \mid \mathbf{s})$ is a probability, normalized over all possible alignments $a$ and all possible translations $\mathbf{t}$.
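To make the decomposition concrete, the following is a minimal Python sketch (not from the paper; the table p_t, the NULL token and the 1-based alignment vector are assumptions of this example) that scores a sentence pair under IBM Model 1:

from math import prod

NULL = "<null>"

def model1_score(source, target, alignment, p_t, epsilon=1.0):
    """IBM Model 1: p(t, a | s) = epsilon / (I+1)^J * prod_j p_t(t_j | s_{a_j}).

    alignment[j-1] is the 1-based source position aligned to target word j,
    with 0 denoting the null token.
    """
    s = [NULL] + list(source)                      # s[0] is the null token
    I, J = len(source), len(target)
    lexical = prod(p_t[(t_word, s[a_j])] for t_word, a_j in zip(target, alignment))
    return epsilon / (I + 1) ** J * lexical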

One of the problems of IBM Model 1 is that it is very weak with respect to reordering, since $p(\mathbf{t}, a \mid \mathbf{s})$ is calculated using only the lexical translation probabilities $p_t(t_j \mid s_{a_j})$. Because of this, if the model is presented with two translation candidates that use the same lexical translations, but with a different reordering of the translated words, the model gives both translations the same score.
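As an illustration, reusing the model1_score sketch above with made-up toy probabilities, two candidates that use the same lexical translations in a different order receive exactly the same score:

p_t = {("la", "the"): 0.4, ("maison", "house"): 0.5}   # toy values, not real estimates
source = ["the", "house"]

candidate_a = (["la", "maison"], [1, 2])   # "la maison", aligned in order
candidate_b = (["maison", "la"], [2, 1])   # reordered, same lexical links

score_a = model1_score(source, *candidate_a, p_t)
score_b = model1_score(source, *candidate_b, p_t)
assert score_a == score_b                  # Model 1 cannot distinguish the two orderings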

Mixture-based alignment models (IBM Model 2) address this problem by modeling the absolute distortion in the word positioning between the two languages, introducing an alignment probability distribution $p_a(i \mid j, I, J)$, where $i$ and $j$ are the word positions in the source and target sentences. Thus the equation for $p(\mathbf{t}, a \mid \mathbf{s})$ becomes:

$p(\mathbf{t}, a \mid \mathbf{s}) = \epsilon \prod_{j=1}^{J} p_t(t_j \mid s_{a_j}) \, p_a(a_j \mid j, I, J)$

Where the alignment probability distribution $p_a(i \mid j, I, J)$ models the probability of a word in position $i$ in the source sentence being reordered into position $j$ in the target sentence.
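A corresponding sketch for IBM Model 2 (again an illustration, not the paper's code; the dictionary layout of p_a is an assumption of this example) simply multiplies in the absolute alignment probability at every target position:

NULL = "<null>"

def model2_score(source, target, alignment, p_t, p_a, epsilon=1.0):
    """IBM Model 2: p(t, a | s) = epsilon * prod_j p_t(t_j | s_{a_j}) * p_a(a_j | j, I, J).

    p_a is assumed to be a dict keyed by (i, j, I, J): the probability that
    target position j is aligned to source position i.
    """
    s = [NULL] + list(source)
    I, J = len(source), len(target)
    score = epsilon
    for j, (t_word, i) in enumerate(zip(target, alignment), start=1):
        score *= p_t[(t_word, s[i])] * p_a[(i, j, I, J)]
    return score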

Model

While IBM Model 2 attempts to model the absolute distortion of words in sentence pairs, alignments have a strong tendency to maintain the local neighborhood after translation: words that are adjacent in the source sentence tend to stay close together in the target sentence.

This work uses a first-order Hidden Markov Model to restructure the alignment model so that it includes first-order alignment dependencies. Thus:

$p(\mathbf{t}, a \mid \mathbf{s}) = \prod_{j=1}^{J} p_a(a_j \mid a_{j-1}, I) \, p_t(t_j \mid s_{a_j})$

Where the alignment probability is calculated from counts $c(\cdot)$ of jump widths as:

$p_a(i \mid i', I) = \frac{c(i - i')}{\sum_{l=1}^{I} c(l - i')}$

In this formulation, the distortion probability does not depend on the absolute word positions, but only on the jump width $(i - i')$.
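A minimal sketch of the HMM alignment model, assuming the jump-width counts c(d) are given as a dictionary and that the first target word uses a uniform start distribution (the handling of the first position is an assumption of this example, not a detail taken from the paper):

def jump_transitions(jump_counts, I):
    """Build p_a(i | i', I) = c(i - i') / sum_l c(l - i') from jump-width counts.
    Smoothing of unseen jump widths is omitted in this sketch."""
    def p_a(i, i_prev):
        denom = sum(jump_counts.get(l - i_prev, 0.0) for l in range(1, I + 1))
        return jump_counts.get(i - i_prev, 0.0) / denom
    return p_a

def hmm_score(source, target, alignment, p_t, jump_counts):
    """HMM model: p(t, a | s) = prod_j p_a(a_j | a_{j-1}, I) * p_t(t_j | s_{a_j})."""
    I = len(source)
    p_a = jump_transitions(jump_counts, I)
    score, prev = 1.0, None
    for t_word, i in zip(target, alignment):
        transition = (1.0 / I) if prev is None else p_a(i, prev)   # uniform start (assumption)
        score *= transition * p_t[(t_word, source[i - 1])]
        prev = i
    return score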

Viterbi Alignment

The alignment probability for a given sentence pair is given by:

$p(a \mid \mathbf{s}, \mathbf{t}) = \frac{p(\mathbf{t}, a \mid \mathbf{s})}{\sum_{a'} p(\mathbf{t}, a' \mid \mathbf{s})}$

The Viterbi alignment is the alignment $\hat{a}$ with the highest $p(a \mid \mathbf{s}, \mathbf{t})$. While in previous alignment models the Viterbi alignment could be determined in polynomial time by maximizing the alignment probability for each target word independently, due to the independence assumptions that are made, finding the optimum alignment for the HMM-based model is more complex, because of the first-order dependencies between alignments. It can still be calculated in polynomial time, with complexity $O(I^2 \cdot J)$, using a dynamic programming algorithm similar to Viterbi decoding, proposed in this work. This algorithm uses the partial alignment probability $Q(i, j)$, which is defined as:

$Q(i, j) = p_t(t_j \mid s_i) \cdot \max_{i'} \left[ p_a(i \mid i', I) \, Q(i', j-1) \right]$

$Q(i, j)$ can be seen as the probability of the best (Viterbi) alignment of the partial target sentence from $t_1$ to $t_j$ that aligns the word $t_j$ to $s_i$. This recursion is possible because each word alignment depends only on the previous alignment.
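The recursion above translates directly into a dynamic-programming search. The sketch below (an illustration under the same assumptions as the previous snippets: p_t as a dictionary, p_a(i, i_prev) as a callable, uniform start distribution, no null alignments) recovers the Viterbi alignment in O(I^2 * J) time:

def viterbi_alignment(source, target, p_t, p_a):
    """Compute the most probable alignment a_1..a_J under the HMM model.

    Q[i][j] is the probability of the best partial alignment of t_1..t_j
    that aligns t_j to s_i; back[i][j] stores the previous source position.
    """
    I, J = len(source), len(target)
    Q = [[0.0] * (J + 1) for _ in range(I + 1)]     # 1-based positions
    back = [[0] * (J + 1) for _ in range(I + 1)]

    for i in range(1, I + 1):                        # initialisation (j = 1)
        Q[i][1] = (1.0 / I) * p_t[(target[0], source[i - 1])]

    for j in range(2, J + 1):                        # recursion over target positions
        for i in range(1, I + 1):
            best_prev, best = max(
                ((i_prev, Q[i_prev][j - 1] * p_a(i, i_prev)) for i_prev in range(1, I + 1)),
                key=lambda pair: pair[1],
            )
            Q[i][j] = best * p_t[(target[j - 1], source[i - 1])]
            back[i][j] = best_prev

    a = [0] * (J + 1)                                # trace back the best path
    a[J] = max(range(1, I + 1), key=lambda i: Q[i][J])
    for j in range(J, 1, -1):
        a[j - 1] = back[a[j]][j]
    return a[1:]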

Corpora

Tests were performed using the following corpora:

Corpus                 Language Pair      Words                                 Vocabulary
Avalanche Bulletins    French-German      French: 62849; German: 44805          French: 1993; German: 2265
Verbmobil Corpus       Spanish-English    Spanish: 13768; English: 15888        Spanish: 2008; English: 1830
EuTrans Corpus         German-English     German: 150279; English: 154727       German: 4017; English: 2443

Training

This work compares the HMM-based alignment model with IBM Model 2. The training setup for both models starts with 10 EM iterations of IBM Model 1, to obtain an initial distribution for the lexical translation probabilities $p_t(t_j \mid s_i)$. This was used to initialize both IBM Model 2 and the HMM-based model. Next, 5 EM iterations were run for IBM Model 2 and the HMM-based model.
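For completeness, here is a compact sketch of the IBM Model 1 EM iterations used for initialisation (a textbook-style implementation under the same assumptions as the earlier snippets; it is not the authors' code and ignores efficiency concerns):

from collections import defaultdict

NULL = "<null>"

def model1_em(corpus, iterations=10):
    """Estimate lexical translation probabilities p_t(t | s) with EM.

    corpus is a list of (source_words, target_words) sentence pairs."""
    p_t = defaultdict(lambda: 1.0)                  # start from a uniform table
    for _ in range(iterations):
        counts = defaultdict(float)
        totals = defaultdict(float)
        for source, target in corpus:
            s = [NULL] + list(source)
            for t_word in target:
                norm = sum(p_t[(t_word, s_word)] for s_word in s)
                for s_word in s:                    # E-step: expected alignment counts
                    c = p_t[(t_word, s_word)] / norm
                    counts[(t_word, s_word)] += c
                    totals[s_word] += c
        for (t_word, s_word), c in counts.items():  # M-step: re-normalise per source word
            p_t[(t_word, s_word)] = c / totals[s_word]
    return p_t

The resulting p_t table can then serve as the starting point for the Model 2 and HMM training sketched earlier.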

Results

The quality of the alignments produced by each model is measured in terms of the translation, alignment and total perplexity:

Avalanche Bulletins   Translation   Alignment   Total
IBM Model 2           3.18          10.05       32.00
HMM Model             3.45          5.84        20.18

EuTrans Corpus        Translation   Alignment   Total
IBM Model 2           2.44          4.00        9.78
HMM Model             2.46          3.93        9.69

Verbmobil Corpus      Translation   Alignment   Total
IBM Model 2           4.70          6.54        30.71
HMM Model             4.86          5.42        26.50

From these results, it is concluded that IBM Model 2 gives slightly better results for the perplexity of the translation probabilities, while the HMM-based model gives better perplexity values for the alignment probabilities. This is explained by the fact that in some cases the relative distortion used in the HMM-based model gives more accurate results than the absolute distortion used in IBM Model 2, and vice versa.
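A note on the metric: perplexity can be thought of as the exponential of the negative average log-probability per target word; the exact normalisation used in the paper is not restated on this page, so the snippet below is only an assumed illustration of how such values could be computed from per-sentence log-probabilities:

from math import exp

def perplexity(sentence_log_probs, num_target_words):
    """Perplexity = exp(- (sum of sentence log-probabilities) / number of target words)."""
    return exp(-sum(sentence_log_probs) / num_target_words)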