Difference between revisions of "IBM Model 2"

From Cohen Courses
Revision as of 10:22, 27 September 2011

== Model ==

One of the problems of the [[IBM Model 1]] is that it is weak at modeling reordering, since <math>p(t,a|s)</math> is computed using only the lexical translation probabilities <math>tr(t|s)</math>. Because of this, if the model is presented with two translation candidates <math>t_1</math> and <math>t_2</math> that use the same lexical translations but differ in the order of the translated words, it assigns both candidates the same score.

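This weakness can be seen in a small sketch of the Model 1 score. The sentences and the translation table below are invented for illustration (they are not from any trained model): because the score is just a product of lexical probabilities, permuting the target words together with their alignment leaves it unchanged.

```python
from math import prod

def model1_score(src, tgt, align, tr):
    """IBM Model 1 score, up to the constant factor epsilon / (l_s + 1)^l_t:
    a product of lexical translation probabilities tr(t_j | s_a(j))."""
    return prod(tr[(t, src[i])] for t, i in zip(tgt, align))

# Hypothetical lexical translation table tr(t|s), for illustration only.
tr = {("the", "das"): 0.7, ("house", "Haus"): 0.8}

src = ["das", "Haus"]
monotone = model1_score(src, ["the", "house"], [0, 1], tr)
reordered = model1_score(src, ["house", "the"], [1, 0], tr)
assert monotone == reordered  # Model 1 cannot tell the two orderings apart
```
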
Mixture-based alignment models (IBM Model 2) address this problem by modeling the absolute distortion in word positioning between the two languages, introducing an alignment probability distribution <math>a(i|j,l_s,l_t)</math>, where <math>i</math> and <math>j</math> are word positions in the source and target sentences, and <math>l_s</math> and <math>l_t</math> are the respective sentence lengths. Thus, the equation for <math>p(t,a|s)</math> becomes:

<math>p(t,a|s) = \epsilon \prod_{j=1}^{l_t} tr(t_j|s_{a(j)})\, a(a(j)|j,l_s,l_t)</math>

where the alignment probability distribution <math>a(i|j,l_s,l_t)</math> models the probability of the word in position <math>i</math> of the source sentence being reordered into position <math>j</math> of the target sentence.
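A minimal sketch of the Model 2 score makes the effect of the distortion term concrete. All tables and sentences below are invented for illustration, the constant <math>\epsilon</math> is dropped, and positions are indexed from 0 rather than 1: with a distortion distribution that prefers monotone alignments, the two orderings that Model 1 could not distinguish now receive different scores.

```python
from math import prod

def model2_score(src, tgt, align, tr, dist):
    """IBM Model 2 score (dropping the constant epsilon): each target word
    contributes tr(t_j | s_a(j)) times the distortion a(a(j) | j, l_s, l_t)."""
    l_s, l_t = len(src), len(tgt)
    return prod(tr[(t, src[i])] * dist[(i, j, l_s, l_t)]
                for j, (t, i) in enumerate(zip(tgt, align)))

# Hypothetical tables for illustration only.
tr = {("the", "das"): 0.7, ("house", "Haus"): 0.8}
dist = {(0, 0, 2, 2): 0.6, (1, 1, 2, 2): 0.6,   # monotone positions preferred
        (1, 0, 2, 2): 0.4, (0, 1, 2, 2): 0.4}   # crossed positions penalized

src = ["das", "Haus"]
monotone = model2_score(src, ["the", "house"], [0, 1], tr, dist)
reordered = model2_score(src, ["house", "the"], [1, 0], tr, dist)
assert monotone > reordered  # reordering now changes the score
```

Unlike Model 1, where the alignment distribution is implicitly uniform, here the distortion table lets the model prefer translations whose word order tracks the source.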