Vogel et al, COLING 1996

Citation

Vogel, S., Ney, H., & Tillmann, C. (1996). HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, COLING ’96, pp. 836–841, Stroudsburg, PA, USA. Association for Computational Linguistics.

Online version

ACM

Summary

This work extends IBM Models 1 and 2, which model lexical translation probabilities and (in Model 2) absolute distortion probabilities, by also modeling relative distortion.
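
For comparison, a minimal sketch of IBM Model 2's factorization (omitting the sentence-length term) is

P(f_1^J, a_1^J \mid e_1^I) \propto \prod_{j=1}^{J} t(f_j \mid e_{a_j}) \, a(a_j \mid j, J, I),

where the alignment term a(a_j \mid j, J, I) depends only on the absolute positions j and a_j and the sentence lengths, not on where the neighbouring target words were aligned.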

This is done by applying a first-order HMM, in which each alignment probability is conditioned on the position of the previous alignment, so the model captures the relative distortion (jump width) between successive alignments.
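
A sketch of the resulting alignment model, again omitting length terms, is

P(f_1^J \mid e_1^I) = \sum_{a_1^J} \prod_{j=1}^{J} p(f_j \mid e_{a_j}) \, p(a_j \mid a_{j-1}, I),

where the transition probability p(a_j \mid a_{j-1}, I) is modeled as depending only on the jump width a_j - a_{j-1}, so alignments that move roughly monotonically through the source sentence are preferred.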