Vogel et al, COLING 1996
Citation
Vogel, S., Ney, H., & Tillmann, C. (1996). HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, COLING '96, pp. 836-841, Stroudsburg, PA, USA. Association for Computational Linguistics.
Online version
Summary
Word alignments map the correspondence between the words of two parallel sentences in different languages.
This work extends IBM Models 1 and 2, which model lexical translation probabilities and absolute distortion probabilities, by also modeling relative distortion.
This is done by applying a first-order Hidden Markov Model (HMM), in which each alignment probability depends on the jump width from the previous alignment position rather than on the absolute position alone.
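
Concretely, the HMM factors the probability of a target sentence f and an alignment a given a source sentence e as P(f, a | e) = prod_j p(a_j | a_{j-1}, I) * p(f_j | e_{a_j}), where the transition probability depends only on the jump width a_j - a_{j-1}. The following is a minimal Python sketch of scoring one alignment under this factorization; the toy lexicon probabilities, jump-width counts, smoothing constant, and handling of the first position are illustrative assumptions, not the paper's parameters (which are estimated with EM).

```python
# Minimal sketch of the first-order HMM alignment model (illustrative only).
# All parameters below are toy values; the paper estimates them with EM.

def transition_prob(i, i_prev, I, jump_counts):
    """p(a_j = i | a_{j-1} = i_prev, I): a jump-width count, normalized over
    all I target positions, so the probability depends only on i - i_prev."""
    smooth = 1e-6  # assumed floor for unseen jump widths
    numer = jump_counts.get(i - i_prev, smooth)
    denom = sum(jump_counts.get(k - i_prev, smooth) for k in range(I))
    return numer / denom

def alignment_prob(f_sent, e_sent, alignment, t, jump_counts):
    """P(f, a | e) = prod_j p(a_j | a_{j-1}, I) * p(f_j | e_{a_j})."""
    I = len(e_sent)
    prob = 1.0
    prev = 0  # assumed start position; the paper treats the initial alignment separately
    for f_word, i in zip(f_sent, alignment):
        prob *= transition_prob(i, prev, I, jump_counts)
        prob *= t.get((f_word, e_sent[i]), 1e-6)  # lexical translation probability
        prev = i
    return prob

# Toy example: align German "das Haus" to English "the house".
e = ["the", "house"]
f = ["das", "Haus"]
t = {("das", "the"): 0.7, ("Haus", "house"): 0.8}  # toy lexicon probabilities
jumps = {-1: 1.0, 0: 2.0, 1: 5.0}                  # toy jump-width counts
print(alignment_prob(f, e, [0, 1], t, jumps))      # score of the monotone alignment
```

In this sketch the monotone alignment [0, 1] scores higher than the crossed alignment [1, 0] because the toy jump counts favor a forward jump of width 1, which is exactly the relative-distortion preference the HMM adds over IBM Models 1 and 2.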