Forward-Backward
From Cohen Courses
Summary
This is a dynamic programming algorithm for Word Alignments. The work extends IBM Model 1 and IBM Model 2, which model lexical translation probabilities and absolute distortion probabilities respectively, by also modeling relative distortion.
The relative distortion is modeled with a first-order Hidden Markov Model, in which each alignment probability depends on the position of the previous alignment rather than on an absolute source position.
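The following is a minimal sketch, not taken from the original page, of how such a transition model can be parameterized: the probability of moving to a source position depends only on the jump width from the previously aligned position. The function and parameter names (transition_matrix, jump_prob) and the toy weights are illustrative assumptions.

```python
import numpy as np

def transition_matrix(src_len, jump_prob):
    """Build an src_len x src_len transition matrix from jump-width weights.

    jump_prob[d] is an (unnormalized) weight for a jump of d = i - i_prev
    source positions between the alignments of consecutive target words.
    """
    T = np.zeros((src_len, src_len))
    for i_prev in range(src_len):
        for i in range(src_len):
            # Relative distortion: the weight depends only on the jump width.
            T[i_prev, i] = jump_prob.get(i - i_prev, 1e-6)
        T[i_prev] /= T[i_prev].sum()  # normalize over next source positions
    return T

# Toy example: a jump of +1 (monotone alignment) is most likely.
jump_weights = {-1: 0.1, 0: 0.2, 1: 0.5, 2: 0.2}
T = transition_matrix(5, jump_weights)
```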
Results indicate that modeling the relative distortion can improve the overall quality of the Word Alignments.
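Given such transitions and lexical translation probabilities as emissions, the posterior probability that each target word aligns to each source position can be computed with the Forward-Backward recursions over the HMM lattice. The sketch below is a generic implementation under these assumptions, not the authors' code; the variable names (emit, trans, init, gamma) are illustrative, and gamma corresponds to the expected counts that would be accumulated during EM training of the alignment model.

```python
import numpy as np

def forward_backward(emit, trans, init):
    """emit:  (J, I) matrix, emit[j, i] = p(f_j | e_i)  (lexical translation)
       trans: (I, I) matrix, trans[i_prev, i] = p(a_j = i | a_{j-1} = i_prev)
       init:  (I,)  vector,  init[i] = p(a_1 = i)
       Returns gamma (J, I): posterior probability that target word j aligns to
       source position i."""
    J, I = emit.shape
    alpha = np.zeros((J, I))
    beta = np.zeros((J, I))

    # Forward pass: alpha[j, i] = p(f_1..f_j, a_j = i)
    alpha[0] = init * emit[0]
    for j in range(1, J):
        alpha[j] = (alpha[j - 1] @ trans) * emit[j]

    # Backward pass: beta[j, i] = p(f_{j+1}..f_J | a_j = i)
    beta[-1] = 1.0
    for j in range(J - 2, -1, -1):
        beta[j] = trans @ (emit[j + 1] * beta[j + 1])

    # Posterior alignment probabilities, normalized per target position.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma
```

Because the hidden states are source positions and the observations are target words, each pass is quadratic in the source sentence length and linear in the target sentence length, which is what makes EM training of this model tractable.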