DeNero et al, EMNLP 2008
== Citation ==
DeNero, J., Bouchard-Côté, A., & Klein, D. (2008). Sampling alignment structure under a Bayesian translation model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '08), Association for Computational Linguistics, Stroudsburg, PA, USA, 314-323.
== Online version ==

== Summary ==
Unlike for word-to-phrase alignments, computing the alignment and count expectations for phrase-to-phrase alignments is generally intractable, due to the exponential number of possible combinations of phrase segmentations and alignments. Thus, previous attempts at building a fully joint phrase alignment model have been unsuccessful.
This paper describes the first tractable phrase-to-phrase Alignment Model, which relies on Gibbs Sampling to tackle the intractability issues.
Tests show translation improvements over Machine Translation Systems built using conventional methods.
== Phrase Alignment Model ==
The phrase-to-phrase alignment model used in this work is presented in Marcu and Wong, EMNLP 2002. An extension is made to allow null phrases, where a phrase does not have an equivalent in the other language. This is done by introducing another multinomial distribution <math>\theta_N</math>, from which unaligned phrases are drawn.
This model involves the observed sentence pairs <math>\mathbf{x}</math>, the latent phrase segmentations and alignments <math>\mathbf{z}</math>, and the parameters <math>\theta</math> (<math>\theta_J</math> and <math>\theta_N</math>), from which phrase pairs are drawn. As with other alignment models, the Expectation Maximization Algorithm is used: during the E-step, the expected counts for phrase pairs are calculated under <math>P(\mathbf{z}|\mathbf{x},\theta)</math> by fixing <math>\theta</math>, and during the M-step a new <math>\theta</math> is calculated from these counts. However, computing the expected counts this way would involve accumulating phrase pair counts under all possible segmentations and alignments <math>\mathbf{z}</math>, a set far too large to enumerate. This was the main drawback of the work in Marcu and Wong, EMNLP 2002.
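To make the E-step quantity concrete, the joint model scores a segmented, aligned sentence pair roughly as follows (a sketch under the notation assumed above; the paper's distortion model is abbreviated here as a factor <math>\phi(\mathbf{z})</math>):

:<math>P(\mathbf{x}, \mathbf{z} \mid \theta) \propto \phi(\mathbf{z}) \prod_{\langle \bar{e}, \bar{f} \rangle \in \mathbf{z}} \theta_J(\langle \bar{e}, \bar{f} \rangle) \prod_{\bar{f} \in \mathrm{null}(\mathbf{z})} \theta_N(\bar{f})</math>

The expected count of a phrase pair is the sum of <math>P(\mathbf{z}|\mathbf{x},\theta)</math> over every <math>\mathbf{z}</math> that contains it, and it is exactly this sum that is intractable to enumerate.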
This work tackles this problem using Gibbs sampling. Rather than iterating over all possible configurations <math>\mathbf{z}</math>, this work generates a sequence of samples that approximates the conditional distribution <math>P(\mathbf{z}|\mathbf{x},\theta)</math>. Then, by averaging the phrase pair counts over those samples, an approximation of the expected counts under <math>P(\mathbf{z}|\mathbf{x},\theta)</math> can be computed.
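A minimal sketch of this Monte Carlo E-step in Python, assuming hypothetical helpers <code>gibbs_step</code> (one pass of the local operators described below) and <code>extract_phrase_pairs</code> (lists the phrase pairs a state uses); neither name comes from the paper:

<pre>
from collections import Counter

def expected_phrase_pair_counts(x, z_init, gibbs_step, extract_phrase_pairs,
                                num_samples=1000, burn_in=100):
    """Approximate the expected phrase pair counts under P(z | x, theta)
    by averaging counts over Gibbs samples.

    gibbs_step(z, x) returns the next sampled state (hypothetical helper).
    extract_phrase_pairs(z) yields the phrase pairs used by state z
    (hypothetical helper).
    """
    counts = Counter()
    z = z_init
    for i in range(burn_in + num_samples):
        z = gibbs_step(z, x)          # apply the local operators once
        if i >= burn_in:              # discard early, unconverged samples
            counts.update(extract_phrase_pairs(z))
    # Averaging the accumulated counts approximates the expectation
    return {pair: c / num_samples for pair, c in counts.items()}
</pre>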
== Gibbs Sampler ==
The Gibbs Sampler presented in this work starts with an initial state <math>\mathbf{z}_0</math>, which sets an initial configuration for the phrase segmentation and alignment. Then, by applying a series of local changes starting from <math>\mathbf{z}_0</math>, a sequence of samples is produced. These samples are guaranteed to approximate the conditional distribution <math>P(\mathbf{z}|\mathbf{x},\theta)</math>.
This work defines a set of operators that can be applied to a state <math>\mathbf{z}_i</math> to generate a new state <math>\mathbf{z}_{i+1}</math>. Each operator freezes the state except for a local region, and chooses among all possible reconfigurations of that region according to the conditional probability of each reconfiguration. This is done so that the sequence of generated samples approximates <math>P(\mathbf{z}|\mathbf{x},\theta)</math>.
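What a single operator does can be sketched as follows (hypothetical names: <code>enumerate_reconfigs</code> and <code>joint_prob</code> stand in for the paper's operator-specific enumeration and the model's joint score):

<pre>
import random

def apply_operator(z, region, enumerate_reconfigs, joint_prob):
    """Apply one local operator: freeze everything outside `region`,
    then resample that region from its conditional distribution.

    enumerate_reconfigs(z, region) lists every legal reconfiguration of
    the region with the rest of z held fixed; joint_prob(z) scores a
    complete state under the model (both are hypothetical helpers).
    """
    candidates = enumerate_reconfigs(z, region)
    weights = [joint_prob(c) for c in candidates]
    # Since all candidates agree outside the region, sampling in
    # proportion to the joint score is exactly sampling from the
    # conditional distribution of the region given the rest.
    return random.choices(candidates, weights=weights, k=1)[0]
</pre>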
== Dirichlet Process Prior ==
One problem that all Phrase Alignment Models share is Model Degeneracy, where the joint model is biased towards longer phrase pairs. This happens because an alignment that uses many short phrase pairs incurs more distortion and generation penalties, since the model multiplies in an additional phrase pair probability for every pair it uses.
This work controls this degenerate behavior by placing a Dirichlet Process Prior over the distribution <math>\theta_J</math>.
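Integrating out <math>\theta_J</math> under such a prior gives a Chinese-restaurant-process style predictive probability for each phrase pair during sampling (a sketch with assumed symbols: concentration parameter <math>\alpha</math>, base distribution <math>P_0</math>, and <math>n</math> the total number of phrase pairs drawn so far; the paper's exact base measure is not reproduced here):

:<math>P(\langle \bar{e}, \bar{f} \rangle \mid \text{previous draws}) = \frac{\mathrm{count}(\langle \bar{e}, \bar{f} \rangle) + \alpha \, P_0(\langle \bar{e}, \bar{f} \rangle)}{n + \alpha}</math>

Because the base distribution can be chosen to decay with phrase length, long and rare phrase pairs become expensive to introduce, which counteracts the degenerate preference for long pairs.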