DeNero et al, EMNLP 2008

== Citation ==

DeNero, J., Bouchard-Côté, A., & Klein, D. (2008). Sampling alignment structure under a Bayesian translation model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '08). Association for Computational Linguistics, Stroudsburg, PA, USA, 314-323.
 
== Online version ==
 
[http://www.cs.berkeley.edu/~bouchard/pub/mtSubmission.pdf pdf]
  
 
== Summary ==
 
Unlike word-to-phrase alignments, computing the alignment and count expectations for phrase-to-phrase alignments is generally intractable due to the exponential growth of possible combinations of phrases and alignments. Because of this, previous attempts to build a fully joint phrase alignment model have been unsuccessful.
  
This [[Category::paper]] describes the first tractable phrase-to-phrase Alignment Model, which relies on [[UsesMethod::Gibbs sampling]] to address the intractability issues.
  
Tests show translation improvements over [[AddressesProblem::Machine Translation]] systems built using traditional methods ([[Word Alignments]] + [[Phrase Extraction]]).
  
== Description of the Method ==

The phrase-to-phrase alignment model used in this work is presented in [[Marcus and Wong, EMNLP 2002]]. An extension is made to allow null phrases, where a phrase does not have an equivalent in the other language. This is done by introducing another multinomial distribution <math>\theta_N</math>, from which unaligned phrase pairs are drawn.
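
Concretely, each phrase pair <math>[t,s]</math> is drawn from a mixture of the null and joint multinomials, where <math>p_N</math> is a fixed null-phrase probability (set to a very small value, <math>10^{-10}</math>, to discourage unaligned phrases):

<math>
P_M([t,s]) = p_N\theta_N([t,s]) + (1 - p_N)\theta_J([t,s])
</math>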
  
This model involves the observed sentence pairs <math>x</math>, the latent phrase segmentations and alignments <math>z</math>, and the parameters <math>\theta</math> (<math>\theta_N</math> and <math>\theta_J</math>), from which phrase pairs are drawn. As with other alignment models, the [[UsesMethod::Expectation Maximization]] algorithm is used: during the E-step the expected counts for phrase pairs are computed under <math>P(z|x,\theta)</math> with <math>\theta</math> fixed, and during the M-step a new <math>\theta</math> is estimated from these counts. However, computing the expected counts exactly would require summing over all possible segmentations and alignments <math>z</math>, a space far too large to enumerate. This was the main drawback of the work in [[Marcus and Wong, EMNLP 2002]].
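
Writing <math>c_z([t,s])</math> for the number of times the phrase pair <math>[t,s]</math> is used in a particular segmentation and alignment <math>z</math>, the quantity needed in the E-step is

<math>
E_{P(z|x,\theta)}[c([t,s])] = \sum_{z} P(z|x,\theta)\, c_z([t,s])
</math>

and the number of terms in this sum grows exponentially with sentence length, which is why exact computation is infeasible.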
  
This work addresses this problem using [[UsesMethod::Gibbs sampling]]. Rather than iterating over all possible <math>z</math>, this work generates a sequence of samples that approximates the conditional distribution <math>P(z|x,\theta)</math>. Then, by averaging phrase pair counts over those samples, an approximation of the expected counts under <math>P(z|x,\theta)</math> can be computed. In this work, these counts are used directly to calculate the translation features, rather than treating the [[Word Alignments]] as a separate step.
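
With samples <math>z^{(1)},\dots,z^{(N)}</math> from the sampler, the expected counts are approximated by the sample average, and a translation feature such as the phrase translation probability is then the usual relative frequency:

<math>
E_{P(z|x,\theta)}[c([t,s])] \approx \frac{1}{N}\sum_{i=1}^{N} c_{z^{(i)}}([t,s]), \qquad \phi(t|s) = \frac{c(s,t)}{\sum_{t'} c(s,t')}
</math>

where <math>c_{z^{(i)}}([t,s])</math> counts the occurrences of the pair in sample <math>z^{(i)}</math>, and <math>c(s,t)</math> is the resulting (expected) count of the phrase pair with source <math>s</math> and target <math>t</math>.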
  
The Gibbs sampler presented in this work starts with an initial state <math>z_0</math>, which sets an initial configuration for the phrase segmentation and alignment. Then, by applying a set of local changes starting from <math>z_0</math>, a sequence of samples <math>z_i,z_{i+1},\dots</math> is produced. These samples are guaranteed to approximate the conditional distribution <math>P(z|x,\theta)</math>.
  
This work defines a set of operators that can be applied to a state <math>z_i</math> to generate a new state <math>z_{i+1}</math>. Each operator freezes the state <math>z_i</math> except for a local region, and chooses a reconfiguration of that region, among all possible reconfigurations, according to the conditional probability of each one. This is done so that the sequence of generated samples approximates <math>P(z|x,\theta)</math>.
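
The overall procedure can be sketched as follows. This is only an illustrative outline of the sampling loop described above, not the paper's implementation: the operators, the region resampling routine, and the count function are assumed to be supplied by the caller, and all names are hypothetical.

<pre>
from collections import Counter

def approximate_expected_counts(z0, operators, resample_region, phrase_pair_counts,
                                num_iterations=100, burn_in=10):
    """Approximate expected phrase pair counts under P(z | x, theta)
    by averaging counts over a sequence of Gibbs samples."""
    z = z0                                   # z_0: initial segmentation/alignment state
    totals, num_samples = Counter(), 0
    for it in range(num_iterations):
        # One sweep: apply every operator to every local region it defines,
        # freezing the rest of the state and resampling only that region
        # in proportion to the conditional probability of each reconfiguration.
        for op in operators:
            for region in op(z):
                z = resample_region(z, region)
        if it >= burn_in:                    # discard early, poorly mixed samples
            totals.update(phrase_pair_counts(z))
            num_samples += 1
    # Monte Carlo estimate of the expected phrase pair counts
    return {pair: count / num_samples for pair, count in totals.items()}
</pre>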
  
Finally, every phrase-to-phrase alignment model has to deal with degeneracy issues: the joint model penalizes segmentations that use many small phrases, because each additional phrase incurs additional generation and distortion penalties. As a result, maximum likelihood estimation puts its weight on segmentations with a single phrase pair that translates the whole source sentence into the whole target sentence, which fits the training corpus perfectly but is unlikely to be useful for decoding sentences outside the training corpus. This work addresses the problem by defining a Dirichlet process prior over phrase pairs, selecting a base measure that prefers shorter phrase pairs and only uses longer phrase pairs when there is clear evidence for them.
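
Schematically, the prior takes the form

<math>
\theta_J \sim \mathrm{DP}(\alpha, P_0)
</math>

where <math>\alpha</math> is the concentration parameter and <math>P_0</math> is the base measure over phrase pairs. The exact base measure used in the paper is more detailed than shown here; the intent is that <math>P_0</math> assigns mass that decays with the total length <math>|t|+|s|</math> of a phrase pair, so that longer pairs are only retained when the data strongly supports them.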
 
  
== Experimental Results ==
 
  
This work was tested using the setup from the [http://www.statmt.org/wmt07/shared-task.html Statistical Machine Translation Workshop shared task], where the model is trained on the Spanish-English portion of the [[UsesDataset::EUROPARL]] dataset. Using the expected counts to calculate the translation features, considerable improvements are observed in terms of translation quality, measured with BLEU and METEOR, over a baseline trained using [http://www.statmt.org/moses/ Moses], where the features are calculated from fixed [[Word Alignments]].
  
== Related Work ==
 
The work in [[Marcus and Wong, EMNLP 2002]] describes a joint probability distribution, which is used and extended in this work.
  
Similar approaches to address the intractability of summing over all possible phrase segmentations and alignments have been tested using algorithms such as [[Hill Climbing]] in [[Birch et al, StatMT 2006]].
  
== Comment ==
 
Hey, if you want to see more work in word alignment, this recent paper is interesting. It may be an interesting contrast because it does not use Gibbs sampling for inference. C. Dyer, J. H. Clark, A. Lavie, and N. A. Smith. [http://www.aclweb.org/anthology/P/P11/P11-1042.pdf Unsupervised Word Alignment with Arbitrary Features.] --[[User:Brendan|Brendan]] 18:36, 13 October 2011 (UTC)
 
 