Lacoste-Julien et al, NAACL 2006

Revision as of 14:25, 2 November 2011

Being edited by Rui Correia

Citation

Simon Lacoste-Julien, Ben Taskar, Dan Klein, and Michael I. Jordan. Word alignment via quadratic assignment. In Human Language Technology–North American Association for Computational Linguistics, New York, NY, 2006. On-line

Summary

In this paper the authors address two major limitations of the Taskar et al. (2005) discriminative word alignment model: the lack of fertility modeling and the lack of first-order interactions between alignments.

The authors show how to extend the discriminative approach to word alignment to allow fertility modeling and to capture first-order interactions between alignments of consecutive words, enhancing its expressive power. This lets the model capture monotonicity, local inversion and contiguous fertility trends, phenomena that are highly informative for alignment, while remaining computationally efficient.

Their best model achieves a relative AER reduction of 25% over the basic matching formulation, beating intersected IBM Model 4 without the use of any compute-intensive features. The authors claim their contribution is particularly promising for phrase-based systems, since their models achieve higher recall on the task.

Baseline Method

The authors base their work on a previous discriminative approach to word alignment by Taskar et al. (2005). In this model, nodes j and k correspond to words in the source and target language, respectively, and edges (j, k) correspond to alignments between words. The edge weight s_{jk} represents the degree to which word j in one sentence can be translated using word k in the other sentence. The predicted alignments are then chosen by maximizing the sum of the edge scores, which can be formulated as a linear program:

      \max_{0 \le z \le 1} \sum_{j,k} s_{jk} z_{jk}
          \text{s.t.} \quad \sum_j z_{jk} \le 1 \;\forall k, \qquad \sum_k z_{jk} \le 1 \;\forall j

where z_{jk} \in [0, 1] are relaxations of the binary variables that indicate whether source word j is assigned to target word k.
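The matching formulation above can be sketched with an off-the-shelf LP solver. The following is a minimal illustration using scipy.optimize.linprog; the 3x3 score matrix is invented for the example, not taken from the paper.

```python
# A minimal sketch of the baseline matching LP, solved with
# scipy.optimize.linprog. The 3x3 score matrix is illustrative only,
# not taken from the paper.
import numpy as np
from scipy.optimize import linprog

s = np.array([[3.0, 1.0, 0.2],   # s[j, k]: score for aligning source
              [0.5, 2.0, 0.3],   # word j to target word k
              [0.1, 0.4, 2.5]])
J, K = s.shape

# Variables z[j, k] in [0, 1], flattened row-major; linprog minimizes,
# so the scores are negated to maximize the total alignment score.
c = -s.ravel()

# Matching constraints: each word aligns to at most one word.
A_ub, b_ub = [], []
for j in range(J):                 # sum_k z[j, k] <= 1
    row = np.zeros(J * K)
    row[j * K:(j + 1) * K] = 1.0
    A_ub.append(row)
    b_ub.append(1.0)
for k in range(K):                 # sum_j z[j, k] <= 1
    row = np.zeros(J * K)
    row[k::K] = 1.0
    A_ub.append(row)
    b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0.0, 1.0))
z = res.x.reshape(J, K).round(3)
print(z)  # integral 0/1 alignment: the matching polytope has integral vertices
```

Because the matching constraint matrix is totally unimodular, the LP relaxation returns an integral alignment here (the diagonal, with total score 7.5) without any explicit integrality constraints.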

Fertility

In the baseline model described above, a word can align to at most one word in the translation, which is inadequate in cases such as backbone and épine dorsale (in French). A first idea would be to increase the right-hand side of the constraints in the baseline model from 1 to D, where D is the maximum allowed fertility. However, maximum-weight solutions would then have all words with fertility either 0 or D.

The authors instead define a means of encouraging the common case of low fertility, allowing higher fertilities only when licensed, by introducing penalties for higher fertilities into the model. The penalties are modeled by introducing variables f_{jd} and f_{kd} (meaning that node j (or k) has fertility of at least d). The model formulation in linear programming becomes:

      \max_{0 \le z, f \le 1} \sum_{j,k} s_{jk} z_{jk} - \sum_j \sum_{d=2}^{D} \lambda_{jd} f_{jd} - \sum_k \sum_{d=2}^{D} \lambda_{kd} f_{kd}
          \text{s.t.} \quad \sum_k z_{jk} \le 1 + \sum_{d=2}^{D} f_{jd} \;\forall j, \qquad \sum_j z_{jk} \le 1 + \sum_{d=2}^{D} f_{kd} \;\forall k

where \sum_{d=2}^{D} \lambda_{jd} f_{jd} represents the penalty for the fertility of node j, and each \lambda_{jd} is the penalty increment from increasing the fertility from d - 1 to d.
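A toy instance of this fertility mechanism can be written as a tiny LP. In this sketch (scores and penalty invented for illustration) one source word may cover two target words, paying the fertility penalty only when the extra edge score justifies it:

```python
# A toy sketch of the fertility-augmented LP: one source word ("backbone")
# may cover two target words ("épine", "dorsale") when the extra edge score
# outweighs the fertility penalty. Scores and penalty are illustrative.
import numpy as np
from scipy.optimize import linprog

s = np.array([2.0, 1.8])  # scores for backbone->épine and backbone->dorsale
lam = 0.5                 # penalty increment for raising fertility from 1 to 2

# Variables [z0, z1, f]: f = 1 iff the source word has fertility >= 2.
c = np.array([-s[0], -s[1], lam])     # linprog minimizes, so negate gains
A_ub = np.array([[1.0, 1.0, -1.0]])   # z0 + z1 <= 1 + f
b_ub = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, 1.0))
z0, z1, f = res.x
print(round(z0), round(z1), round(f))  # 1 1 1: both edges kept, penalty paid
```

Keeping both edges earns 3.8 - 0.5 = 3.3, which beats the baseline's best single edge (2.0), so the solver raises the fertility indicator and takes both alignments.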

First-Order Interaction

The second major limitation of the baseline model this paper addresses is the fact that the edges only interact indirectly, through the competition induced by the constraints, in contrast with directly modeling correlations between alignments of consecutive words (such as in Vogel et al., COLING 1996).

The authors suggest this can be accomplished by modeling common local alignment configurations, adding bonuses for pairs of edges. They consider three examples:

  • encouraging strictly monotonic alignments
  • allowing local inversion (e.g. of nouns and adjectives)
  • allowing a word in one language to be translated as a phrase in the other language

Formally, variables z_{jk,lm}, with corresponding scores s_{jk,lm}, are added to the model, indicating whether both edge jk and edge lm are in the alignment. Additionally, the pair of constraints z_{jk,lm} \le z_{jk} and z_{jk,lm} \le z_{lm} is added to enforce the semantics z_{jk,lm} = z_{jk} \wedge z_{lm}. In linear programming:

      \max_{0 \le z \le 1} \sum_{j,k} s_{jk} z_{jk} + \sum_{jk,lm} s_{jk,lm} z_{jk,lm}
          \text{s.t.} \quad \sum_j z_{jk} \le 1 \;\forall k, \qquad \sum_k z_{jk} \le 1 \;\forall j
          \qquad z_{jk,lm} \le z_{jk}, \qquad z_{jk,lm} \le z_{lm}

where the pairwise scores s_{jk,lm} are bonuses, so the upper-bound constraints suffice to enforce the conjunction at the optimum.
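The effect of a pairwise bonus can be seen in a small LP sketch. In this illustrative 2x2 example (all numbers invented), the base edge scores alone prefer the anti-diagonal alignment, but a monotonicity bonus on the edge pair (0->0, 1->1) flips the solution:

```python
# A toy sketch of the first-order interaction LP on a 2x2 sentence pair:
# the base scores alone prefer the anti-diagonal alignment, but a
# monotonicity bonus on the edge pair (0->0, 1->1) flips the solution.
# All numbers are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

# Variables [z00, z01, z10, z11, p]; p indicates that both z00 and z11
# are in the alignment and collects the monotonicity bonus.
scores = np.array([1.0, 1.2, 1.2, 1.0])   # edge scores, flattened row-major
bonus = 0.6                               # pairwise bonus for the edge pair
c = -np.concatenate([scores, [bonus]])    # linprog minimizes

A_ub = np.array([
    [1, 1, 0, 0, 0],     # row 0: z00 + z01 <= 1
    [0, 0, 1, 1, 0],     # row 1: z10 + z11 <= 1
    [1, 0, 1, 0, 0],     # col 0: z00 + z10 <= 1
    [0, 1, 0, 1, 0],     # col 1: z01 + z11 <= 1
    [-1, 0, 0, 0, 1],    # p <= z00
    [0, 0, 0, -1, 1],    # p <= z11
], dtype=float)
b_ub = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, 1.0))
print(res.x.round(3))  # diagonal plus its bonus: 1.0 + 1.0 + 0.6 = 2.6 > 2.4
```

Note that p carries a positive bonus, so the two upper-bound constraints alone are enough to make p behave as the conjunction of z00 and z11 at the optimum.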

Experimental Results

The authors applied their models to the English-French Hansards dataset, comparing the baseline method with its successive augmentations and achieving a 25% relative AER improvement over the baseline. The figure below shows Precision, Recall and Alignment Error Rate for the baseline model alone (M), baseline and fertility (M + F), baseline and interactions (M + Q), baseline, fertility and interactions (M + F + Q), and adding the IBM Model 4 predictions (M + F + Q + IBM4).

Lacostejulienresults.png