Lau et al HLT 1993

== Citation ==

Raymond Lau, Ronald Rosenfeld and Salim Roukos. Adaptive Language Modeling Using the Maximum Entropy Principle. In Proceedings of the ARPA Human Language Technology Workshop, published as Human Language Technology, pages 108–113. Morgan Kaufmann, March 1993.

== Online version ==

ACL WEB

== Summary ==
In this [[Category::paper]] the authors focus on the development of [[AddressesProblem::Language Modeling|Language Models]] using the [[UsesMethod::Maximum entropy|Maximum Entropy]] Principle, in order to combine evidence from multiple sources (for example: trigrams and long distance triggers).
  
The state-of-the-art language model at the time the paper was written was a "static" trigram model, where the probability of the next word is estimated by looking only at the two words that precede it. By definition, once such a model is trained it always produces the same estimate for the same two-word history, and so it cannot "adapt" to different texts and contexts. In this paper the authors develop an Adaptive Model, i.e., a model that changes its estimates as a result of "seeing" some of the text. This allows it to process large heterogeneous data sources (different writing styles and/or topics) and does not require it to be trained in the same domain in which it will be used.
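To make the "static" point concrete, here is a minimal sketch of a maximum-likelihood trigram model (not taken from the paper, which would also use smoothing and backoff): once the counts are collected, the estimate depends only on the two preceding words, so it never adapts to the document being processed.

<syntaxhighlight lang="python">
from collections import defaultdict

class StaticTrigramLM:
    """Maximum-likelihood trigram model: P(w | w1, w2) comes from counts only."""

    def __init__(self):
        self.bigram_counts = defaultdict(int)   # counts of (w1, w2)
        self.trigram_counts = defaultdict(int)  # counts of (w1, w2, w)

    def train(self, tokens):
        # Collect bigram and trigram counts from the training text.
        for w1, w2, w in zip(tokens, tokens[1:], tokens[2:]):
            self.bigram_counts[(w1, w2)] += 1
            self.trigram_counts[(w1, w2, w)] += 1

    def prob(self, w1, w2, w):
        # The same (w1, w2) history always yields the same estimate,
        # no matter which document is currently being processed.
        denom = self.bigram_counts.get((w1, w2), 0)
        if denom == 0:
            return 0.0  # a real model would back off / smooth here
        return self.trigram_counts.get((w1, w2, w), 0) / denom
</syntaxhighlight>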
+
The way this domain variation is captured in the model is through the concept of "Trigger Pairs". If a word sequence <math>A</math> is significantly correlated with another word sequence <math>B</math>, then <math>A \rarr B</math> is considered a trigger pair. This phenomenon can be seen in the figure below, where the probability of the word "WINTER" is shown as a function of the number of previous occurrences of the word "SUMMER".

[[File:Triggerpairs.png]]
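As a rough illustration of the kind of correlation the figure describes, the sketch below compares the probability of a word <math>B</math> in documents where a trigger word <math>A</math> has already occurred with its overall probability. The simple co-occurrence counting here is an assumption of this summary, not the paper's actual trigger-selection procedure.

<syntaxhighlight lang="python">
def trigger_correlation(documents, a, b):
    """Compare P(b occurs | a occurred earlier in the document) with P(b occurs).

    `documents` is a list of token lists. This co-occurrence count is only meant
    to illustrate the idea of a trigger pair A -> B.
    """
    docs_with_b = 0
    docs_with_a = 0
    docs_with_a_then_b = 0
    for tokens in documents:
        docs_with_b += b in tokens
        if a in tokens:
            docs_with_a += 1
            # b counts as "triggered" only if it appears after the first a
            first_a = tokens.index(a)
            docs_with_a_then_b += b in tokens[first_a + 1:]
    p_b = docs_with_b / len(documents)
    p_b_given_a = docs_with_a_then_b / docs_with_a if docs_with_a else 0.0
    return p_b, p_b_given_a
</syntaxhighlight>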
The adaptive model developed in this paper is a way to combine several estimates <math>P(w|h)</math> derived from several triggers, where <math>h</math> is the section of the document processed so far and <math>w</math> is the next word to predict. The authors used a Maximum Likelihood/Maximum Entropy (ML/ME) model and reformulated triggers as constraints. Three types of constraints were considered: trigger pairs, n-grams, and self-triggers (where the trigger and the triggered word are the same, i.e., <math>A \rarr A</math>).

The authors compared the existing state-of-the-art static model with their new approach and obtained improvements of 27%. They claim that, since they are using ME, their solution is simple, intuitive and general: it can accommodate new factors simply by reformulating them as constraints, and it can reuse information from previously used models (such as the static one). Additionally, the method used to satisfy the constraints, [[UsesMethod::Generalized Iterative Scaling|Generalized Iterative Scaling]], is guaranteed to converge to a unique ME solution and allows incremental adaptation, since new constraints can be added at any time. However, it is very expensive and requires the constraints to be consistent (which may not hold if the constraints are derived from data other than the training set, or are externally imposed).

The way the authors encoded the previously mentioned self-triggers does not take into account the number of times the word has previously occurred, which is significant for the problem, as one can see in the figure above. They leave modeling the frequency and distance of occurrences as future work.
== Brief Description of the Method ==

The Maximum Entropy Principle is a common technique used to combine several different knowledge sources into a single estimate. The method reformulates the different estimates as constraints on the expectations of various functions, to be satisfied by the combined estimate, and then chooses, among the probability distributions that satisfy these constraints, the one with the highest entropy.

Formally, given a general event space <math>\{x\}</math>, to derive a combined probability function <math>P(x)</math>, each constraint <math>i</math> is associated with a constraint function <math>f_i(x)</math> and a desired expectation <math>c_i</math>. The constraint is then written as:
 
<math>
\sum_{x} P(x) f_i(x) = c_i
</math>

Given consistent constraints, a unique ME solution is guaranteed to exist, and to be of the form:

<math>
P(x) = \prod_i \mu_i^{f_i(x)}
</math>

where the <math>\mu_i</math>'s are some unknown constants, to be found. To search this exponential family for the <math>\mu_i</math>'s that will make <math>P(x)</math> satisfy all the constraints, the authors used an iterative algorithm, [[UsesMethod::Generalized Iterative Scaling|Generalized Iterative Scaling]], which is guaranteed to converge to the solution.
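The following is a minimal sketch of Generalized Iterative Scaling over a small, explicit event space, assuming non-negative constraint functions and consistent targets; a "slack" feature is added so that the feature values of every event sum to the same constant, as GIS requires. It only illustrates the multiplicative update, not the paper's large-scale implementation.

<syntaxhighlight lang="python">
from math import prod

def generalized_iterative_scaling(events, feature_fns, targets, iters=200):
    """Fit P(x) proportional to prod_i mu_i**f_i(x) so that E_P[f_i] = targets[i].

    events      : list of hashable outcomes x (a small, explicit event space)
    feature_fns : list of functions f_i(x) returning non-negative values
                  (at least one must fire on some event)
    targets     : desired expectations c_i, assumed mutually consistent
    """
    # GIS assumes sum_i f_i(x) equals the same constant C for every x;
    # add a slack feature (and its implied target) so that this holds.
    raw_sums = [sum(f(x) for f in feature_fns) for x in events]
    C = max(raw_sums)
    fns = feature_fns + [lambda x: C - sum(f(x) for f in feature_fns)]
    tgts = list(targets) + [C - sum(targets)]

    mus = [1.0] * len(fns)
    probs = [1.0 / len(events)] * len(events)
    for _ in range(iters):
        # Distribution currently induced by the mu_i's.
        weights = [prod(mu ** f(x) for mu, f in zip(mus, fns)) for x in events]
        Z = sum(weights)
        probs = [w / Z for w in weights]
        # Model expectation of each feature under the current distribution.
        expect = [sum(p * f(x) for p, x in zip(probs, events)) for f in fns]
        # Multiplicative GIS update: pull each expectation toward its target.
        mus = [mu * (t / e) ** (1.0 / C) if e > 0 else mu
               for mu, t, e in zip(mus, tgts, expect)]
    return mus, dict(zip(events, probs))
</syntaxhighlight>

For instance, supplying indicator constraint functions such as the trigger and bigram features defined later in this section, together with their empirical expectations, would yield the maximum-entropy distribution satisfying those constraints.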
In the specific case of Language Modeling, the desired probability estimate is <math>P(h,w)</math>, and <math>\hat P(h,w)</math> denotes the empirical distribution of the training data. Taking the general equation for a constraint and adapting it to this problem, one ends up with:
  
<math>
\sum_h \hat P(h) \sum_w P(w|h) f_i(h, w) = c_i
</math>
  
where <math>f_i(h, w)</math> is any constraint function and <math>c_i</math> its desired expectation. The only missing piece is the set of constraint functions for the three types of triggers. For a general trigger pair <math>A \rarr B</math>, the authors define the constraint function <math>f_{A \rarr B}</math> as:

<math>
f_{A \rarr B} (h,w) =
\begin{cases}
1, & \mbox{if } A \in h \mbox{ and } w = B \\
0, & \mbox{otherwise}
\end{cases}
</math>
  
To incorporate the previous static model, the authors formulated unigram, bigram and trigram constraint functions to fit the ML/ME paradigm. For instance, for bigrams:

<math>
f_{w_1, w_2} (h,w) =
\begin{cases}
1, & \mbox{if } h \mbox{ ends in } w_1 \mbox{ and } w = w_2 \\
0, & \mbox{otherwise}
\end{cases}
</math>
  
Similarly, the authors integrated the "bursty" nature of language, i.e., the fact that once an infrequent word occurs in a document, the probability of its reoccurrence is significantly elevated, simply as a self-trigger pair <math>A \rarr A</math>.
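Putting the three kinds of constraints side by side, the sketch below writes each as an indicator function <math>f_i(h, w)</math> over a (history, next word) pair, together with the empirical expectation that serves as its target <math>c_i</math>. The function names and the list-based history are assumptions of this summary, not the paper's implementation.

<syntaxhighlight lang="python">
def trigger_feature(a, b):
    """f_{A->B}(h, w) = 1 if A occurred in the history and the next word is B."""
    return lambda h, w: 1 if a in h and w == b else 0

def bigram_feature(w1, w2):
    """f_{w1,w2}(h, w) = 1 if the history ends in w1 and the next word is w2."""
    return lambda h, w: 1 if h and h[-1] == w1 and w == w2 else 0

def self_trigger_feature(a):
    """Self-trigger A -> A: captures the 'bursty' reoccurrence of a word."""
    return trigger_feature(a, a)

def empirical_expectation(f, corpus):
    """Target c_i: the average value of f over (history, next word) pairs of the
    training data, i.e. its expectation under the empirical distribution."""
    pairs = [(tokens[:i], tokens[i]) for tokens in corpus for i in range(1, len(tokens))]
    return sum(f(h, w) for h, w in pairs) / len(pairs)
</syntaxhighlight>

These features and their empirical targets are exactly the kind of input a GIS-style fitting routine, such as the one sketched above, would take.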
  
 
== Experimental Results ==

The baseline static trigram model was tested against the authors' adaptive solutions. Four formulations were used: ML/ME with the best 3 triggers for each word and ML/ME with the best 6 triggers for each word, each evaluated with and without the incorporation of the static trigram model. Each model was trained on 5 million words of [[UsesDataset::Wall Street Journal]] text, using DARPA's 20k-word vocabulary. The results can be seen in the following table:
 
[[File:resultslau.png]]

As one can see, there are improvements when the number of triggers considered for each word is increased, and further improvements when the adaptive model incorporates the static trigram model. However, the authors do not discuss the significance of these results, and since no figures are given for larger numbers of triggers per word, one cannot predict how that choice affects the results.
 
== Related Papers ==

The authors show the importance of self-triggers, stating that they are responsible for a large part of the reduction in perplexity and proved to be robust (they maintain their correlations across different domains), in [[RelatedPaper::Rosenfield and Huang, 1992]].
