This is a meta-learning [[Category::method]] that deals with the mismatch between training and testing data for sequential models, proposed in [[RelatedPaper::Cohen and Carvalho, 2005]]. It stacks two stages of prediction, where the second stage makes use of the results of the first stage.

== Motivation ==
Consider the general form of sequential prediction, in which we need to predict the label sequence <math>\mathbf{y} = (y_1, \ldots, y_n)</math> given the observation sequence <math>\mathbf{x} = (x_1, \ldots, x_n)</math>. The prediction of one label <math>y_i</math> depends on neighboring labels, typically <math>y_{i-1}</math> and <math>y_{i+1}</math>. During training, the true neighboring labels are available; during testing, however, <math>y_i</math> must be predicted from the '''predicted''' neighboring labels. Because the model's assumptions never match reality exactly, the distribution of the predicted labels differs from that of the true labels, and this '''mismatch''' can degrade performance.

The solution is a two-stage approach: the first stage trains a base classifier as usual, while the second stage trains another classifier on examples augmented with the first stage's ''predicted'' labels, so that it can learn to correct the first stage's mistakes. To keep these predictions realistic, the predicted labels for the training data are obtained with cross-validation rather than by applying the first-stage classifier to its own training data.

== Algorithm ==

[[File:Stack_Algorithm.png]]
== Graphical Model Representation ==

The spirit of stacked sequential learning is best visualized with a graphical model representation. The following figure shows a stacked maxent model. In the first stage, a predicted label <math>\hat{y}_i</math> is generated for each observation <math>x_i</math> separately. In the second stage, the final label <math>y_i</math> is generated from the predicted label <math>\hat{y}_i</math> for that observation as well as the neighboring predicted labels <math>\hat{y}_{i-1}</math> and <math>\hat{y}_{i+1}</math>. Note that the base classifiers (maxent classifiers) are not even sequential; the "sequentialness" of the stacked model is provided by the links from <math>\hat{y}_{i-1}</math> and <math>\hat{y}_{i+1}</math> to <math>y_i</math>.

[[File:Stack_MaxEnt_MaxEnt.png]]

== Variations ==
* Any type of base classifier <math>A</math> can be used. A stacked maxent classifier is shown above, and below is a stacked MEMM.

[[File:Stack_MEMM_MEMM.png]]

* Although in the algorithm above the two stages use the same base classifier <math>A</math>, they do not have to be; the graphical model can also be either directed or undirected. For example, the figure below shows a stacked model where the first stage is a CRF (undirected) and the second stage is a maxent classifier (directed).

[[File:Stack_CRF_MaxEnt.png]]

* The window <math>[-W_h, W_f]</math> can be made arbitrarily large. For example, below is a stacked maxent classifier with <math>W_h = W_f = 2</math>.

[[File:Stack_MaxEnt_MaxEnt_2.png]]

* The prediction for <math>\hat{y}_i</math> and <math>y_i</math> can also depend on multiple observations. For example, in [[RelatedPaper::Krishnan 2006 an effective two stage model for exploiting non local dependencies in named entity recognition|Krishnan and Manning, 2006]], both <math>\hat{y}_i</math> and <math>y_i</math> depend on three observations: <math>x_{i-1}</math>, <math>x_i</math>, and <math>x_{i+1}</math>.

[[File:Stack_Multiple_X.png]]

* An aggregate layer <math>z_i</math> can be inserted between <math>\hat{y}_i</math> and <math>y_i</math>. The features <math>z_i</math> are calculated across all <math>\hat{y}_i</math> and are used by the second-stage classifier. This is also used in [[RelatedPaper::Krishnan 2006 an effective two stage model for exploiting non local dependencies in named entity recognition|Krishnan and Manning, 2006]]; a small sketch of one such aggregate feature is given after this list.

[[File:Stack_Aggregate_Layer.png]]

* It is possible to stack three stages of classifiers, or even more.

* The idea of stacking is not restricted to sequential learning, but can also be applied to general graphical learning, as in [[RelatedPaper::Z._Kou_and_W._Cohen._SDM_2007|Kou and Cohen, 2007]].
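
As a concrete illustration of the aggregate-layer variation, below is a minimal sketch of one possible aggregate feature: the document-level majority predicted label for each token string, loosely in the spirit of Krishnan and Manning, 2006. The function name and the exact choice of feature are illustrative assumptions, not the construction used in that paper.

<pre>
# Sketch of an aggregate feature z_i: for every token string, the most frequent
# label predicted for it anywhere in the document. The second-stage classifier
# can then use (features of x_i, yhat_i, z_i) as its input.
from collections import Counter, defaultdict

def aggregate_majority_labels(tokens, yhat):
    """tokens: list of token strings; yhat: first-stage predicted labels."""
    counts = defaultdict(Counter)
    for tok, lab in zip(tokens, yhat):
        counts[tok][lab] += 1
    # z_i is the majority predicted label of token i across the whole document.
    return [counts[tok].most_common(1)[0][0] for tok in tokens]

# Example: the last "Rome" is predicted "O" by the first stage, but its
# aggregate feature is "LOC", which the second stage can use to relabel it
# consistently with the other mentions.
tokens = ["Rome", "is", "in", "Italy", ".", "Rome", "is", "old", ".", "Rome"]
yhat   = ["LOC",  "O",  "O",  "LOC",   "O", "LOC",  "O",  "O",   "O", "O"]
print(aggregate_majority_labels(tokens, yhat))
# ['LOC', 'O', 'O', 'LOC', 'O', 'LOC', 'O', 'O', 'O', 'LOC']
</pre>
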
== Time Complexity ==
During training, we need to train <math>K+1</math> base classifiers for the first stage (<math>K</math> for cross-validation and one trained on the entire corpus), plus one base classifier for the second stage. This is <math>O(K)</math> times the training time of a single base classifier. When training the base classifier is expensive, the overhead can be minimized by choosing <math>K=2</math>.

During testing, each of the two classifiers only needs to be run once, so the overhead over a single non-stacked classifier is small.

== Applications ==

* [[RelatedPaper::Cohen and Carvalho, 2005]] applies stacked non-sequential maxent classifiers and stacked CRFs to a [[AddressesProblem::Sequence Partitioning]] problem (identifying the signature section of emails) and a [[AddressesProblem::Sequence Classification]] problem (classifying music as happy or sad). Results show that stacked classifiers outperform their non-stacked counterparts, and stacked maxent classifiers even outperform non-stacked CRFs.

* [[RelatedPaper::Krishnan 2006 an effective two stage model for exploiting non local dependencies in named entity recognition|Krishnan and Manning, 2006]] uses stacked CRFs to model non-local dependencies for [[AddressesProblem::Named Entity Recognition]]. This paper uses the variation with an aggregate layer: the layer calculates the most frequent predicted label for each entity, which is used as an input feature for the second stage. The second stage then uses this information to encourage different occurrences of the same entity to receive identical labels.

* [[RelatedPaper::Z._Kou_and_W._Cohen._SDM_2007|Kou and Cohen, 2007]] generalizes stacked sequential learning to '''stacked graphical learning''', in which the observations and labels do not need to form a sequence but can have arbitrary graphical relations. Stacked graphical learning is applied to three problems: [[AddressesProblem::text region detection in images]], [[AddressesProblem::Document Classification]], and [[AddressesProblem::Named Entity Recognition]].