Empirical Risk Minimization (ERM) is a method proposed by Bahl et al. 1988 in "A new algorithm for the estimation of hidden Markov model parameters".
In graphical models, the true distribution is always unknown. Instead of maximizing the likelihood on the training data when estimating the model parameter θ, we can alternatively minimize the empirical risk, computed by averaging the loss L over the training examples. ERM was widely used in speech recognition (Bahl et al., 1988) and machine translation (Och, 2003). The ERM estimation method has the following advantages:
- Maximum likelihood might overfit to the training distribution; ERM can help prevent overfitting the training data.
- Log likelihood is not the same as accuracy on the test set, whereas ERM directly optimizes the measure used at evaluation time (the loss function can be L1 loss, mean squared error, F-measure, conditional log-likelihood, or other things).
- Summing up and averaging the local conditional likelihoods might be more resilient to errors than calculating the product of conditional likelihoods. (A small numeric sketch of the empirical risk follows below.)
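To make the definition concrete, here is a minimal sketch of the empirical risk as an average of per-example losses. The squared-error loss and the one-parameter linear predictor are illustrative assumptions, not choices made in the papers above.

    # Minimal sketch: empirical risk = average per-example loss.
    # The linear predictor and squared-error loss are illustrative assumptions.

    def predict(theta, x):
        # Hypothetical one-parameter linear model f(x; theta) = theta * x.
        return theta * x

    def squared_loss(y_true, y_pred):
        return (y_true - y_pred) ** 2

    def empirical_risk(theta, data):
        # R_emp(theta) = (1/m) * sum_i L(y_i, f(x_i; theta))
        return sum(squared_loss(y, predict(theta, x)) for x, y in data) / len(data)

    # Toy training pairs (x_i, y_i), roughly following y = 2x.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
    print(empirical_risk(2.0, data))  # small risk near the data-generating parameter
    print(empirical_risk(0.0, data))  # much larger risk far from it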
Motivation
A standard training method for probabilistic graphical models often involves using Expectation Maximization (EM) for Maximum a Posteriori (MAP) training, together with approximate inference and approximate decoding. However, using approximate inference with the same update equations as in the exact case might lead to divergence of the learner (Kulesza and Pereira, 2008). Secondly, the structure of the model itself might be too simple, so that the true distribution cannot be characterized by any model parameter θ. Moreover, even if the model structure is correct, MAP training on the training data might not give us the correct θ.
ERM argues that minimizing the risk is the most appropriate way of training, since the ultimate goal of the task is to directly optimize performance on the true evaluation measure. In addition, studies (Smith and Eisner, 2006) have shown that maximizing log likelihood using EM does not guarantee consistently high accuracy for evaluations in NLP tasks. As a result, minimizing the empirical risk (the observed errors on the training data) can be an attractive alternative method for training graphical models.
The Standard MLE Learning Method
Assume we use θ to represent the model parameter. The task of training is to set the most appropriate θ that represents the true distribution of the data. For graphical models, given m training data pairs (x_i, y_i), the standard method is to maximize the following log likelihood:

    θ̂ = argmax_θ Σ_{i=1}^{m} log p(y_i | x_i; θ)

where log p(y_i | x_i; θ) represents the conditional log-likelihood of the training pair (x_i, y_i).
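As a hedged illustration of conditional maximum likelihood, the sketch below fits a one-parameter logistic model p(y = 1 | x; θ) = sigmoid(θx) by gradient ascent on the conditional log-likelihood. The model, learning rate, and step count are assumptions made for the example, not prescribed by the text.

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def log_likelihood(theta, data):
        # sum_i log p(y_i | x_i; theta), with labels y_i in {0, 1}
        total = 0.0
        for x, y in data:
            p = sigmoid(theta * x)
            total += math.log(p) if y == 1 else math.log(1.0 - p)
        return total

    def fit_mle(data, lr=0.1, steps=500):
        theta = 0.0
        for _ in range(steps):
            # Gradient of the log-likelihood: sum_i (y_i - p_i) * x_i
            grad = sum((y - sigmoid(theta * x)) * x for x, y in data)
            theta += lr * grad  # ascend, since we maximize
        return theta

    # Toy separable data; the unregularized MLE keeps growing, so we
    # simply stop after a fixed number of steps.
    data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
    theta_hat = fit_mle(data)
    print(theta_hat, log_likelihood(theta_hat, data))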
Empirical Risk Minimization
As we mentioned earlier, the true risk is unknown because the true distribution is unknown. As an alternative to maximum likelihood, we can calculate an empirical risk function by averaging the loss L on the training set:

    R_emp(θ) = (1/m) Σ_{i=1}^{m} L(y_i, f(x_i; θ))

The idea of ERM for learning is to choose the parameter θ* that minimizes the empirical risk:

    θ* = argmin_θ R_emp(θ)

In order to find θ*, the problem then turns into an optimization problem over the formula above. The loss function is often differentiable, and we can use optimization methods such as gradient descent to find the parameter θ*. Note that the loss function might sometimes be non-convex; in that case gradient descent only reaches a local minimum, and we need to take other measures (e.g., restarts from multiple initializations) during optimization.
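Below is a minimal sketch of ERM by gradient descent, reusing the squared-error setup from the earlier sketch. The finite-difference gradient is an illustrative shortcut that works for a single scalar parameter; real systems use analytic or automatic gradients.

    def empirical_risk(theta, data):
        # R_emp(theta) with a squared-error loss and predictor theta * x
        return sum((y - theta * x) ** 2 for x, y in data) / len(data)

    def numerical_grad(f, theta, eps=1e-6):
        # Central finite difference, adequate for one scalar parameter.
        return (f(theta + eps) - f(theta - eps)) / (2 * eps)

    def fit_erm(data, lr=0.05, steps=200):
        theta = 0.0
        for _ in range(steps):
            grad = numerical_grad(lambda t: empirical_risk(t, data), theta)
            theta -= lr * grad  # descend, since we minimize the risk
        return theta

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
    theta_star = fit_erm(data)
    print(theta_star, empirical_risk(theta_star, data))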
Some Reflections
Related Papers
- F. Och. 2003: Minimum Error Rate Training in Statistical Machine Translation (This is a seminal paper on using ERM for machine translation tasks, by Google MT boss F. Och.)
- Bahl et al. 1988: A new algorithm for the estimation of hidden Markov model parameters (The original ERM method was proposed to improve automatic speech recognition.)
- Klein and Manning 2002: Conditional structure versus conditional estimation in NLP models (Klein and Manning discussed the notion of the "sum of conditional likelihoods (SCL)" in this 2002 EMNLP paper. The idea of SCL is very similar to ERM.)
- Stoyanov et al. 2011: Empirical Risk Minimization of Graphical Model Parameters Given Approximate Inference, Decoding, and Model Structure (This paper is a comprehensive overview of ERM estimation techniques for probabilistic graphical models.)