== Summary ==
The underlying idea of this method is to combine simple "rules" or weak learners, each performing only slightly better than random, to form an ensemble such that the performance of the single ensemble members is improved, i.e. "boosted". Let <math>\{h_t : t = 1, \dots, T\}</math> be a set of hypotheses, and consider the composite ensemble hypothesis

<math>f(x) = \sum_{t=1}^{T} \alpha_t h_t(x).</math>
Here <math>\alpha_{t}</math> denotes the coefficient with which the ensemble member <math>h_t</math> is combined; both <math>\alpha_{t}</math> and the learner or hypothesis <math>h_t</math> are to be learned within the boosting procedure. The coefficients <math>\alpha_{t}</math> (or weights) are set in iterations. The intuitive idea is that examples that are misclassified get higher weights in the next iteration; for instance, examples near the decision boundary are usually harder to classify and therefore get high weights after a few iterations.
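As a minimal illustration of the combination rule, here is a Python sketch of evaluating the composite hypothesis (the names <code>hypotheses</code> and <code>alphas</code> are placeholders for whatever the boosting procedure has produced, not from any particular library):

<source lang="python">
import numpy as np

def ensemble_predict(x, hypotheses, alphas):
    """Evaluate the composite hypothesis f(x) = sum_t alpha_t * h_t(x).

    hypotheses: list of callables h_t, each returning +1 or -1.
    alphas: the corresponding combination coefficients alpha_t.
    For binary classification the predicted label is the sign of f(x).
    """
    score = sum(alpha * h(x) for h, alpha in zip(hypotheses, alphas))
    return int(np.sign(score))
</source>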
In general, boosting can be seen as a method for improving the accuracy of any given learning algorithm. It is used when building a highly accurate prediction rule is difficult, but it is not hard to come up with very rough rules of thumb that are only moderately accurate.
[[Paper::Friedman et al. 2000]] show in their paper that AdaBoost can be interpreted as a stagewise estimation procedure for fitting an additive logistic regression model.
In its standard form, boosting does not allow for the direct incorporation of prior human knowledge. [[Paper::Rochery et al. 2002]] describe a modification of boosting that combines and balances human expertise with available training data. The aim of the approach is to allow the human's rough judgments to be refined, reinforced, and adjusted by the statistics of the training data, but in a manner that does not permit the data to entirely overwhelm human judgments.
== AdaBoost ==
The AdaBoost algorithm by Freund and Schapire is one of the most successful boosting algorithms. We focus on the problem of binary classification and present the AdaBoost algorithm for it.
A non-negative weighting <math>\boldsymbol{d}^{(t)} = (d_1^{(t)}, \dots, d_N^{(t)})</math> is assigned to the data at step <math>t</math>, and a weak learner <math>h_t</math> is constructed based on <math>\boldsymbol{d}^{(t)}</math>. This weighting is updated at each iteration according to the weighted error incurred by the weak learner in the last iteration. At each step <math>t</math>, the weak learner is required to produce a small weighted empirical error defined by

<math>\epsilon_t(h_t, \boldsymbol{d}^{(t)}) = \sum_{n=1}^{N} d_n^{(t)} \, I(y_n \neq h_t(x_n)).</math>
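As a small worked example (illustrative numbers, not from the original text): with <math>N = 4</math> examples under the uniform initial weighting <math>d_n^{(1)} = 1/4</math>, a weak learner that misclassifies exactly one example incurs <math>\epsilon_1 = 1/4</math>; if that same learner were evaluated after the misclassified example's weight had grown to <math>1/2</math>, its weighted error would rise to <math>1/2</math>, i.e. no better than random.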
After selecting the hypothesis <math>h_t</math>, its weight <math>\alpha_t</math> is computed such that it minimizes a certain loss function. In AdaBoost one minimizes

<math>G^{AB}(\alpha) = \sum_{n=1}^{N} \exp\{-y_n (\alpha h_t(x_n) + f_{t-1}(x_n))\},</math>

where <math>f_{t-1}</math> is the combined hypothesis of the previous iteration, given by

<math>f_{t-1}(x) = \sum_{r=1}^{t-1} \alpha_r h_r(x).</math>
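The closed form for <math>\alpha_t</math> mentioned below follows from a short standard calculation, sketched here since it is not spelled out on this page. Writing <math>w_n = e^{-y_n f_{t-1}(x_n)}</math> and setting <math>\partial G^{AB} / \partial \alpha = 0</math> gives

<math>e^{-\alpha} \sum_{n: y_n = h_t(x_n)} w_n = e^{\alpha} \sum_{n: y_n \neq h_t(x_n)} w_n \quad\Longrightarrow\quad \alpha_t = \frac{1}{2} \ln \frac{1 - \epsilon_t}{\epsilon_t},</math>

where <math>\epsilon_t</math> is the weighted empirical error defined above, computed with the normalized weights <math>d_n^{(t)} \propto w_n</math>.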
For AdaBoost it has been shown that <math>\alpha_t</math> can be computed analytically, leading to the expression in step (3c) of the algorithm. Based on the new combined hypothesis, the weighting <math>\boldsymbol{d}</math> of the sample is updated as in step (3d) of the algorithm. The initial weighting <math>\boldsymbol{d}^{(1)}</math> is chosen uniformly: <math>d_n^{(1)} = 1/N</math>.
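The algorithm listing that steps (3c) and (3d) refer to is not reproduced on this page, so here is a minimal self-contained Python sketch of AdaBoost for binary classification. It assumes the weak learners are decision stumps; the helpers <code>fit_stump</code> and <code>stump_predict</code> are invented for this sketch, not taken from any library:

<source lang="python">
import numpy as np

def fit_stump(X, y, d):
    """Fit a decision stump (threshold test on one feature) by minimizing
    the weighted empirical error sum_n d_n * I(y_n != h(x_n))."""
    n_samples, n_features = X.shape
    best, best_err = None, np.inf
    for j in range(n_features):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] >= thr, 1, -1)
                err = np.sum(d[pred != y])
                if err < best_err:
                    best_err, best = err, (j, thr, pol)
    return best

def stump_predict(X, stump):
    j, thr, pol = stump
    return pol * np.where(X[:, j] >= thr, 1, -1)

def adaboost(X, y, T=50):
    """AdaBoost for labels y in {-1, +1}."""
    n_samples = X.shape[0]
    d = np.full(n_samples, 1.0 / n_samples)       # uniform initial weighting d^(1)
    stumps, alphas = [], []
    for t in range(T):
        stump = fit_stump(X, y, d)                # weak learner trained on d^(t)
        pred = stump_predict(X, stump)
        eps = np.sum(d[pred != y])                # weighted empirical error eps_t
        if eps >= 0.5:                            # no better than random: stop
            break
        alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))  # analytic weight (step 3c)
        d = d * np.exp(-alpha * y * pred)         # reweight the examples (step 3d)
        d /= d.sum()                              # renormalize to a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(X, stumps, alphas):
    """Sign of the combined hypothesis f(x) = sum_t alpha_t h_t(x)."""
    score = sum(a * stump_predict(X, s) for s, a in zip(stumps, alphas))
    return np.sign(score)
</source>

Note how the update in the loop multiplies the weight of each misclassified example by <math>e^{\alpha_t} > 1</math> and of each correctly classified example by <math>e^{-\alpha_t} < 1</math>, which is exactly the reweighting intuition described in the summary.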
== Relevant Papers ==

{{#ask: [[UsesMethod::Boosting]]
| ?AddressesProblem
| ?UsesDataset
}}
== Comment ==
An interesting take on boosting is contained in Friedman et al. 2000, "Additive Logistic Regression": http://www.stanford.edu/~hastie/Papers/AdditiveLogisticRegression/alr.pdf
They also talk about it in the Hastie et al. "ESL" textbook.
--[[User:Brendan|Brendan]] 22:35, 13 October 2011 (UTC)