10-601B Boosting and Other Ensembles
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.
Slides
- ...
Readings
- Murphy 16.4, 16.6
Other optional readings:
- Ensemble Methods in Machine Learning, Tom Dietterich
- A Short Introduction to Boosting, Yoav Freund and Robert Schapire.
- Improved boosting algorithms using confidence-rated predictions, Robert Schapire and Yoram Singer. (This paper has the analysis that I presented in class.)
Summary
You should understand how these ensemble methods work (a minimal code sketch follows this list):
- Bagging
- Stacking
- Boosting
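The following is a minimal sketch, not part of the course materials, contrasting bagging (bootstrap resampling plus majority vote) with AdaBoost-style boosting (reweighting examples so later learners focus on current mistakes). The dataset, number of rounds, and use of depth-1 decision trees as base learners are illustrative assumptions.

```python
# Minimal sketch of bagging vs. AdaBoost-style boosting with decision stumps.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
y = 2 * y - 1  # labels in {-1, +1}

def bagging(X, y, rounds=25):
    """Train each stump on a bootstrap resample; predict by majority vote."""
    rng = np.random.default_rng(0)
    models = []
    for _ in range(rounds):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx]))
    return lambda Xq: np.sign(sum(m.predict(Xq) for m in models))

def adaboost(X, y, rounds=25):
    """Reweight examples so later stumps concentrate on current mistakes."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    models, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        alpha = 0.5 * np.log((1 - err) / (err + 1e-12))  # stump's vote weight
        w *= np.exp(-alpha * y * pred)                    # up-weight mistakes
        w /= w.sum()
        models.append(stump)
        alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * m.predict(Xq) for a, m in zip(alphas, models)))

for name, fit in [("bagging", bagging), ("boosting", adaboost)]:
    predict = fit(X, y)
    print(name, "training accuracy:", np.mean(predict(X) == y))
```

Stacking (not sketched above) instead trains a second-level model on the predictions of the base learners, typically using held-out or cross-validated predictions as its features.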
You should understand the basic intuitions behind the analysis of boosting (summarized in the bound after this list):
- As minimizing an upper bound on the training error, and hence fitting the training data.
- As a coordinate descent optimization of the same upper bound.
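For reference, the bound behind both intuitions can be written as follows; this is a brief restatement in the spirit of the Schapire and Singer paper above, not a substitute for the reading.

```latex
% Training error of the combined classifier H(x) = sign(f(x)),
% with f(x) = \sum_t \alpha_t h_t(x), is bounded by the exponential loss:
\frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\!\left[H(x_i) \neq y_i\right]
  \;\le\; \frac{1}{m}\sum_{i=1}^{m} \exp\!\left(-y_i f(x_i)\right)
  \;=\; \prod_{t} Z_t,
\qquad
Z_t = \sum_{i=1}^{m} D_t(i)\,\exp\!\left(-\alpha_t\, y_i\, h_t(x_i)\right).
% Boosting greedily chooses (h_t, alpha_t) to minimize each Z_t in turn,
% i.e., coordinate descent on this upper bound.
```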
You should also be aware that boosting is related to margin classifiers.