10-601 Bias-Variance

=== Slides ===
* William's [http://www.cs.cmu.edu/~wcohen/10-601/bias-variance.ppt slides in PowerPoint] and [http://www.cs.cmu.edu/~wcohen/10-601/bias-variance.pdf in PDF]
=== Readings ===
* This topic isn't covered well in Mitchell. [http://dl.acm.org/citation.cfm?id=1016783 Valentini and Dietterich] is a good source on bias-variance for classification. Wikipedia has a reasonable description of the [http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff regression case], which goes back at least to [http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff Geman et al. 1992].
* See also Littman/Isbell [https://www.youtube.com/watch?v=DQWI1kvmwRg on overfitting].
=== What you should know ===
* How overfitting and underfitting can be understood as a tradeoff between high-bias and high-variance learners.
* Mathematically, how to decompose the error of linear regression into bias and variance (a sketch of the decomposition follows this list).
* Intuitively, how classification error can be decomposed into bias and variance.
* Which sorts of classifier variants lead to more bias and/or more variance: e.g., large vs. small k in k-NN (see the code sketch after this list).
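
For reference, a minimal sketch of the regression decomposition (standard notation, not taken from the slides: f is the true function, the predictor is learned from a random training set D, and the label noise has variance sigma^2):

<math>
\mathbb{E}_{D,\epsilon}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\big[(\hat{f}(x) - \mathbb{E}_D[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
</math>

The cross terms vanish because the noise is independent of D and <math>\hat{f}(x) - \mathbb{E}_D[\hat{f}(x)]</math> has expectation zero.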
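To make the k-NN bullet concrete, here is a minimal sketch that estimates bias and variance empirically by retraining k-NN regressors on many resampled training sets. It assumes NumPy and scikit-learn; the synthetic sine-curve data and all function names are our own illustration, not from the course materials:

<pre>
# Empirical bias/variance for k-NN regression on synthetic data
# (hypothetical illustration, not part of the course).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

def true_f(x):
    # Noise-free target function (our choice for illustration).
    return np.sin(2 * np.pi * x)

def sample_training_set(n=30, noise=0.3):
    # Draw a fresh noisy training set from the same distribution.
    x = rng.uniform(0, 1, size=(n, 1))
    y = true_f(x).ravel() + rng.normal(0, noise, size=n)
    return x, y

x_test = np.linspace(0, 1, 200).reshape(-1, 1)

for k in (1, 5, 25):
    # Retrain on 200 independent training sets; collect test predictions.
    preds = np.array([
        KNeighborsRegressor(n_neighbors=k)
        .fit(*sample_training_set())
        .predict(x_test)
        for _ in range(200)
    ])
    mean_pred = preds.mean(axis=0)  # the "average" learned predictor
    bias2 = ((mean_pred - true_f(x_test).ravel()) ** 2).mean()
    variance = preds.var(axis=0).mean()
    print(f"k={k:2d}  bias^2={bias2:.3f}  variance={variance:.3f}")
</pre>

Running this should show bias^2 growing and variance shrinking as k increases: 1-NN follows each noisy sample closely (low bias, high variance), while 25-NN averages over most of the training data (high bias, low variance).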
− | |||
− | |||
− | * | ||
− | |||
− | * | ||
− | * | ||
− | |||
− |