10-601 Bias-Variance
Slides
- William's Slides in PowerPoint
Readings
- Bishop: Chap 1, 2
- Mitchell: Chap 5, 6
What you should know
- How overfitting/underfitting can be understood as a tradeoff between high-bias and high-variance learners.
- Mathematically, how to decompose error for linear regression into bias and variance (see the decomposition written out below).
- Intuitively, how classification error can be decomposed into bias and variance.
- Which modeling choices lead to more bias and/or more variance: e.g., large vs. small k in k-NN (see the sketch at the end of this list).
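
For the regression point above, the standard squared-error decomposition is worth writing out (a known result, stated here in LaTeX notation): with y = f(x) + ε, where ε is zero-mean noise with variance σ², and f̂_D the predictor learned from a random training set D, the expected error at a fixed point x splits into three terms:

 \mathbb{E}_{D,\varepsilon}\!\left[ (y - \hat{f}_D(x))^2 \right]
   = \underbrace{\left( f(x) - \mathbb{E}_D[\hat{f}_D(x)] \right)^2}_{\text{bias}^2}
   + \underbrace{\mathbb{E}_D\!\left[ \left( \hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)] \right)^2 \right]}_{\text{variance}}
   + \underbrace{\sigma^2}_{\text{irreducible noise}}

High-bias learners make the first term large, high-variance learners make the second term large, and the third term does not depend on the learner at all.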
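
The k-NN point can also be checked empirically. Below is a minimal sketch, not from the course materials: it assumes scikit-learn, uses a synthetic sine curve as the "true" function, and the helper knn_bias_variance plus all parameter values are illustrative choices. It refits k-NN regression on many independently drawn training sets and estimates bias² and variance of the predictions at a fixed grid of test points. Expect small k to give low bias but high variance, and large k the reverse.

 import numpy as np
 from sklearn.neighbors import KNeighborsRegressor
 
 rng = np.random.default_rng(0)
 
 def f(x):
     # "True" regression function -- an assumption for this synthetic demo.
     return np.sin(2 * np.pi * x)
 
 x_test = np.linspace(0, 1, 50).reshape(-1, 1)
 
 def knn_bias_variance(k, n_train=50, n_trials=200, noise_sd=0.3):
     # Refit k-NN on many independently drawn training sets and record
     # the predictions at the fixed test points for each refit.
     preds = np.empty((n_trials, len(x_test)))
     for t in range(n_trials):
         x_tr = rng.uniform(0, 1, (n_train, 1))
         y_tr = f(x_tr).ravel() + rng.normal(0, noise_sd, n_train)
         preds[t] = KNeighborsRegressor(n_neighbors=k).fit(x_tr, y_tr).predict(x_test)
     avg = preds.mean(axis=0)  # estimate of E_D[f_hat(x)] at each test point
     bias_sq = ((avg - f(x_test).ravel()) ** 2).mean()   # (f - E[f_hat])^2, averaged over x
     variance = preds.var(axis=0).mean()                 # spread of f_hat across refits
     return bias_sq, variance
 
 for k in (1, 5, 25):
     b, v = knn_bias_variance(k)
     print(f"k={k:2d}  bias^2={b:.4f}  variance={v:.4f}")

Averaging over refits approximates the expectation over training sets D in the decomposition above, which is why bias² and variance are computed from the across-trial mean and spread of the predictions.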