10-601 Classification and K-NN
This is a lecture used in the Syllabus for Machine Learning 10-601 in Fall 2014.
Slides
- Ziv's lecture:
- William's lecture: Slides in Powerpoint (http://www.cs.cmu.edu/~wcohen/10-601/classification-and-knn.pptx).
Readings
- Mitchell, Chapter 3.
What You Should Know Afterward
- What the goal of classification is.
- What the Bayes decision boundary for classification is.
- Whether there is an optimal classifier.
- What the K-NN algorithm is (a minimal sketch follows this list).
- What the computational properties of eager vs. lazy learning are in general, and of K-NN in particular.
- What decision boundary is defined by K-NN, and how it compares to the decision boundaries of linear classifiers.
- How the value of K affects the tendency of K-NN to overfit or underfit the data (illustrated in the second sketch below).
- (Optional) The probabilistic interpretation of K-NN decisions.
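
To make the K-NN items above concrete, here is a minimal k-nearest-neighbor classifier sketch in Python (NumPy only). It is not taken from the lecture slides; the class name KNNClassifier, the Euclidean distance, and the toy data are illustrative assumptions. Note how fit() does essentially nothing, which is the "lazy learning" point: all of the computation is deferred to prediction time.

<pre>
# Minimal k-NN classifier sketch (illustrative, not from the lecture slides).
import numpy as np
from collections import Counter

class KNNClassifier:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # Lazy learning: "training" just memorizes the labeled examples.
        self.X_train = np.asarray(X, dtype=float)
        self.y_train = np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            # Euclidean distance from the query point to every training point.
            dists = np.linalg.norm(self.X_train - x, axis=1)
            # Indices of the k closest training points.
            nearest = np.argsort(dists)[: self.k]
            # Majority vote among the neighbors' labels.
            preds.append(Counter(self.y_train[nearest]).most_common(1)[0][0])
        return np.array(preds)

# Tiny usage example with made-up 2-D points.
if __name__ == "__main__":
    X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
    y = ["a", "a", "a", "b", "b", "b"]
    knn = KNNClassifier(k=3).fit(X, y)
    print(knn.predict([[0.5, 0.5], [5.5, 5.5]]))  # expected: ['a' 'b']
</pre>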
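A second illustrative sketch, for the bullet on how K affects overfitting and underfitting: the synthetic data, noise level, and the helper knn_predict below are assumptions, not course material. The typical pattern is that K=1 reproduces the noisy training labels exactly (overfitting), a moderate K smooths over the noise, and K near the training-set size predicts the majority class everywhere (underfitting).

<pre>
# Sketch: the effect of k on overfitting vs. underfitting, on synthetic data
# with some label noise (illustrative assumptions, not from the lecture).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_query, k):
    # Classify each query point by majority vote among its k nearest
    # training points (Euclidean distance).
    preds = []
    for x in X_query:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]
        preds.append(Counter(y_train[nearest]).most_common(1)[0][0])
    return np.array(preds)

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # true boundary: the line x1 + x2 = 0
noisy = rng.random(n) < 0.15              # flip 15% of training labels
y[noisy] = 1 - y[noisy]

X_test = rng.normal(size=(n, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

for k in (1, 15, 199):
    train_acc = np.mean(knn_predict(X, y, X, k) == y)
    test_acc = np.mean(knn_predict(X, y, X_test, k) == y_test)
    print(f"k={k:3d}  train acc={train_acc:.2f}  test acc={test_acc:.2f}")

# Typical pattern: k=1 memorizes the noisy training labels (train acc 1.00,
# weaker test acc); a moderate k smooths over the noise; k close to n
# predicts the majority class everywhere and underfits both sets.
</pre>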