10-601 Clustering
This is a pair of lectures used in the Syllabus for Machine Learning 10-601 in Fall 2014.
There will be a guest lecturer, Bhavana Dalvi, lecturing on k-means on Monday and Thursday, and Ziv will lecture on agglomerative clustering and spectral clustering on Tuesday and Wednesday. (So the two sections will see these lectures in different orders.)
Slides
- Ziv's lecture: Slides in PDF.
- Bhavana's lecture on 13th October 2014 Media:Kmeans_13october2014_dalvi.pptx Slides in PPT
- Bhavana's lecture on 16th October 2014 Media:Kmeans_16october2014_dalvi.pptx Slides in PPT
- Combined PDF version for classes on 13th and 16th October Media:Kmeans_cs601_CMU_dalvi.pdf Slides in PDF
(Only the slides on advanced topics vary across lectures)
Readings
Bishop's Chapter 9
Mitchell 6.12 also has a nice description of EM and k-means.
What You Should Know Afterward
You should know how to implement these methods, and what their relative advantages and disadvantages are.
- Overview of clustering
- Distance functions and similarity measures and their impact
- The k-means algorithm (a minimal sketch follows this list)
- How to choose k, and the impact of large and small values of k
- EM
- Differences between Gaussian mixture models (GMMs) and k-means
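For concreteness, here is a minimal NumPy sketch of the standard k-means (Lloyd's) procedure: pick k initial centroids, then alternate assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points. The function name, arguments, and random initialization scheme are illustrative choices, not taken from the lecture slides.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal Lloyd's algorithm for k-means on an (n_points, n_features) array X."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points;
        # keep the old centroid if a cluster happens to be empty.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # centroids stopped moving, so the assignments are stable
        centroids = new_centroids
    # Final assignment against the returned centroids.
    labels = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
    return labels, centroids
```

A call like labels, centroids = kmeans(X, k=3) on a 2-D point cloud recovers three cluster means; running it with several seeds and keeping the result with the lowest within-cluster sum of squares is the usual guard against bad initializations.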