10-601 GM3
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016
Slides
Readings
- See first lecture on GM
- For EM: Mitchell 6.2 or Murphy 11.4.1, 11.4.2, 11.4.4
To remember
- The EM algorithm
  - E-step (expectation step)
  - M-step (maximization step)
- How to use EM to learn DGMs with hidden variables
- How to use EM to learn a mixture of Gaussians (see the update equations and the code sketch after this list)
- Connections:
  - naive Bayes as a DGM
  - semi-supervised naive Bayes as a DGM with hidden variables
  - mixture of Gaussians as a DGM
  - mixture of Gaussians vs k-means
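
For reference, the EM updates for a K-component Gaussian mixture (covered in the Murphy 11.4 readings) take the following standard form, where r_{ik} is the responsibility of component k for data point x_i; the notation here is ours, not copied from the slides:

\text{E-step:}\qquad r_{ik} = \frac{\pi_k\,\mathcal{N}(x_i \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_i \mid \mu_j, \Sigma_j)}

\text{M-step:}\qquad N_k = \sum_{i=1}^{N} r_{ik},\quad \pi_k = \frac{N_k}{N},\quad \mu_k = \frac{1}{N_k}\sum_{i=1}^{N} r_{ik}\, x_i,\quad \Sigma_k = \frac{1}{N_k}\sum_{i=1}^{N} r_{ik}\,(x_i - \mu_k)(x_i - \mu_k)^{\top}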
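
A minimal runnable sketch of these updates for a one-dimensional mixture follows; the function name, the initialization scheme, and the synthetic data are our own assumptions for illustration, not course code:

import numpy as np

def em_gmm_1d(x, K=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    N = len(x)
    # Initialize: uniform mixing weights, means drawn from the data, shared variance
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False)
    var = np.full(K, np.var(x))
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibility-weighted data
        Nk = r.sum(axis=0)
        pi = Nk / N
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var

# Synthetic two-cluster data (an assumption for the demo)
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
print(em_gmm_1d(x))

This also makes the last connection in the list concrete: replacing each soft responsibility row with a hard one-hot assignment to the nearest mean, and holding the weights and variances fixed and equal, turns this loop into k-means.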