10-601 GM3

From Cohen Courses

This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

Slides

Readings

To remember

  • The EM algorithm
    • E-step (expectation step)
    • M-step (maximization step)
  • How to use EM to learn DGMs with hidden variables
  • How to use EM to learn a mixture of Gaussians (a minimal sketch appears after this list)
  • Connections:
    • naive Bayes as a DGM
    • semi-supervised naive Bayes as a DGM with hidden variables
    • mixture of Gaussians as a DGM
    • mixture of Gaussians vs k-means
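
A minimal sketch of EM for a one-dimensional mixture of Gaussians, using only NumPy. The function and variable names (em_gmm_1d, weights, means, variances) are illustrative choices, not taken from the lecture slides:

 import numpy as np
 
 def em_gmm_1d(x, k=2, n_iters=50, seed=0):
     """Fit a k-component 1-D Gaussian mixture to data x with EM."""
     rng = np.random.default_rng(seed)
     n = x.shape[0]
     # Initialize: uniform mixing weights, means drawn from the data,
     # and the overall sample variance for every component.
     weights = np.full(k, 1.0 / k)
     means = rng.choice(x, size=k, replace=False)
     variances = np.full(k, x.var())
 
     for _ in range(n_iters):
         # E-step: posterior responsibility of each component for each point,
         # densities[i, j] proportional to weight_j * N(x_i | mean_j, var_j).
         densities = (
             weights
             * np.exp(-0.5 * (x[:, None] - means) ** 2 / variances)
             / np.sqrt(2 * np.pi * variances)
         )
         resp = densities / densities.sum(axis=1, keepdims=True)
 
         # M-step: re-estimate parameters from the soft assignments.
         nk = resp.sum(axis=0)  # effective count per component
         weights = nk / n
         means = (resp * x[:, None]).sum(axis=0) / nk
         variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
 
     return weights, means, variances
 
 if __name__ == "__main__":
     rng = np.random.default_rng(1)
     # Synthetic data: two Gaussians centered at -2 and +3.
     x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
     w, mu, var = em_gmm_1d(x)
     print("weights:", w, "means:", mu, "variances:", var)

The sketch also illustrates the last connection above: if the responsibilities were replaced by hard assignments of each point to its nearest mean (with the variances held fixed and equal), the E-step would reduce to k-means' cluster-assignment step and the M-step to its centroid update.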