10-601 GM3

This is a lecture used in the [[Syllabus for Machine Learning 10-601B in Spring 2016]].
 
=== Slides ===
  
 
* [http://www.cs.cmu.edu/~wcohen/10-601/networks-3-learning.pptx in PPT], [http://www.cs.cmu.edu/~wcohen/10-601/networks-3-learning.pdf in PDF].
 
=== Readings ===
* See [[10-601 GM1|first lecture on GM]]
* For EM: Mitchell 6.2 or Murphy 11.4.1, 11.4.2, 11.4.4
=== To remember ===
* The EM algorithm (the generic updates are restated after this list)
** E-step (expectation step)
** M-step (maximization step)
* How to use EM to learn DGMs with hidden variables
* How to use EM to learn a mixture of Gaussians (a code sketch follows this list)
* Connections:
** naive Bayes as a DGM
** semi-supervised naive Bayes as a DGM with hidden variables
** mixture of Gaussians as a DGM
** mixture of Gaussians vs k-means
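
The list above only names the two EM steps; as a reminder, in standard EM notation (not taken from the slides: <math>X</math> is the observed data, <math>Z</math> the hidden variables, <math>\theta</math> the parameters), the generic updates are:

<math>\text{E-step:}\quad Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X, \theta^{(t)}}\big[\log p(X, Z \mid \theta)\big]</math>

<math>\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta}\, Q(\theta \mid \theta^{(t)})</math>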
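
The following is a minimal sketch (not the course's reference code) of EM for a one-dimensional mixture of Gaussians; the function name, the two-component setup, and the toy data are assumptions made here for illustration only.

<pre>
# Minimal EM sketch for a 1-D mixture of Gaussians (illustrative only).
import numpy as np

def em_gmm_1d(x, n_components=2, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize mixing weights, means, and variances.
    pi_k = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(x, size=n_components, replace=False)
    var = np.full(n_components, np.var(x))

    for _ in range(n_iters):
        # E-step: responsibilities resp[i, k] = P(z_i = k | x_i, current params).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi_k * dens
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the expected sufficient statistics.
        nk = resp.sum(axis=0)
        pi_k = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    return pi_k, mu, var

# Toy usage: data drawn from two well-separated Gaussians.
x = np.concatenate([np.random.normal(-2.0, 1.0, 200),
                    np.random.normal(3.0, 1.0, 200)])
print(em_gmm_1d(x))
</pre>

Replacing the soft responsibilities with hard argmax assignments and forcing all components to share one fixed variance essentially recovers k-means, which is the "mixture of Gaussians vs k-means" connection in the list above.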
