10-601 Topic Models
This is a lecture used in the Syllabus for Machine Learning 10-601 in Fall 2014.
Slides
- Slides in PowerPoint: http://www.cs.cmu.edu/~wcohen/10-601/topic-models.pptx
Readings
- LDA is not covered in Mitchell. There's a nice overview paper on LDA by David Blei.
Summary
You should know:
- d-separation, or more generally, how to determine whether two variables are conditionally independent in a directed graphical model, and what that independence means.
- what "explaining away" refers to (the numeric example after this list works through it).
- what Gibbs sampling is, and how it can be used for inference in a directed graphical model (see the sampler sketch below).
- which graphical models are associated with supervised naive Bayes, unsupervised naive Bayes, and LDA (the generative-process sketch below shows the LDA case).
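To make "explaining away" concrete, here is a minimal Python sketch of the classic v-structure Burglary → Alarm ← Earthquake. The network and all of its probabilities are made-up illustrative numbers, not from the lecture; the point is only that conditioning on the common effect (Alarm) couples the two marginally independent causes, so also observing Earthquake lowers the posterior probability of Burglary.

```python
# Explaining away in the v-structure B -> A <- E:
# Burglary (B) and Earthquake (E) are marginally independent causes
# of an Alarm (A). All numbers below are illustrative assumptions.

P_B = 0.01          # P(Burglary = 1)
P_E = 0.02          # P(Earthquake = 1)

# P(Alarm = 1 | B = b, E = e), indexed as P_A[(b, e)]
P_A = {(0, 0): 0.001, (0, 1): 0.29, (1, 0): 0.94, (1, 1): 0.95}

def joint(b, e, a):
    """P(B=b, E=e, A=a) via the chain rule on the DAG."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    return pb * pe * pa

# P(B=1 | A=1): sum out E, then normalize over B.
num = sum(joint(1, e, 1) for e in (0, 1))
den = sum(joint(b, e, 1) for b in (0, 1) for e in (0, 1))
p_b_given_a = num / den

# P(B=1 | A=1, E=1): observing the other cause "explains away" B.
p_b_given_ae = joint(1, 1, 1) / sum(joint(b, 1, 1) for b in (0, 1))

print(f"P(B=1 | A=1)      = {p_b_given_a:.3f}")
print(f"P(B=1 | A=1, E=1) = {p_b_given_ae:.3f}  (lower: E explains A away)")
```

With these numbers, P(B=1 | A=1) is about 0.58 while P(B=1 | A=1, E=1) drops to about 0.03, even though B and E are independent before the alarm is observed.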
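The sketch below shows what Gibbs sampling for inference in a directed model can look like in practice: a minimal collapsed Gibbs sampler for LDA. The toy corpus, the number of topics K, the hyperparameters alpha and beta, and the number of sweeps are all illustrative assumptions; a real implementation would also recover the topic mixtures and word distributions from the counts and monitor convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

docs = [[0, 1, 2, 0, 1], [2, 3, 4, 4, 3], [0, 1, 4, 3, 2]]  # toy word ids
V, K = 5, 2                # vocabulary size, number of topics (assumed)
alpha, beta = 0.1, 0.01    # symmetric Dirichlet hyperparameters (assumed)

# Count tables: doc-topic, topic-word, topic totals; z holds assignments.
ndk = np.zeros((len(docs), K))
nkw = np.zeros((K, V))
nk = np.zeros(K)
z = [[0] * len(d) for d in docs]

# Initialize every token with a random topic and fill in the counts.
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = rng.integers(K)
        z[d][i] = k
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):  # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove this token's current assignment from the counts.
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # Full conditional P(z = k | all other z, words), up to a constant:
            # (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print("topic-word counts:\n", nkw)
```

The key idea is that each token's topic is resampled from its full conditional given all the other assignments, which for LDA collapses the topic mixtures and word distributions into simple count ratios.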
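The three models differ mainly in what is latent: supervised naive Bayes observes one class label per document, unsupervised naive Bayes treats that single per-document label as hidden, and LDA gives every token its own hidden topic drawn from a per-document mixture. Here is a short sketch of LDA's generative story; all sizes and priors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

D, V, K, N = 4, 10, 3, 8      # docs, vocab size, topics, words/doc (assumed)
alpha = np.full(K, 0.5)       # document-topic Dirichlet prior (assumed)
eta = np.full(V, 0.1)         # topic-word Dirichlet prior (assumed)

phi = rng.dirichlet(eta, size=K)      # one word distribution per topic

corpus = []
for d in range(D):
    theta = rng.dirichlet(alpha)      # this document's topic mixture
    doc = []
    for _ in range(N):
        zi = rng.choice(K, p=theta)           # draw a topic for this token,
        doc.append(rng.choice(V, p=phi[zi]))  # then a word from that topic
    corpus.append(doc)

print(corpus)
```

Replacing the inner per-token topic draw with a single hidden class per document would turn this into the unsupervised naive Bayes model; observing that class turns it into supervised naive Bayes.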