10-601 GM2
From Cohen Courses
Revision as of 22:26, 3 November 2013

=== Slides ===

=== Readings ===
* [http://curtis.ml.cmu.edu/w/courses/images/e/e3/Graph.pdf Sparse Inverse Covariance Estimation with the Graphical Lasso]
* [http://www.cs.cmu.edu/~epxing/papers/2010/kolar_song_xing_aoas10.pdf Estimating Time-Varying Networks]

=== Take-home messages ===
* What are inference and learning in a graphical model (GM)?
* Inference via belief propagation: when is this method exact, and when is it not?
* What is BP, and what are the running modes of BP?
* Why is MLE on a fully observed GM easy?
* What if some variables are latent?
* What is the difference between a Markov network and a correlation network?
* Learning a Markov network with the graphical lasso: what assumptions are we making about the model underlying the data?
* Why is neighborhood selection using the lasso correct?
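The last two questions concern recovering a Markov network's structure from data. As a minimal illustrative sketch (not taken from the course materials), the code below implements neighborhood selection in plain numpy: each variable is lasso-regressed on all the others, and an edge i–j is added whenever either regression assigns the other variable a nonzero coefficient (the "OR" symmetrization rule). The toy data, the regularization value <code>lam=0.15</code>, and the coordinate-descent solver are all assumptions made for the example.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso: min_w 0.5/n * ||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-column scaling x_j^T x_j / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]     # partial residual excluding coord j
            rho = X[:, j] @ r / n
            # soft-thresholding update for the j-th coefficient
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

def neighborhood_select(X, lam):
    """Estimate graph edges by lasso-regressing each node on the rest."""
    p = X.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        w = lasso_cd(X[:, others], X[:, i], lam)
        adj[i, others] = np.abs(w) > 1e-8
    return adj | adj.T                          # "OR" rule to symmetrize

# Toy example (assumed data): x2 depends on x0, while x1 is independent,
# so the only edge we expect the method to report is 0-2.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(500)
x1 = rng.standard_normal(500)
x2 = x0 + 0.1 * rng.standard_normal(500)
adj = neighborhood_select(np.column_stack([x0, x1, x2]), lam=0.15)
```

This connects to the consistency question above: neighborhood selection is correct for Gaussian data because, in a Gaussian graphical model, the regression coefficient of variable j in predicting variable i is nonzero exactly when the precision-matrix entry (i, j) is nonzero, i.e. when i–j is an edge.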