10-601 GM2

From Cohen Courses
Revision as of 16:20, 22 March 2016

Slides

Slides in PDF

Readings

* [http://www.cs.cmu.edu/~epxing/papers/2010/kolar_song_xing_aoas10.pdf Estimating Time-Varying Networks]

To remember

  • what is inference in DGMs
  • the general outline of the BP algorithm for polytrees
  • what is a polytree and when is BP exact
    • what "message passing" means
  • what a Markov blanket is
  • what a Markov network (undirected model) is
  • how nodes can be merged to create a polytree
  • the advantages and disadvantages of BP on polytrees and loopy BP
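To make the "message passing" and "when is BP exact" items above concrete: on a polytree, sum-product belief propagation computes exact marginals. The sketch below (an illustration, not course code; the chain A, B, C and its CPT values are made up) passes messages along the simplest polytree, a three-node chain, and checks the result against brute-force enumeration of the joint.

```python
import itertools
import numpy as np

# A toy chain A -> B -> C (a polytree). All variables are binary.
# These CPTs are hypothetical, chosen only for illustration.
pA = np.array([0.6, 0.4])                  # p(A)
pB_A = np.array([[0.7, 0.3],               # p(B|A), rows indexed by A
                 [0.2, 0.8]])
pC_B = np.array([[0.9, 0.1],               # p(C|B), rows indexed by B
                 [0.4, 0.6]])

# Sum-product message passing toward C:
# message A -> B:  m_AB(b) = sum_a p(a) p(b|a)
m_AB = pA @ pB_A
# message B -> C:  m_BC(c) = sum_b m_AB(b) p(c|b)
m_BC = m_AB @ pC_B
marginal_C = m_BC  # already normalized, since the CPTs are proper

# Brute-force check: enumerate the full joint and sum out A and B.
# On a polytree the two agree exactly; on a loopy graph BP would
# only approximate this.
brute = np.zeros(2)
for a, b, c in itertools.product(range(2), repeat=3):
    brute[c] += pA[a] * pB_A[a, b] * pC_B[b, c]

assert np.allclose(marginal_C, brute)
print(marginal_C)
```

Each message summarizes everything "upstream" of an edge, which is why one sweep per direction suffices on a polytree; with loops, messages would keep circulating (loopy BP) and exactness is lost.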