10-601 Sequences
This is a lecture used in the Syllabus for Machine Learning 10-601.
Slides
- Slides in PowerPoint: http://www.cs.cmu.edu/~wcohen/10-601/hmms.pptx
Optional Readings
- This material is not covered in Mitchell. For HMMs, Bishop 13.1-13.2 cover it. The most widely used introduction to HMMs is Rabiner, Lawrence R., "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE 77.2 (1989): 257-286. There is also a nice general introduction to CRFs by Sutton and McCallum: http://arxiv.org/abs/1011.4088
Summary
You should know:
- The definition of an HMM
- What the Viterbi and forward-backward algorithms are (see the first sketch after this list):
  - What their complexity is
  - What they compute.
- How to learn the parameters of HMMs (see the second sketch after this list) when:
  - The states associated with the training data are observed
  - The states are unobserved.
- What the advantages of a CRF are compared to an HMM.
- How HMMs and CRFs relate to naive Bayes, logistic regression, and generative and discriminative models.
- How HMMs or CRFs can be used for named-entity recognition (NER) and other sequential classification tasks.
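
For concreteness, here is a minimal sketch of Viterbi decoding for a discrete HMM, assuming the usual parameterization: an initial distribution pi, a transition matrix A, and an emission matrix B. The function and variable names are illustrative, not taken from the lecture slides. The forward-backward algorithm follows the same O(T * S^2) dynamic-programming pattern over the same quantities, but sums over paths (to get per-step state posteriors) instead of maximizing.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for a discrete HMM.

    obs : length-T list of observation indices
    pi  : (S,) initial state probabilities
    A   : (S, S) transitions, A[i, j] = P(next state j | current state i)
    B   : (S, V) emissions, B[i, k] = P(symbol k | state i)
    Runs in O(T * S^2) time, like the forward-backward algorithm.
    """
    T, S = len(obs), len(pi)
    with np.errstate(divide="ignore"):       # log(0) = -inf is acceptable here
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    # delta[t, i] = log-prob of the best path ending in state i at time t
    delta = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)       # backpointers to recover the path
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        # scores[i, j]: best path ending in state i at t-1, then transition i -> j
        scores = delta[t - 1][:, None] + log_A
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Trace backpointers from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Whereas viterbi returns the single highest-probability state sequence, forward-backward would return, for each time step, a distribution over states, which is what Baum-Welch training needs.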
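When the states in the training data are observed, maximum-likelihood estimation of HMM parameters reduces to counting and normalizing; the sketch below assumes that supervised setting, and the add-alpha smoothing is an illustrative choice rather than anything prescribed by the lecture. When the states are unobserved, the standard approach is instead Baum-Welch (EM), which uses forward-backward to compute expected counts and then normalizes them in the same way.

```python
import numpy as np

def estimate_hmm(sequences, n_states, n_symbols, alpha=1.0):
    """ML estimates (pi, A, B) from state-labeled training sequences.

    sequences : list of (states, obs) pairs, each a list of integer indices
    alpha     : add-alpha smoothing so unseen events keep nonzero probability
    """
    pi = np.full(n_states, alpha)
    A = np.full((n_states, n_states), alpha)
    B = np.full((n_states, n_symbols), alpha)
    for states, obs in sequences:
        pi[states[0]] += 1                   # count which state starts each sequence
        for s_prev, s_next in zip(states, states[1:]):
            A[s_prev, s_next] += 1           # count state-to-state transitions
        for s, o in zip(states, obs):
            B[s, o] += 1                     # count state-to-symbol emissions
    # Normalize the counts row-wise into probability distributions
    return (pi / pi.sum(),
            A / A.sum(axis=1, keepdims=True),
            B / B.sum(axis=1, keepdims=True))
```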