10-601 Sequences
This is a lecture used in the [[Syllabus for Machine Learning 10-601B in Spring 2016]]
=== Slides ===
=== Optional Readings ===
* This material is not covered in Mitchell. For HMMs, Bishop sections 13.1-13.2 cover it. The most widely used introduction to HMMs is Rabiner, Lawrence R., "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE 77.2 (1989): 257-286. There is also a nice general introduction to CRFs by Sutton and McCallum.
=== Summary ===
You should know:
* The definition of an HMM
* What the Viterbi and forward-backward algorithms are (see the first sketch after this list):
** What their complexity is
** What they compute
* How to learn the parameters of HMMs (a second sketch follows the list) when:
** The states associated with the training data are observed
** The states are unobserved
* What the advantages of a CRF are compared to an HMM
* How HMMs and CRFs relate to naive Bayes, logistic regression, and generative and discriminative models
* How HMMs or CRFs can be used for named-entity recognition (NER) and other sequential classification tasks
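
To make the algorithm bullets concrete, here is a minimal NumPy sketch of Viterbi decoding and the forward-backward computation. The names <code>pi</code>, <code>A</code>, <code>B</code>, and <code>obs</code> are illustrative conventions, not notation from the lecture slides. Viterbi computes the single most probable state sequence; forward-backward computes the posterior marginal probability of each state at each position. Both visit T positions and consider all K×K state pairs at each, so both run in O(TK²) time.

<syntaxhighlight lang="python">
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable state sequence, computed in log space.

    pi: (K,) initial state probabilities
    A:  (K, K) transitions, A[i, j] = P(next=j | cur=i)
    B:  (K, V) emissions,   B[i, o] = P(obs=o | state=i)
    obs: list of T observation indices
    O(T * K^2) time, O(T * K) space.
    """
    T, K = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]  # best log-prob of a path ending in each state
    back = np.zeros((T, K), dtype=int)    # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA    # (K, K): previous state -> current state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]          # best final state, then trace pointers back
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def forward_backward(pi, A, B, obs):
    """Posterior marginals P(state_t | obs); also O(T * K^2).

    Unscaled for readability -- real implementations rescale alpha/beta
    (or work in log space) to avoid underflow on long sequences.
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                 # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):        # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                  # proportional to P(state_t, obs)
    return gamma / gamma.sum(axis=1, keepdims=True)

# Toy usage: 2 states, 3 symbols (numbers made up for illustration).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(pi, A, B, [0, 1, 2]))       # -> [0, 0, 1]
print(forward_backward(pi, A, B, [0, 1, 2]))
</syntaxhighlight>

For a task like NER, the same decoder applies directly: the hidden states are entity tags, the observations are words, and Viterbi recovers the most probable tag sequence for a sentence.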
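
For the parameter-learning bullets: when the states in the training data are observed, maximum-likelihood estimation reduces to counting transitions and emissions and normalizing. Below is a rough sketch under assumed conventions, where <code>tagged_sequences</code> is a list of sentences given as (observation, state) pairs and add-k smoothing is an illustrative choice to avoid zero probabilities.

<syntaxhighlight lang="python">
from collections import Counter

def mle_hmm(tagged_sequences, k=1.0):
    """Supervised MLE for HMM parameters with add-k smoothing.

    tagged_sequences: list of sentences, each a list of (obs, state)
    pairs with the state observed. Names and smoothing scheme are
    illustrative, not the lecture's notation.
    """
    starts, trans, emits = Counter(), Counter(), Counter()
    states, vocab = set(), set()
    for seq in tagged_sequences:
        prev = None
        for obs, state in seq:
            states.add(state)
            vocab.add(obs)
            emits[state, obs] += 1
            if prev is None:
                starts[state] += 1   # first state of the sequence
            else:
                trans[prev, state] += 1
            prev = state
    K, V = len(states), len(vocab)
    n_seqs = len(tagged_sequences)
    # Smoothed relative frequencies are the maximum-likelihood
    # estimates plus add-k smoothing against zero counts.
    pi = {s: (starts[s] + k) / (n_seqs + k * K) for s in states}
    trans_tot = {i: sum(trans[i, j] for j in states) for i in states}
    emit_tot = {s: sum(emits[s, o] for o in vocab) for s in states}
    A = {(i, j): (trans[i, j] + k) / (trans_tot[i] + k * K)
         for i in states for j in states}
    B = {(s, o): (emits[s, o] + k) / (emit_tot[s] + k * V)
         for s in states for o in vocab}
    return pi, A, B
</syntaxhighlight>

When the states are unobserved, the standard approach is Baum-Welch (EM): run forward-backward to compute expected transition and emission counts under the current parameters, re-estimate with the same normalized-count formulas, and repeat until convergence.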