10-601B Perceptrons and Large Margin
From Cohen Courses
Revision as of 09:15, 12 January 2016 by Wcohen
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.
Slides
Readings
- My notes on the voted perceptron. (You can skip sections 3-4 on ranking and the structured perceptron).
- Optional reading: Freund, Yoav, and Robert E. Schapire. "Large margin classification using the perceptron algorithm." Machine learning 37.3 (1999): 277-296.
- Optional background on linear algebra: Zico Kolter's linear algebra review lectures
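The voted perceptron from the notes above keeps every intermediate weight vector together with a count of how long it survived, then predicts by a count-weighted vote. A minimal sketch, assuming NumPy arrays and labels in {-1, +1} (function names here are illustrative, not from the notes):

```python
import numpy as np

def voted_perceptron_train(X, y, epochs=10):
    """Train a voted perceptron (Freund & Schapire style).

    X: (n, d) array of examples; y: labels in {-1, +1}.
    Returns a list of (weight_vector, survival_count) pairs.
    """
    n, d = X.shape
    w = np.zeros(d)
    c = 1          # how many examples the current w has survived
    voters = []
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (w @ X[i]) <= 0:       # mistake: retire w, update
                voters.append((w.copy(), c))
                w = w + y[i] * X[i]
                c = 1
            else:
                c += 1                        # w survives another example
    voters.append((w.copy(), c))              # keep the final vector too
    return voters

def voted_perceptron_predict(voters, x):
    """Sign of the count-weighted vote over all stored weight vectors."""
    s = sum(c * np.sign(w @ x) for w, c in voters)
    return 1 if s >= 0 else -1
```

Note the contrast with the plain (batch-final) perceptron: instead of returning only the last weight vector, every mistake-to-mistake vector gets a vote proportional to how long it survived.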
What You Should Know Afterward
- The difference between an on-line and batch algorithm.
- How to implement the voted perceptron.
- The definitions of a mistake bound and a margin.
- The definitions of, and intuitions behind, these concepts:
- The margin of a classifier relative to a dataset.
- What a constrained optimization problem is.
- The primal form of the SVM optimization problem.
- The dual form of the SVM optimization problem.
- What a support vector is.
- What a kernel function is.
- What slack variables are and why and when they are used in SVMs.
- How to explain the different parts (constraints, optimization criteria) of the primal and dual forms for the SVM.
- How the perceptron and SVM are similar and different.
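For reference on the primal and dual forms listed above, the standard soft-margin SVM can be written as follows (standard textbook notation; the lecture slides may use different symbols):

Primal form, with slack variables \(\xi_i\) allowing margin violations at cost \(C\):

\[
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0.
\]

Dual form, where the data enter only through inner products:

\[
\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j\,(x_i \cdot x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_{i=1}^{n}\alpha_i y_i = 0.
\]

The support vectors are the examples with \(\alpha_i > 0\), and replacing \(x_i \cdot x_j\) with a kernel \(K(x_i, x_j)\) gives the kernelized SVM.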