10-601 SVMs
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.
Slides
Readings
Assignment
- None
What You Should Know Afterward
- The definitions of, and intuitions behind, these concepts:
  - The margin of a classifier relative to a dataset.
  - What a constrained optimization problem is.
  - The primal form of the SVM optimization problem.
  - The dual form of the SVM optimization problem.
  - What a support vector is.
  - What a kernel function is.
  - What slack variables are and why and when they are used in SVMs.
- How to explain the different parts (constraints, optimization criteria) of the primal and dual forms of the SVM (both forms are written out below for reference).
- How the perceptron and SVM are similar and different (see the update-rule sketch after this list).
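
For reference, these are the standard textbook formulations the objectives above refer to (a sketch of the usual soft-margin SVM, not copied from the slides). With training examples (x_i, y_i), labels y_i in {-1, +1}, slack variables xi_i, and a regularization constant C:

 \begin{align}
 \text{Primal:}\quad & \min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^2 + C\sum_{i=1}^{n}\xi_i \\
 & \text{subject to}\quad y_i(\mathbf{w}^\top\mathbf{x}_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0 \\[4pt]
 \text{Dual:}\quad & \max_{\boldsymbol{\alpha}}\ \sum_{i=1}^{n}\alpha_i - \tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i y_j\,K(\mathbf{x}_i,\mathbf{x}_j) \\
 & \text{subject to}\quad 0 \le \alpha_i \le C,\quad \sum_{i=1}^{n}\alpha_i y_i = 0
 \end{align}

The geometric margin of the separating hyperplane is 2/||w||, so minimizing ||w||^2 maximizes the margin. The slack variables xi_i let points violate the margin when the data are not linearly separable, with C trading margin width against violations (taking C to infinity recovers the hard-margin SVM). Support vectors are exactly the training points with alpha_i > 0, and the kernel K(x_i, x_j) replaces the dot product x_i . x_j, so the dual never needs explicit feature vectors.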
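To make the perceptron/SVM comparison concrete, here is a minimal Python sketch (illustrative only; the function names and the parameters lr and lam are assumptions, not from the lecture). Both algorithms make additive updates of the form w <- w + y*x, but the perceptron updates only on outright mistakes, while a hinge-loss SVM update (Pegasos-style stochastic subgradient) also fires when a point is correct but inside the margin, and additionally shrinks w to keep the margin large:

 import numpy as np
 
 def perceptron_update(w, x, y, lr=1.0):
     # Perceptron: update only when the point is misclassified
     # (functional margin <= 0); no regularization, no margin width.
     if y * np.dot(w, x) <= 0:
         w = w + lr * y * x
     return w
 
 def svm_sgd_update(w, x, y, lr=0.01, lam=0.1):
     # Linear SVM via a hinge-loss subgradient step (Pegasos-style).
     w = (1.0 - lr * lam) * w      # shrink w: gradient of (lam/2)*||w||^2
     if y * np.dot(w, x) < 1:      # inside the margin or misclassified
         w = w + lr * y * x        # same additive form as the perceptron
     return w
 
 def rbf_kernel(x, z, gamma=1.0):
     # Example kernel function: an implicit dot product in a richer
     # feature space, used in place of np.dot(x, z) in the dual.
     return np.exp(-gamma * np.sum((x - z) ** 2))

Both classifiers predict sign(w . x); the difference lies entirely in which points trigger updates and in the SVM's shrinkage term, which is what gives it a large-margin solution rather than just any separating hyperplane.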