10-601B Kernelized SVMs
From Cohen Courses
Latest revision as of 22:34, 8 February 2016
This is a lecture in the syllabus for Machine Learning 10-601B in Spring 2016.
Slides
Readings
- Support Vector Machines: Bishop 7.1, Murphy 14.5
- Andrew Ng's notes on SVM optimization
What You Should Know Afterward
- The definitions of, and intuitions behind, these concepts:
- The margin of a classifier relative to a dataset.
- What a constrained optimization problem is.
- The primal form of the SVM optimization problem.
- The dual form of the SVM optimization problem.
- What a support vector is.
- What slack variables are and why and when they are used in SVMs.
- How to explain the different parts (constraints, optimization criteria) of the primal and dual forms for the SVM.
- How to kernelize an SVM by replacing inner products in the dual form with kernel evaluations.
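The kernel trick named in the last point can be sketched in a few lines: in the dual form, training data appear only through inner products, so replacing each inner product with a kernel K(x, z) yields a nonlinear classifier. As a minimal illustration (a kernel perceptron used as a simplified stand-in for dual SVM training, not the SVM optimization itself), the decision function has the dual shape f(x) = sum_i alpha_i y_i K(x_i, x) + b, and an RBF kernel lets it separate XOR, which no linear classifier can:

```python
import math

def rbf(x, z, gamma=1.0):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

# XOR: not linearly separable in the input space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]

# Dual-style parameters: one coefficient per training point, plus a bias.
alpha = [0.0] * len(X)
b = 0.0

# Kernel perceptron updates: bump alpha_i whenever point i is misclassified.
for _ in range(20):
    for i, (xi, yi) in enumerate(zip(X, y)):
        f = sum(alpha[j] * y[j] * rbf(X[j], xi) for j in range(len(X))) + b
        if yi * f <= 0:
            alpha[i] += 1.0
            b += yi

def predict(x):
    f = sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X))) + b
    return 1 if f > 0 else -1

preds = [predict(x) for x in X]  # matches y: the kernel makes XOR separable
```

Note the structural point this sketch shares with the dual SVM: the learned model is stored entirely as coefficients on training points (here every point ends up with a nonzero alpha; in an SVM, only the support vectors do), and prediction never needs an explicit feature map.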