10-601B Kernelized SVMs

From Cohen Courses
 
Latest revision as of 23:34, 8 February 2016

This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

Slides

Readings

What You Should Know Afterward

  • The definitions of, and intuitions behind, these concepts:
    • The margin of a classifier relative to a dataset.
    • What a constrained optimization problem is.
    • The primal form of the SVM optimization problem.
    • The dual form of the SVM optimization problem.
  • What a support vector is.
  • What slack variables are and why and when they are used in SVMs.
  • How to explain the different parts (constraints, optimization criteria) of the primal and dual forms for the SVM.
  • How to kernelize an SVM.
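The kernel trick behind the last bullet can be illustrated with a toy example. The sketch below uses a kernel perceptron rather than a full SVM solver — simpler to show, but the same core idea: the classifier is expressed entirely through kernel evaluations k(x_i, x), so only inner products in feature space are ever needed, and the dual coefficients alpha play a role analogous to the SVM's dual variables. The XOR dataset, the RBF kernel choice, and the gamma value are illustrative assumptions, not from the lecture slides.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel: k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Toy XOR data: not linearly separable, so a linear classifier fails,
# but a kernelized one succeeds.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])

# Precompute the Gram matrix K[i, j] = k(x_i, x_j).
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])

# Kernel perceptron: alpha[i] counts the mistakes made on example i.
# Decision function: f(x) = sum_i alpha[i] * y[i] * k(x_i, x) --
# the data enter only through kernel values, never raw feature vectors.
alpha = np.zeros(len(X))
for _ in range(20):  # a few passes over the data
    for i in range(len(X)):
        score = np.dot(alpha * y, K[:, i])
        pred = 1 if score > 0 else -1
        if pred != y[i]:
            alpha[i] += 1

preds = [1 if np.dot(alpha * y, K[:, i]) > 0 else -1 for i in range(len(X))]
# Training examples with alpha[i] > 0 are the ones the classifier
# depends on, analogous to an SVM's support vectors.
```

With the RBF kernel the learner fits XOR perfectly, which no linear classifier (and hence no linear-kernel SVM without feature expansion) can do.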