10-601B Kernelized SVMs


This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

=== Slides ===

=== Readings ===

* Support Vector Machines: Bishop 7.1, Murphy 14.5
* [http://cs229.stanford.edu/notes/cs229-notes3.pdf Andrew Ng's notes on SVM optimization]

=== What You Should Know Afterward ===

* The definitions of, and intuitions behind, these concepts:
** The margin of a classifier relative to a dataset.
** What a constrained optimization problem is.
** The primal form of the SVM optimization problem.
** The dual form of the SVM optimization problem.
* What a support vector is.
* What slack variables are and why and when they are used in SVMs.
* How to explain the different parts (constraints, optimization criteria) of the primal and dual forms for the SVM (a reference sketch of both forms is given after this list).
* How to kernelize an SVM (see the kernel-trick sketch below).
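
For reference, here is a sketch of the standard soft-margin formulation, roughly following the treatment in the readings above (notation varies by source): training examples <math>(x_i, y_i)</math> with <math>y_i \in \{-1,+1\}</math>, weight vector <math>w</math>, bias <math>b</math>, slack variables <math>\xi_i</math>, and penalty parameter <math>C</math>. The primal problem is

<math>
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_i \xi_i
\quad\text{subject to}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0\ \ \forall i,
</math>

and the corresponding dual problem is

<math>
\max_{\alpha}\ \sum_i \alpha_i - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, x_i^\top x_j
\quad\text{subject to}\quad 0 \le \alpha_i \le C,\ \ \sum_i \alpha_i y_i = 0.
</math>

Kernelizing the SVM amounts to replacing each inner product <math>x_i^\top x_j</math> in the dual with a kernel value <math>K(x_i, x_j)</math>, which yields the decision function <math>f(x) = \operatorname{sign}\left(\sum_i \alpha_i y_i K(x_i, x) + b\right)</math>; only the support vectors (the training examples with <math>\alpha_i > 0</math>) contribute to this sum.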
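
The kernelization bullet can also be illustrated computationally: the dual only touches the data through pairwise kernel values, so the learner can be handed a precomputed Gram matrix instead of feature vectors. The snippet below is a minimal sketch of that idea; scikit-learn, an RBF kernel, and the toy data are assumed purely for illustration and are not part of the course materials.

<pre>
# Minimal sketch of a kernelized SVM via a precomputed Gram matrix.
# Assumptions (not from the course page): scikit-learn, an RBF kernel, toy data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Toy 2-class data that is not linearly separable (inside vs. outside a circle).
X = rng.randn(200, 2)
y = (np.sum(X ** 2, axis=1) > 1.0).astype(int)

def rbf_gram(A, B, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

# The SVM never sees X directly, only kernel values between training points.
K_train = rbf_gram(X, X)
clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y)

# Prediction needs kernel values between test points and the training points.
X_test = rng.randn(5, 2)
print(clf.predict(rbf_gram(X_test, X)))
print("support vectors:", clf.n_support_.sum())
</pre>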