10-601B Perceptrons and Large Margin


This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

Slides

  • Slides in pdf: http://curtis.ml.cmu.edu/w/courses/images/d/d2/Perceptron-svm_02_01.pdf

Useful Additional Readings

  • The Perceptron Algorithm: Mitchell 4.4.1 & 4.1.2, Bishop 4.1.7
  • Support Vector Machines: Bishop 7.1, Murphy 14.5

What You Should Know Afterward

  • The difference between an on-line and batch algorithm.
  • The perceptron algorithm (a minimal sketch appears after this list).
  • The importance of margins in machine learning.
  • The definitions of, and intuitions behind, these concepts:
    • The margin of a classifier relative to a dataset.
    • What a constrained optimization problem is.
    • The SVM algorithm.
  • How the perceptron and SVM are similar and different.
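
A minimal, hedged sketch of two of the concepts above (the perceptron's mistake-driven update and the margin of a classifier relative to a dataset), written in Python with NumPy. It is not taken from the lecture slides; the function names (perceptron_train, geometric_margin) and the tiny dataset are made up for illustration.

  import numpy as np

  def perceptron_train(X, y, epochs=10):
      """Online perceptron: scan the data and update the weights after every mistake.

      X : (n_samples, n_features) array of inputs
      y : (n_samples,) array of labels in {-1, +1}
      """
      w = np.zeros(X.shape[1])
      b = 0.0
      for _ in range(epochs):
          for x_i, y_i in zip(X, y):
              # A mistake is any example the current weights misclassify (or score as 0).
              if y_i * (np.dot(w, x_i) + b) <= 0:
                  w += y_i * x_i   # move the decision boundary toward the mistake
                  b += y_i
      return w, b

  def geometric_margin(w, b, X, y):
      """Margin of the classifier (w, b) relative to the dataset (X, y):
      the smallest signed distance of any example to the decision boundary."""
      return np.min(y * (X @ w + b)) / np.linalg.norm(w)

  # Tiny linearly separable dataset, made up for illustration.
  X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
  y = np.array([1, 1, -1, -1])
  w, b = perceptron_train(X, y)
  print("weights:", w, "bias:", b, "margin:", geometric_margin(w, b, X, y))

On this separable toy data the perceptron stops making updates after a single pass, and the printed margin is the smallest distance from any training point to the learned boundary; the SVM, by contrast, chooses the boundary that makes this margin as large as possible.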