10-601B Perceptrons and Large Margin

From Cohen Courses

Revision as of 12:36, 2 February 2016

This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

Slides

  • Slides in pdf: http://curtis.ml.cmu.edu/w/courses/images/d/d2/Perceptron-svm_02_01.pdf

Useful Additional Readings

  • The Perceptron Algorithm: Mitchell 4.4.1 & 4.1.2, Bishop 4.1.7
  • Support Vector Machines: Bishop 7.1, Murphy 14.5

What You Should Know Afterward

  • The difference between an on-line and a batch algorithm.
  • The perceptron algorithm.
  • The importance of margins in machine learning.
  • The definitions of, and intuitions behind, these concepts:
    • The margin of a classifier relative to a dataset.
    • What a constrained optimization problem is.
    • The primal form of the SVM optimization problem.
    • What slack variables are and why and when they are used in SVMs.
  • How the perceptron and SVM are similar and different.
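As a concrete reference for the perceptron algorithm listed above, here is a minimal on-line perceptron sketch. It is an illustration only (the toy data and function name are hypothetical, not code from the course): the weights are updated only when a point is misclassified, i.e. when the label times the activation is not positive.

```python
def perceptron(X, y, epochs=10):
    """On-line perceptron: sweep the data, updating w and b on each mistake.

    X: list of feature tuples; y: labels in {-1, +1}.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, label in zip(X, y):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if label * activation <= 0:  # mistake: move the boundary toward x
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
    return w, b

# Hypothetical linearly separable toy set: positive iff x0 > x1.
X = [(2.0, 1.0), (3.0, 0.5), (1.0, 2.0), (0.5, 3.0)]
y = [1, 1, -1, -1]
w, b = perceptron(X, y)
```

Because the toy data are linearly separable, the perceptron convergence guarantee applies and the learned (w, b) classifies every training point correctly; unlike an SVM, however, the perceptron stops at any separating hyperplane rather than the maximum-margin one.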