Class meeting for 10-605 Parallel Perceptrons 2

From Cohen Courses
This is one of the class meetings on the [[Syllabus for Machine Learning with Large Datasets 10-605 in Fall 2016|schedule]] for the course [[Machine Learning with Large Datasets 10-605 in Fall_2016]].
  
 
=== Slides ===

Perceptrons, continued:
  
* [http://www.cs.cmu.edu/~wcohen/10-605/2016/mistake-bounds+struct-vp-2.pptx Slides in Powerpoint]
* [http://www.cs.cmu.edu/~wcohen/10-605/2016/mistake-bounds+struct-vp-2.pdf Slides in PDF]
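
As a quick reference for the averaged perceptron covered in this meeting, here is a minimal sketch in Python. The function names, the <code>epochs</code> parameter, and the use of {-1, +1} labels are illustrative assumptions, not notation from the slides. The idea: run the basic mistake-driven update, but return the average of the weight vector over every example visit, which approximates the voted perceptron's weighted vote at far lower storage cost.

<pre>
import numpy as np

def train_averaged_perceptron(X, y, epochs=5):
    """Averaged perceptron sketch (illustrative, not the slides' code).
    X: (n, d) array of feature vectors; y: length-n labels in {-1, +1}.
    Returns the average of w over all example visits."""
    n, d = X.shape
    w = np.zeros(d)        # current weight vector
    w_sum = np.zeros(d)    # running sum of w after each example visit
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.dot(w, X[i]) <= 0:  # mistake: perceptron update
                w += y[i] * X[i]
            w_sum += w                        # accumulate even on non-mistakes
    return w_sum / (epochs * n)

def predict(w, x):
    return 1 if np.dot(w, x) >= 0 else -1
</pre>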
  
 
Parallel perceptrons with iterative parameter mixing:
  
* [http://www.cs.cmu.edu/~wcohen/10-605/2016/parallel-perceptrons.pptx Slides in Powerpoint]
* [http://www.cs.cmu.edu/~wcohen/10-605/2016/parallel-perceptrons.pdf Slides in PDF]
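
For concreteness, here is a minimal sketch of iterative parameter mixing (IPM) in Python, under illustrative assumptions: uniform mixing weights and a sequential loop standing in for the parallel workers. Each iteration runs one perceptron epoch per shard starting from the current mixed weights, then averages the per-shard weight vectors and broadcasts the mixture as the starting point for the next iteration.

<pre>
import numpy as np

def perceptron_epoch(w0, X, y):
    """One pass of the basic perceptron over a shard, starting from w0."""
    w = w0.copy()
    for i in range(len(y)):
        if y[i] * np.dot(w, X[i]) <= 0:  # mistake: perceptron update
            w += y[i] * X[i]
    return w

def train_ipm(shards, iterations=10):
    """IPM sketch (illustrative): shards is a list of (X, y) pairs,
    one per simulated worker."""
    d = shards[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(iterations):
        # in a real system these per-shard epochs run in parallel
        mixed = [perceptron_epoch(w, X, y) for X, y in shards]
        w = np.mean(mixed, axis=0)  # uniform parameter mixing
    return w
</pre>

With <code>iterations=1</code> this roughly reduces to plain (one-shot) parameter mixing: train each shard independently, average once. Re-broadcasting the mixture before every epoch is what distinguishes the iterative version, and it is the variant for which a convergence theorem is given in the slides.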
  
 
=== Readings for the Class ===
 
=== Optional Readings ===
 
* [http://www.cs.cmu.edu/~wcohen/10-707/vp-notes/vp.pdf Notes on voted perceptron.]
=== What you should remember ===

* The averaged perceptron and the voted perceptron
* Approaches to parallelizing perceptrons (and other on-line learning methods, like SGD)
** Parameter mixing
** Iterative parameter mixing (IPM)
* The meaning and implications of the theorems given for convergence of the basic perceptron and the IPM version (one standard statement of the basic bound is restated after this list)
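
As a reminder of the first of those theorems, here is one common textbook formulation of the basic perceptron mistake bound (Novikoff's theorem); the notation is an assumption for illustration and not necessarily the slides'. Suppose some unit vector <math>u</math> separates the data with margin <math>\gamma > 0</math>, i.e. <math>y_i (u \cdot x_i) \ge \gamma</math> for every example, and every example satisfies <math>\|x_i\| \le R</math>. Then the number of mistakes <math>k</math> made by the basic perceptron, over any number of passes and in any order of presentation, satisfies

:<math>k \le \left( \frac{R}{\gamma} \right)^2 .</math>

The bound is independent of both the number of examples and the dimension. The IPM version has an analogous convergence guarantee; see the slides for the exact statement.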
