Class meeting for 10-405 Parallel Perceptrons

This is one of the class meetings on the schedule for the course Machine Learning with Large Datasets 10-405 in Spring 2018.

=== Slides ===

=== Quiz ===

=== Readings ===
* [http://www.cs.cmu.edu/~wcohen/10-601/vp-notes/vp.pdf Notes on voted perceptron.] Note: these were updated --[[User:Wcohen|Wcohen]] ([[User talk:Wcohen|talk]]) 10:28, 6 March 2018 (EST)

=== Optional Readings ===

=== Things to Remember ===

* Definition of a mistake bound
* Definition of the perceptron algorithm
** Mistake bound analysis for perceptrons, in terms of the margin and the radius of the examples
* Converting perceptrons to a batch setting: the voted perceptron and the averaged perceptron (see the sketches after this list)
* Definitions of the ranking perceptron and the kernel perceptron
* Relationship of the hash trick to kernels
* Parallelizing streaming ML algorithms
** Parameter mixing, and the effect it has on the mistake bounds for perceptrons
** Iterative parameter mixing, and the effect it has on the mistake bounds for perceptrons (see the sketches after this list)
* The ALLREDUCE algorithm and its complexity (see the sketches after this list)
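
The averaged perceptron is the simplest way to convert the online perceptron into a batch learner. Below is a minimal sketch, assuming dense NumPy feature vectors and labels in {-1, +1}; the function names and training loop are illustrative, not the implementation from the course notes.

<pre>
import numpy as np

def train_averaged_perceptron(X, y, epochs=5):
    """Run the ordinary perceptron and return the average of all weight
    vectors held during training (one term per example seen)."""
    n, d = X.shape
    w = np.zeros(d)        # current weight vector
    w_sum = np.zeros(d)    # running sum of weight vectors
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.dot(w, X[i]) <= 0:   # mistake: standard perceptron update
                w = w + y[i] * X[i]
            w_sum += w                        # accumulate for the average
    return w_sum / (epochs * n)

def predict(w_avg, X):
    return np.sign(X @ w_avg)
</pre>

Averaging the weight vectors seen during training approximates the voted perceptron's weighted vote while keeping only a single weight vector at prediction time.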
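Iterative parameter mixing alternates one local perceptron epoch per shard with an averaging (mixing) step. The sketch below simulates the shards sequentially and assumes uniform mixing weights of 1/n_shards; in a real deployment each shard's epoch would run on its own worker in parallel.

<pre>
import numpy as np

def perceptron_epoch(w, X, y):
    """One pass of the ordinary perceptron over a shard, starting from w."""
    w = w.copy()
    for i in range(X.shape[0]):
        if y[i] * np.dot(w, X[i]) <= 0:
            w += y[i] * X[i]
    return w

def iterative_parameter_mixing(X, y, n_shards=4, iterations=10):
    d = X.shape[1]
    w = np.zeros(d)
    shards = np.array_split(np.arange(X.shape[0]), n_shards)
    for _ in range(iterations):
        # Each "worker" starts its epoch from the current mixed weights.
        local = [perceptron_epoch(w, X[idx], y[idx]) for idx in shards]
        # Mix: uniform average of the per-shard weight vectors.
        w = np.mean(local, axis=0)
    return w
</pre>

Plain parameter mixing is the special case that mixes only once, after all local training has finished; iterative parameter mixing re-mixes after every local epoch.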
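ALLREDUCE combines a value from every worker (here, a sum) and hands the combined result back to all of them, which is exactly the primitive needed for the mixing step above. The following toy, single-process simulation uses a binary reduction tree; the function allreduce_sum and the tree layout are assumptions for illustration, not a specific MPI or Spark API.

<pre>
import numpy as np

def allreduce_sum(vectors):
    """Return, for every worker, the elementwise sum of all workers' vectors,
    simulated with a binary reduction tree."""
    vals = [v.astype(float) for v in vectors]
    n = len(vals)
    # Reduce phase: children add their partial sums into their parent,
    # doubling the stride each round.
    step = 1
    while step < n:
        for parent in range(0, n, 2 * step):
            child = parent + step
            if child < n:
                vals[parent] += vals[child]
        step *= 2
    # Broadcast phase: the root's total is returned to every worker
    # (in a real system it is pushed back down the same tree).
    total = vals[0]
    return [total.copy() for _ in range(n)]

# Example: average per-worker perceptron weights with one ALLREDUCE call.
workers = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
summed = allreduce_sum(workers)
mixed = summed[0] / len(workers)   # every worker ends up with the same average
</pre>

With n workers and d-dimensional vectors, the tree needs O(log n) communication rounds, each moving messages of size O(d).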