Class meeting for 10-605 Parallel Perceptrons 2

From Cohen Courses
This is one of the class meetings on the [[Syllabus for Machine Learning with Large Datasets 10-605 in Fall 2016|schedule]] for the course [[Machine Learning with Large Datasets 10-605 in Fall_2016]].
=== Slides ===
Perceptrons, continued:

* [http://www.cs.cmu.edu/~wcohen/10-605/2016/mistake-bounds+struct-vp-2.pptx Slides in Powerpoint]
* [http://www.cs.cmu.edu/~wcohen/10-605/2016/mistake-bounds+struct-vp-2.pdf Slides in PDF]

Parallel perceptrons with iterative parameter mixing:

* [http://www.cs.cmu.edu/~wcohen/10-605/2016/parallel-perceptrons.pptx Slides in Powerpoint]
* [http://www.cs.cmu.edu/~wcohen/10-605/2016/parallel-perceptrons.pdf Slides in PDF]
=== Readings for the Class ===
* Distributed Training Strategies for the Structured Perceptron, R. McDonald, K. Hall and G. Mann, North American Association for Computational Linguistics (NAACL), 2010.
=== Optional Readings ===
* [http://www.cs.cmu.edu/~wcohen/10-707/vp-notes/vp.pdf Notes on voted perceptron.]
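The averaged perceptron discussed in class can be summarized in a few lines; the sketch below is an illustration on toy data, not code from the course materials (the function name, toy dataset, and epoch count are all made up for the example). The averaged weight vector approximates the voted perceptron's majority vote while needing only one extra accumulator.

```python
import numpy as np

def averaged_perceptron(X, y, epochs=10):
    """Binary averaged perceptron (illustrative sketch).

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns the average of the weight vectors held after each
    example, which approximates the voted perceptron's prediction.
    """
    n, d = X.shape
    w = np.zeros(d)        # current weight vector
    w_sum = np.zeros(d)    # running sum of weight vectors
    count = 0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (w @ X[i]) <= 0:   # mistake-driven update
                w = w + y[i] * X[i]
            w_sum += w                   # accumulate after every example
            count += 1
    return w_sum / count

# Toy linearly separable data (hypothetical example)
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w_avg = averaged_perceptron(X, y)
preds = np.sign(X @ w_avg)
```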
=== What you should remember ===

* The averaged perceptron and the voted perceptron
* Approaches to parallelizing perceptrons (and other on-line learning methods, like SGD)
** Parameter mixing
** Iterative parameter mixing (IPM)
* The meaning and implications of the theorems given for convergence of the basic perceptron and the IPM version
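The IPM scheme listed above (from the McDonald et al. reading) can be sketched briefly: each round, every shard runs one perceptron epoch from the current shared weights, and the per-shard results are averaged into the next shared weights. This is a simplified serial simulation with uniform mixing weights; the function names and toy shards are invented for illustration.

```python
import numpy as np

def perceptron_epoch(w, X, y):
    """One sequential perceptron pass over a shard, starting from w."""
    w = w.copy()
    for i in range(len(y)):
        if y[i] * (w @ X[i]) <= 0:   # mistake: additive update
            w += y[i] * X[i]
    return w

def iterative_parameter_mixing(shards, rounds=5):
    """Iterative parameter mixing, simulated serially.

    Each round: train one epoch per shard (in parallel in the real
    algorithm), then mix (average) the shard weights and broadcast
    the mixture as the starting point for the next round.
    """
    d = shards[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(rounds):
        shard_weights = [perceptron_epoch(w, X, y) for X, y in shards]
        w = np.mean(shard_weights, axis=0)   # uniform mixing weights
    return w

# Two toy shards of linearly separable data (hypothetical example)
shard1 = (np.array([[1.0, 1.0], [-1.0, -1.0]]), np.array([1, -1]))
shard2 = (np.array([[2.0, 0.5], [-0.5, -2.0]]), np.array([1, -1]))
w = iterative_parameter_mixing([shard1, shard2])
```

In the real distributed setting the per-shard epochs run concurrently, and non-uniform mixing weights (e.g. proportional to the errors made on each shard) are also possible.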
Latest revision as of 16:41, 1 August 2017