Class meeting for 10-605 Deep Learning

From Cohen Courses
Revision as of 14:03, 24 October 2016

This is one of the class meetings on the schedule for the course Machine Learning with Large Datasets 10-605 in Fall 2016.

=== Slides ===

* Lecture 1: [http://www.cs.cmu.edu/~wcohen/10-605/deep-1.pptx Powerpoint], [http://www.cs.cmu.edu/~wcohen/10-605/deep-1.pdf PDF].

=== Quizzes ===

* [https://qna.cs.cmu.edu/#/pages/view/75 Quiz for lecture one]

=== Readings ===


=== Things to remember ===

* The underlying reasons deep networks are hard to train:
** Exploding/vanishing gradients
** Saturation
* The importance of key recent advances in neural networks:
** Matrix operations and GPU training
** ReLU, cross-entropy, softmax
* How backprop can be generalized to a sequence of assignment operations
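The vanishing-gradient and saturation points above can be illustrated numerically: the derivative of the logistic sigmoid is at most 0.25, so backprop through a deep stack of sigmoid units multiplies many small factors together, while the ReLU derivative is exactly 1 for positive inputs. A minimal pure-Python sketch (the depth-20 chain and the unit weights are illustrative assumptions, not from the lecture):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # derivative of the sigmoid: s(x) * (1 - s(x)), at most 0.25
    s = sigmoid(x)
    return s * (1.0 - s)

def chain_gradient(depth, x=0.0):
    """Gradient through a chain of `depth` sigmoid units (weights fixed
    at 1.0 for illustration): backprop multiplies local derivatives."""
    grad, a = 1.0, x
    for _ in range(depth):
        grad *= sigmoid_grad(a)   # each factor <= 0.25, so grad shrinks
        a = sigmoid(a)
    return grad

def relu_chain_gradient(depth, x=1.0):
    """Same chain with ReLU units: the local derivative is 1 whenever the
    input is positive, so the gradient does not decay with depth."""
    grad, a = 1.0, x
    for _ in range(depth):
        grad *= 1.0 if a > 0 else 0.0  # ReLU derivative
        a = max(0.0, a)
    return grad
```

With 20 sigmoid layers the gradient is below 0.25^20 (about 1e-12), while the ReLU chain keeps it at exactly 1 — which is one reason ReLU counts among the key recent advances listed above.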
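The softmax and cross-entropy terms above fit together neatly: softmax maps logits to a probability distribution, cross-entropy scores it against the true label, and the gradient of the combined loss with respect to the logits is simply the probabilities minus the one-hot label vector. A plain-Python sketch (the example logits are invented for illustration):

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy_loss(logits, label):
    """Return the cross-entropy loss and its gradient w.r.t. the logits."""
    probs = softmax(logits)
    loss = -math.log(probs[label])
    # combined softmax + cross-entropy gradient: probs - one_hot(label)
    grad = [p - (1.0 if i == label else 0.0) for i, p in enumerate(probs)]
    return loss, grad
```

The clean gradient `probs - one_hot` is why softmax and cross-entropy are usually paired: composing them cancels the awkward terms each would contribute alone.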
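The last point — generalizing backprop to a sequence of assignment operations — is the idea behind reverse-mode automatic differentiation: write the function as a list of simple assignments, then walk that list backwards, emitting one adjoint update per assignment. A small hand-worked sketch for the invented function f = (x*y + x)^2:

```python
def forward_and_backward(x, y):
    """Compute f = (x*y + x)**2 and its gradients via a Wengert list.

    Forward list:        Reverse adjoint updates (one per assignment):
      z1 = x * y           dz1 = ..., dy += dz1*x, dx += dz1*y
      z2 = z1 + x          dz2 = ..., dz1 += dz2, dx += dz2
      f  = z2 * z2         dz2 = df * 2 * z2
    """
    # forward pass: evaluate each assignment in order
    z1 = x * y
    z2 = z1 + x
    f = z2 * z2
    # backward pass: visit the assignments in reverse order
    df = 1.0
    dz2 = df * 2.0 * z2      # from f  = z2 * z2
    dz1 = dz2                # from z2 = z1 + x  (w.r.t. z1)
    dx = dz2                 # from z2 = z1 + x  (w.r.t. x)
    dy = dz1 * x             # from z1 = x * y   (w.r.t. y)
    dx += dz1 * y            # from z1 = x * y   (w.r.t. x) — adjoints accumulate
    return f, dx, dy
```

At x=2, y=3 this gives f = 64, df/dx = 2(xy+x)(y+1) = 64, and df/dy = 2(xy+x)x = 32, matching the analytic derivatives. Because each assignment contributes a fixed-form adjoint update, the same recipe mechanizes for arbitrarily long assignment sequences — which is how deep-learning frameworks differentiate whole networks.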