Class meeting for 10-605 Deep Learning
This is one of the class meetings on the schedule for the course Machine Learning with Large Datasets 10-605 in Fall_2016.
Slides
- TBD
Readings
- Automatic differentiation (a worked sketch follows this list):
- William's notes on automatic differentiation.
- Domke's blog post (clear, but without much detail) and another nice blog post.
- The clearest paper I've found is Reverse-Mode AD in a Functional Framework: Lambda the Ultimate Backpropagator.
- More general neural networks:
- Neural Networks and Deep Learning, an online book by Michael Nielsen, pitched at an appropriate level for 10-601, with a bunch of exercises and online sample programs in Python.
- For more detail, look at the MIT Press book (in preparation) from Bengio; it's very complete but also fairly technical.
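The readings above all center on the same idea: write the function to be differentiated as a sequence of assignment operations (a "Wengert list"), then sweep that list in reverse, applying the chain rule once per assignment. Below is a minimal sketch of that idea in Python; the names (f_forward, f_backward) and the example function x1*x2 + sin(x1) are illustrative, not code from any of the readings.

 import math
 
 def f_forward(x1, x2):
     # Forward sweep: compute f(x1, x2) = x1*x2 + sin(x1) as a sequence
     # of assignments, recording every intermediate value (the "tape").
     v1 = x1 * x2
     v2 = math.sin(x1)
     v3 = v1 + v2
     return v3, (x1, x2, v1, v2)
 
 def f_backward(tape):
     # Reverse sweep: walk the assignments backwards, applying the chain
     # rule once per operation to accumulate d(output)/d(input).
     x1, x2, v1, v2 = tape
     dv3 = 1.0                 # seed: d(v3)/d(v3)
     dv1 = dv3                 # from v3 = v1 + v2
     dv2 = dv3
     dx1 = dv2 * math.cos(x1)  # from v2 = sin(x1)
     dx1 += dv1 * x2           # from v1 = x1 * x2
     dx2 = dv1 * x1
     return dx1, dx2
 
 y, tape = f_forward(2.0, 3.0)
 print(f_backward(tape))  # analytic gradient is (x2 + cos(x1), x1)

One forward pass plus one reverse pass yields the gradient with respect to every input at once, which is why reverse mode is the right fit for training networks with many parameters and a single scalar loss.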
Things to remember
- The underlying reasons deep networks are hard to train (a numeric sketch follows this list)
- Exploding/vanishing gradients
- Saturation
- The importance of key recent advances in neural networks:
- Matrix operations and GPU training
- ReLU, cross-entropy, softmax (a softmax/cross-entropy sketch also follows this list)
- How backprop can be generalized to a sequence of assignment operations (the reverse-mode AD idea sketched under Readings above)
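On the first point, here is a back-of-the-envelope sketch of why deep sigmoid networks vanish: by the chain rule, the gradient reaching layer 1 of a depth-d network includes a product of d activation derivatives, and the sigmoid's derivative is at most 0.25, so that factor alone shrinks geometrically with depth. (The weight matrices contribute further factors, which is how the product can instead explode.) The depth and pre-activation values below are illustrative, not course code.

 import math
 
 depth = 30
 
 # Best case for a sigmoid unit: sigma'(0) = 0.25 at every layer.
 print("sigmoid, best case:", 0.25 ** depth)   # ~8.7e-19: the gradient vanishes
 
 # Saturation: at z = 5, sigma'(z) = sigma(z) * (1 - sigma(z)) ~ 0.0066,
 # so a saturated unit passes back almost no gradient at all.
 s = 1.0 / (1.0 + math.exp(-5.0))
 print("sigmoid, saturated:", (s * (1.0 - s)) ** depth)
 
 # ReLU: the derivative is exactly 1 wherever the unit is active, so this
 # particular chain-rule factor does not shrink with depth.
 print("relu, active path:", 1.0 ** depth)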
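And on the output layer, a minimal sketch of the softmax/cross-entropy combination, again with assumed function names. Shifting the logits by their maximum before exponentiating is the standard numerical-stability trick, and the pairing is convenient for backprop because the gradient of the loss with respect to the logits reduces to the predicted probabilities minus the one-hot target.

 import numpy as np
 
 def softmax(logits):
     shifted = logits - np.max(logits)  # subtract the max to avoid overflow in exp
     exps = np.exp(shifted)
     return exps / np.sum(exps)
 
 def cross_entropy(probs, target_index):
     # Negative log-likelihood of the true class.
     return -np.log(probs[target_index])
 
 logits = np.array([2.0, 1.0, -1.0])
 probs = softmax(logits)
 print(probs, cross_entropy(probs, target_index=0))
 # Gradient of the loss w.r.t. the logits: probs - one_hot(target).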