10-601 Deep Learning 1

From Cohen Courses
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

=== Slides ===

=== Readings ===

This area is moving very fast and the textbooks are not up-to-date. Some recommended readings:
* [http://neuralnetworksanddeeplearning.com/index.html Neural Networks and Deep Learning], an online book by Michael Nielsen, pitched at an appropriate level for 10-601, which has a bunch of exercises and on-line sample programs in Python.
* The [http://cs231n.github.io/ Stanford class CS231n: Convolutional Neural Networks for Visual Recognition] has nice on-line notes.
I also used some on-line visualizations in the materials for the lecture, especially the part on ConvNets:

* [https://en.wikipedia.org/wiki/Convolution The Wikipedia page for convolutions] has nice animations of 1-D convolutions.
* [http://matlabtricks.com/post-5/3x3-convolution-kernels-with-online-demo On-line demo of 2-D convolutions for image processing.]
* [https://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html There's an on-line demo of CNNs] which are trained in your browser (!)
* [http://scs.ryerson.ca/~aharley/vis/conv/ 3D visualization of a trained net.]
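
To make the 2-D convolution demos above concrete, here is a minimal numpy sketch of a "valid" convolution; the 3x3 edge-detection kernel and the random 8x8 image are illustrative choices, not anything prescribed by the demos.

<pre>
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    k = kernel[::-1, ::-1]            # a true convolution flips the kernel
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * k)
    return out

# A classic 3x3 edge-detection kernel, like those in the on-line demo above.
edge = np.array([[-1., -1., -1.],
                 [-1.,  8., -1.],
                 [-1., -1., -1.]])
img = np.random.rand(8, 8)            # stand-in for a grayscale image
print(conv2d(img, edge).shape)        # (6, 6): the kernel trims a 1-pixel border
</pre>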
 
  
For more detail, look at the [http://www.deeplearningbook.org/ MIT Press book] (in preparation) from Bengio - it's very complete but also fairly technical.
=== Things to remember ===

* The underlying reasons deep networks are hard to train (see the first sketch after this list)
** Exploding/vanishing gradients
** Saturation
* The importance of key recent advances in neural networks (see the second sketch after this list):
** Matrix operations and GPU training
** ReLU, cross-entropy, softmax
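
As a small numerical sketch of the first point: the sigmoid's derivative is at most 0.25 and nearly zero for saturated units, and backprop multiplies roughly one (weight * sigmoid') factor per layer, so gradients shrink or blow up geometrically with depth. The depths and weight values below are arbitrary illustrative choices.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Saturation: sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)) peaks at 0.25 (z = 0)
# and falls toward 0 as |z| grows, so saturated units pass back almost no gradient.
z = np.array([0.0, 2.0, 5.0, 10.0])
print(sigmoid(z) * (1 - sigmoid(z)))      # [2.5e-01 1.05e-01 6.6e-03 4.5e-05]

# One (w * sigmoid') factor per layer => geometric decay or growth with depth.
for w in [1.0, 8.0]:                      # illustrative weight scales
    for depth in [5, 10, 20]:
        print(w, depth, (w * 0.25) ** depth)   # vanishes for w=1, explodes for w=8
</pre>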

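And a minimal sketch of the second point: a one-hidden-layer forward pass written entirely as matrix operations (the form that GPUs accelerate), using ReLU, softmax, and cross-entropy. The layer sizes, random weights, and data are all made up for illustration.

<pre>
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)                 # doesn't saturate for z > 0

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # subtract the max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-probability of the correct class
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

# Illustrative sizes: batch of 32, 100 inputs, 50 hidden units, 10 classes.
X = rng.normal(size=(32, 100))
y = rng.integers(0, 10, size=32)
W1, b1 = 0.1 * rng.normal(size=(100, 50)), np.zeros(50)
W2, b2 = 0.1 * rng.normal(size=(50, 10)), np.zeros(10)

hidden = relu(X @ W1 + b1)                    # everything is a matrix product,
probs = softmax(hidden @ W2 + b2)             # which is what GPUs parallelize
print(cross_entropy(probs, y))                # ~ln(10) = 2.3 at random weights
</pre>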