Class meeting for 10-605 Deep Learning

From Cohen Courses
 
* Automatic differentiation:
** William's notes on [http://www.cs.cmu.edu/~wcohen/10-605/notes/autodiff.pdf automatic differentiation].
** [https://justindomke.wordpress.com/2009/03/24/a-simple-explanation-of-reverse-mode-automatic-differentiation/ Domke's blog post] - clear but not much detail - and [http://colah.github.io/posts/2015-08-Backprop/ another nice blog post].
** The clearest paper I've found is [http://www.bcl.hamilton.ie/~barak/papers/toplas-reverse.pdf Reverse-Mode AD in a Functional Framework: Lambda the Ultimate Backpropagator].
** [http://homes.cs.washington.edu/~naveenks/files/2009_Cranfield_PPT.pdf An OK slide deck].
** [https://github.com/HIPS/autograd autograd] - a similar project.
** [http://arxiv.org/pdf/1502.05767v2.pdf A survey paper] I haven't really looked at yet.
 

Revision as of 17:13, 17 October 2016

This is one of the class meetings on the schedule for the course Machine Learning with Large Datasets 10-605 in Fall_2016.

Slides

  • TBD


Readings
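
The readings above all center on reverse-mode automatic differentiation. As a rough illustration of the core idea they describe (a sketch of my own, not code from the course notes or the linked materials; the `Var` class and its methods are hypothetical names), each value records the operation that produced it during the forward pass, and a backward sweep then accumulates adjoints in reverse topological order:

```python
# Minimal sketch of reverse-mode automatic differentiation.
# Each Var remembers its parents and the local partial derivative
# with respect to each parent; backward() sweeps the graph in
# reverse, accumulating d(output)/d(node) into .grad.

class Var:
    def __init__(self, value, parents=()):
        self.value = value        # forward value
        self.parents = parents    # list of (parent Var, local gradient)
        self.grad = 0.0           # adjoint, filled in by backward()

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph, then propagate adjoints in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for parent, _ in v.parents:
                    visit(parent)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for parent, local_grad in v.parents:
                parent.grad += v.grad * local_grad

# Example: f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(f.value, x.grad, y.grad)   # 15.0 5.0 3.0
```

One forward pass plus one backward sweep yields the gradient with respect to every input at once, which is why reverse mode is the right fit for training networks with many parameters and a scalar loss.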