Class meeting for 10-605 Deep Learning
From Cohen Courses
Revision as of 17:13, 17 October 2016
This is one of the class meetings on the schedule for the course Machine Learning with Large Datasets 10-605 in Fall_2016.
== Slides ==
* TBD

== Readings ==
* Automatic differentiation:
** William's notes on [http://www.cs.cmu.edu/~wcohen/10-605/notes/autodiff.pdf automatic differentiation].
** [https://justindomke.wordpress.com/2009/03/24/a-simple-explanation-of-reverse-mode-automatic-differentiation/ Domke's blog post] - clear but not much detail - and [http://colah.github.io/posts/2015-08-Backprop/ another nice blog post].
** The clearest paper I've found is [http://www.bcl.hamilton.ie/~barak/papers/toplas-reverse.pdf Reverse-Mode AD in a Functional Framework: Lambda the Ultimate Backpropagator].
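To give a concrete feel for what the readings cover, here is a minimal sketch of reverse-mode automatic differentiation in Python. This is not from the course notes; the <code>Var</code> class and its operations are illustrative assumptions, supporting only addition and multiplication:

```python
# Minimal reverse-mode automatic differentiation sketch.
# Each Var records its value and, for each parent in the computation
# graph, the local partial derivative with respect to that parent.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # sequence of (parent Var, local gradient)
        self.grad = 0.0          # adjoint dL/d(this node), filled by backward()

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph by depth-first search, then sweep
        # in reverse, accumulating adjoints into each node's .grad.
        order, seen = [], set()

        def visit(v):
            if v not in seen:
                seen.add(v)
                for p, _ in v.parents:
                    visit(p)
                order.append(v)

        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, local in v.parents:
                p.grad += v.grad * local

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(f.value, x.grad, y.grad)   # 15.0 5.0 3.0
```

The single reverse sweep computes the gradient with respect to every input at once, which is why reverse mode is the method of choice when a scalar loss depends on many parameters.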