10-601B Neural networks and Backprop

From Cohen Courses

This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.

=== Slides ===
 
* [http://curtis.ml.cmu.edu/w/courses/images/8/8b/Kernelized-svms.pdf Kernelized SVM slides in PDF]
* [http://curtis.ml.cmu.edu/w/courses/images/a/aa/Kernelized-svms.pptx Kernelized SVM slides in PPT]
  
 
=== Readings ===
 
* Mitchell: Ch. 4; Murphy: Ch. 16.5

=== What You Should Know Afterward ===

* What functions can be expressed by multi-layer networks that a single-layer network cannot express
* The backpropagation algorithm, and the loss function it is used to minimize (a minimal sketch follows this list)
* In outline, how deep neural networks are trained
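
The following sketch is not part of the course materials; it is a minimal NumPy illustration, under assumed layer sizes, learning rate, and iteration count, of the three points above: XOR is a function that no single-layer network can express, backpropagation applies the chain rule to compute the gradient of a chosen loss (squared error here), and training consists of repeated gradient-descent updates of the weights.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, so no single-layer (linear) network can express it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights (hidden size 4 is an arbitrary choice)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 1.0                       # illustrative learning rate
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    yhat = sigmoid(h @ W2 + b2)       # network output

    # Loss: squared error, L = 0.5 * sum((yhat - y)^2).
    # Backward pass (backpropagation): apply the chain rule layer by layer.
    delta2 = (yhat - y) * yhat * (1 - yhat)     # dL/d(output pre-activation)
    delta1 = (delta2 @ W2.T) * h * (1 - h)      # dL/d(hidden pre-activation)

    # Gradient-descent updates.
    W2 -= lr * h.T @ delta2
    b2 -= lr * delta2.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0, keepdims=True)

print(np.round(yhat, 2))   # typically close to [[0], [1], [1], [0]]
</pre>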