10-601B Kernelized SVMs
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.
Slides
Readings
- Support Vector Machines: Bishop 7.1, Murphy 14.5
- Andrew Ng's notes on SVM optimization: http://cs229.stanford.edu/notes/cs229-notes3.pdf (a small kernelized-SVM code sketch follows this list)
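
The sketch below is not part of the course materials; it is a minimal illustration of the kernel trick covered in the readings, assuming scikit-learn is available and using a toy two-moons dataset. The solver works only with kernel evaluations, so swapping the kernel changes the implicit feature space without changing the training code.

# Minimal kernelized-SVM sketch (illustrative only, not course code).
# Assumes scikit-learn; SVC solves the dual problem, so it only needs
# kernel evaluations K(x_i, x_j) -- the "kernel trick".
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Toy data that is not linearly separable in the input space.
X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

# A linear SVM struggles here; an RBF kernel implicitly maps the data
# to a feature space where a separating hyperplane exists.
linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

print("linear kernel training accuracy:", linear_svm.score(X, y))
print("RBF kernel training accuracy:   ", rbf_svm.score(X, y))
print("number of support vectors (RBF):", rbf_svm.support_vectors_.shape[0])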
What You Should Know Afterward
- Which functions can be expressed by a multi-layer network but not by a single-layer network
- The backpropagation algorithm, and what loss is associated with it (a minimal sketch follows this list)
- In outline, how deep neural networks are trained
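
The following is a minimal backpropagation sketch, not taken from the course; it assumes numpy and trains a one-hidden-layer network with a squared loss on XOR, a function that no single-layer (linear) network can express. The hidden width, learning rate, and step count are illustrative choices.

# Minimal backpropagation sketch (illustrative only, not course code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 1.0

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)               # squared loss on the output

    # Backward pass: chain rule from the loss back to each weight.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out;  d_b2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_h;    d_b1 = d_h.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * d_W1;  b1 -= lr * d_b1
    W2 -= lr * d_W2;  b2 -= lr * d_b2

print("final loss:", loss)
print("predictions:", out.round(2).ravel())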