10-601 Matrix Factorization
This is a lecture used in the Syllabus for Machine Learning 10-601B in Spring 2016.
Slides
Readings
Matrix factorization and collaborative filtering are not covered in Murphy or Mitchell. Some external readings are below.
- Koren, Yehuda, Robert Bell, and Chris Volinsky. "Matrix factorization techniques for recommender systems." Computer 42.8 (2009): 30-37.
- There's a nice description of the gradient-based approach to MF, and a scheme for parallelizing it, by Gemulla et al.: http://people.mpi-inf.mpg.de/~rgemulla/publications/rj10481rev.pdf (a minimal code sketch of the gradient-based approach follows this list).
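The sketch below shows the core loop of gradient-based matrix factorization in the spirit of the Koren et al. and Gemulla et al. readings: fit a low-rank model R ~ U V^T by stochastic gradient steps on the observed cells only. All function names and hyperparameter values here are illustrative assumptions, not taken from the lecture.

```python
# Minimal SGD matrix factorization sketch (illustrative, not the lecture's code).
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=2, lr=0.02, reg=0.05, epochs=50, seed=0):
    """Fit R ~ U V^T from a list of observed cells [(user, item, rating), ...]."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))    # per-user embeddings
    V = 0.1 * rng.standard_normal((n_items, k))    # per-item factors ("prototypes")
    for _ in range(epochs):
        for idx in rng.permutation(len(ratings)):  # visit observed cells in random order
            u, i, r = ratings[idx]
            err = r - U[u] @ V[i]                  # prediction error on this one cell
            u_old = U[u].copy()                    # keep pre-update copy for V's step
            # SGD step on the L2-regularized squared reconstruction error
            U[u] += lr * (err * V[i] - reg * u_old)
            V[i] += lr * (err * u_old - reg * V[i])
    return U, V

# Toy usage: 3 users, 4 items, 5 observed ratings; unobserved cells get predicted.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0), (2, 3, 2.0)]
U, V = sgd_mf(ratings, n_users=3, n_items=4)
print(np.round(U @ V.T, 2))  # reconstructed/predicted rating matrix
```

Gemulla et al.'s scheme parallelizes essentially this loop by partitioning the rating matrix into blocks whose rows and columns do not overlap, so that blocks in the same stratum can be updated on different workers without conflicting gradient steps.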
Summary
You should know:
- What loss function and constraints are associated with PCA - i.e., what the "PCA optimization problem" is (one standard formulation is written out after this list).
- How to interpret the low-dimensional embedding of instances, and the "prototypes" produced by PCA and MF techniques.
- How to interpret the prototypes in the case of dimension reduction for images.
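As a quick reference for the first point, one standard way to state the PCA optimization problem is the reconstruction-error formulation below; the notation is ours, not necessarily the lecture's.

```latex
% PCA as minimizing reconstruction error over orthonormal bases W:
\min_{W \in \mathbb{R}^{d \times k}} \; \sum_{i=1}^{n} \left\| x_i - W W^\top x_i \right\|_2^2
\qquad \text{subject to} \quad W^\top W = I_k
```

Here the x_i are mean-centered instances in R^d. The k-dimensional embedding of instance x_i is W^T x_i, and each column of W is a "prototype" direction in the original feature space. When the instances are images, each column of W can be reshaped back into an image and inspected directly (the "eigenfaces" view of dimension reduction).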