Syllabus for Machine Learning with Large Datasets 10-405 in Spring 2018

This is the syllabus for [[Machine Learning with Large Datasets 10-405 in Spring 2018]].   
 
== Ideas for extensions to the HW assignments ==
This is not a complete list! You can use any of these as a starting point, but feel free to think up your own extensions.

HW2 (NB in GuineaPig):
* The assignment proposes one particular scheme for parallelizing the training/testing algorithm.  Consider another parallelization algorithm.
* Implement a similarly scalable Rocchio algorithm and compare it with NB (a rough sketch of the non-scalable version follows this list).
* Reimplement the same algorithm in Spark (or some other dataflow language) and compare (see the Spark counting sketch after this list).
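For the Rocchio extension, a rough in-memory Python sketch of the idea is below. The toy documents, the tokenizer, and the tf-idf weighting are illustrative assumptions; this is the non-scalable version that the extension would then reimplement as a scalable dataflow and compare against NB.

<pre>
# Rough in-memory Rocchio sketch: represent each class by the centroid of its
# tf-idf document vectors and label a test document with the class whose
# centroid has the highest cosine similarity.
# The toy training data and the tokenizer are illustrative assumptions.
import math
from collections import Counter, defaultdict

train = [
    ("sports", "the game was a close game"),
    ("sports", "the team won the match"),
    ("politics", "the election results were close"),
    ("politics", "the senate passed the bill"),
]

def tokenize(doc):
    return doc.lower().split()

# document frequencies, for idf
df = Counter()
for _, doc in train:
    df.update(set(tokenize(doc)))
n_docs = len(train)

def tfidf(doc):
    tf = Counter(tokenize(doc))
    return {w: c * math.log(n_docs / df[w]) for w, c in tf.items() if w in df}

# class centroids: average the tf-idf vectors of each class's documents
centroids, n_per_class = defaultdict(Counter), Counter()
for label, doc in train:
    n_per_class[label] += 1
    for w, v in tfidf(doc).items():
        centroids[label][w] += v
for label, vec in centroids.items():
    for w in vec:
        vec[w] /= n_per_class[label]

def cosine(u, v):
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(doc):
    vec = tfidf(doc)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

print(classify("the team played a great game"))   # expected: sports
</pre>

The interesting part of the extension is doing the counting and centroid computation as dataflow joins over the same data used for NB.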
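For the Spark reimplementation, the heart of NB training is counting (label, word) pairs; a rough PySpark sketch of just that counting step is below. The input format (first whitespace-separated token is the label, the rest is the document) and the file paths are assumptions, not the actual HW format.

<pre>
# Rough PySpark sketch of the counting step at the core of Naive Bayes
# training: count (label, word) pairs and per-label token totals.
# The input format and the paths are assumptions, not the actual HW format.
from operator import add
from pyspark import SparkContext

sc = SparkContext(appName="nb-counts")
lines = sc.textFile("train.txt")                 # placeholder path

def label_word_pairs(line):
    toks = line.split()
    if not toks:
        return []
    label, words = toks[0], toks[1:]
    return [((label, w), 1) for w in words]

# C[y, w]: how often word w appears in documents with label y
word_counts = lines.flatMap(label_word_pairs).reduceByKey(add)
# C[y]: total number of tokens seen with label y
label_counts = word_counts.map(lambda kv: (kv[0][0], kv[1])).reduceByKey(add)

word_counts.saveAsTextFile("nb-word-counts")     # placeholder output paths
label_counts.saveAsTextFile("nb-label-counts")
</pre>
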
HW3 (Logistic regression and SGD):
* Evaluate the hash trick for Naive Bayes systematically on a series of datasets (a sketch of hashed NB counting follows this list).
* Implement a parameter-mixing version of logistic regression and evaluate it (see the parameter-averaging sketch after this list).
* A [https://www.aclweb.org/anthology/P12-2018 recent paper] proposes (roughly) using an SVM with NB-transformed features. Implement this and compare (a sketch of the feature transformation follows this list).
* The personalization method described in class is based on [https://www.umiacs.umd.edu/~hal/docs/daume07easyadapt.pdf a transfer learning method] which works similarly. Many Wikipedia pages are available in multiple languages, and words in related languages tend to be lexically similar (e.g., "astrónomo" is Spanish for "astronomer"). Suppose the features were character n-grams (e.g., "astr", "stro", "tron", ...): does domain transfer work for the task of classifying Wikipedia pages? Construct a dataset and an experiment to test this hypothesis (a sketch of the feature-augmentation trick follows this list).
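For the hash-trick extension, the core change to NB is to hash every feature into one of B buckets and keep fixed-size count arrays instead of one count per distinct word. A small serial sketch is below; B, the toy data, and the tokenizer are assumptions, and a real experiment should use a stable hash rather than Python's built-in hash().

<pre>
# Sketch of the hash trick for Naive Bayes: hash each word into one of B
# buckets and keep per-label count arrays of fixed size B, so memory no
# longer grows with the vocabulary.  Collisions trade accuracy for memory.
# B, the toy data, and the tokenizer are illustrative assumptions.
import math
from collections import Counter

B = 2 ** 12                       # number of hash buckets

def bucket(word):
    # Python's hash() is randomized across runs; use a stable hash
    # (e.g. hashlib) in a real experiment.
    return hash(word) % B

train = [
    ("pos", "a truly great and enjoyable movie"),
    ("neg", "a dull and boring movie"),
]

counts = {}                       # label -> list of B bucket counts
label_totals = Counter()          # total token count per label
for label, doc in train:
    vec = counts.setdefault(label, [0] * B)
    for w in doc.lower().split():
        vec[bucket(w)] += 1
        label_totals[label] += 1

def log_prob(doc, label, alpha=1.0):
    # multinomial log P(doc | label) with add-alpha smoothing over B buckets;
    # the class prior is omitted since the toy classes are balanced
    vec, denom = counts[label], label_totals[label] + alpha * B
    return sum(math.log((vec[bucket(w)] + alpha) / denom)
               for w in doc.lower().split())

def classify(doc):
    return max(counts, key=lambda label: log_prob(doc, label))

print(classify("great movie"))    # expected: pos
</pre>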
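For parameter mixing, the simplest variant splits the training data into shards, runs SGD on each shard independently, and averages the resulting weight vectors. A serial simulation of that scheme on synthetic data is below; the data generator, shard count, learning rate, and epoch count are all assumptions.

<pre>
# Serial simulation of parameter-mixing logistic regression: run SGD
# independently on each shard of the data, then average the per-shard
# weight vectors.  The synthetic data, number of shards, learning rate,
# and epoch count are illustrative assumptions.
import math
import random

random.seed(0)

def make_example():
    # label is 1 iff x1 + x2 > 1; the feature vector includes a bias term
    x1, x2 = random.random(), random.random()
    return ([1.0, x1, x2], 1 if x1 + x2 > 1 else 0)

data = [make_example() for _ in range(2000)]
n_shards = 4
shards = [data[i::n_shards] for i in range(n_shards)]

def sgd(shard, dim=3, rate=0.5, epochs=5):
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in shard:
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for i in range(dim):
                w[i] += rate * (y - p) * x[i]
    return w

# "map": one model per shard; "reduce": average the weights
shard_weights = [sgd(s) for s in shards]
w_mixed = [sum(ws) / n_shards for ws in zip(*shard_weights)]

def accuracy(w):
    hits = sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1)
               for x, y in data)
    return hits / len(data)

print("mixed-model training accuracy:", accuracy(w_mixed))
</pre>

A natural evaluation is to compare the mixed model against a single model trained on all the data, varying the number of shards.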
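For the NB-transformed-features idea, the linked paper's trick is (roughly): compute a log-count ratio r from smoothed per-class counts of binarized features, scale the features by r, and train a linear classifier on the result. A small scikit-learn sketch of that transformation is below; the toy data, the smoothing constant, and the choice of LinearSVC are assumptions about one reasonable setup, not the paper's exact experimental protocol.

<pre>
# Sketch of "SVM with NB-transformed features": compute the log-count ratio
# r from smoothed per-class counts of binarized features, scale the features
# by r, and train a linear SVM.  The toy data, alpha, and classifier choice
# are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

docs = ["a truly great movie", "an enjoyable and great film",
        "a dull boring movie", "a boring and tedious film"]
y = np.array([1, 1, 0, 0])                     # 1 = positive, 0 = negative

vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs).toarray().astype(float)

alpha = 1.0
p = X[y == 1].sum(axis=0) + alpha              # smoothed positive counts
q = X[y == 0].sum(axis=0) + alpha              # smoothed negative counts
r = np.log((p / p.sum()) / (q / q.sum()))      # log-count ratio

clf = LinearSVC(C=1.0).fit(X * r, y)           # train on NB-scaled features

test = vec.transform(["a great and enjoyable film"]).toarray().astype(float)
print(clf.predict(test * r))                   # should print [1]
</pre>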
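For the cross-lingual idea, the feature-augmentation trick in the linked transfer-learning paper is easy to prototype: every feature fires once as a shared copy and once as a domain-specific copy, and the learner figures out which features transfer across domains. A sketch of the feature extractor, using character 4-grams and language codes as the domains (both assumptions about how one might set up the Wikipedia experiment), is below.

<pre>
# Sketch of the feature-augmentation transfer trick applied to character
# n-gram features: each n-gram fires once as a shared feature and once as a
# domain-specific feature, so a linear classifier can learn which n-grams
# carry over across languages and which do not.
# The n-gram size, the domain codes, and the examples are assumptions.
from collections import Counter

def char_ngrams(text, n=4):
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def augmented_features(text, domain, n=4):
    feats = Counter()
    for g in char_ngrams(text, n):
        feats[("shared", g)] += 1      # copy that fires in every domain
        feats[(domain, g)] += 1        # copy that fires only in this domain
    return feats

# e.g. an English and a Spanish word for the same concept share "astr", ...
print(augmented_features("astronomer", "en"))
print(augmented_features("astrónomo", "es"))
</pre>
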
  
 
=== Notes ===
 
* Homeworks, unless otherwise posted, will be due when the next HW comes out.
* Lecture notes and/or slides will be (re)posted around the time of the lectures.