Machine Learning 10-601 in Fall 2013
- Important announcements will be made here as well as on Piazza.
Important People and Places
- Instructors: William Cohen and Eric Xing, Machine Learning Dept and LTI
- Course secretary: Sharon Cavlovich, email@example.com, 412-268-5196
- When/where: M/W 4:30-5:50, Doherty Hall 2315 (not 1:30-2:50 as was announced earlier!)
- Classes will start on Wednesday, Sept 4 (the Wed after Labor Day)
- Course Number: ML 10-601
- TAs and recitation schedule:
- Guanyu Wang (firstname.lastname@example.org, guanyuw@andrew), recitation: Mon. 6:30pm-7:30pm Porter Hall A18C
- William Yang Wang (email@example.com, yww@andrew), recitation: Tue. 5pm-6pm Gates 4215
- Shu-Hao Yu (firstname.lastname@example.org, shuhaoy@andrew), recitation: Wed. 6:30pm-7:30pm Wean 5403
- Avinava Dubey (email@example.com), recitation: Thu. 5pm-6pm Porter Hall A18C
- Pengtao Xie (firstname.lastname@example.org, pxie1@andrew), recitation: Fri. 5pm-6pm GHC 4215
- Shangqing Zhang (email@example.com, shangqiz@andrew), recitation leader-at-large
- Ying Shen (firstname.lastname@example.org), recitation leader-at-large
- Recitations will start after Sept 4
- Syllabus (including lecture slides and HWs): Syllabus for Machine Learning 10-601
- On-line lectures: recordings will be posted to MediaSite within 24 hrs of each lecture; use your Andrew id to log in.
- Office hours for William and Eric:
- William and Eric will hold office hours in DH 2315 immediately after class, from 5:50 to 6:30pm (I'm told the room is free until 7pm). Typically Eric will hold office hours on Monday and William on Wednesday.
- We'll be using BlackBoard and Autolab for most assignments.
- We've set up a Piazza page for questions of general interest.
For instructors only:
- The autolab directory is /afs/cs/academic/class/10601-f13/autolab - you need to be in the right pts group to access it, ask wcohen if you don't.
- New: Save backup materials - eg handout .tex files, autolab scripts, etc - in /afs/cs.cmu.edu/academic/class/10601
- To-do lists and such are on our GDoc spreadsheet.
Machine Learning (ML) asks "how can we design programs that automatically improve their performance through experience?" This includes learning to perform many types of tasks based on many types of experience, e.g. spotting high-risk medical patients, recognizing speech, classifying text documents, detecting credit card fraud, or driving autonomous robots.
Topics covered in 10-601 include concept learning, version spaces, decision trees, neural networks, computational learning theory, active learning, estimation & the bias-variance tradeoff, hypothesis testing, Bayesian learning, the Naïve Bayes classifier, Bayes Nets & Graphical Models, the EM algorithm, Hidden Markov Models, K-Nearest-Neighbors and nonparametric learning, reinforcement learning, bagging and boosting, and other topics.
10-601 focuses on the mathematical, statistical and computational foundations of the field. It emphasizes the role of assumptions in machine learning. As we introduce different ML techniques, we work out together what assumptions are implicit in them. Grading is based on written assignments, programming assignments, and a final exam.
10-601 focuses on understanding what makes machine learning work. If your interest is primarily in learning the process of applying ML effectively, and in the practical side of ML for applications, you should consider Machine Learning in Practice (11-344/05-834).
10-601 is open to all but is recommended for CS Seniors & Juniors, Quantitative Masters students, and non-SCS PhD students.
Syllabus and Text
Syllabus for Machine Learning 10-601, including lecture slides and HWs
Previous syllabi, for the historically-minded:
The text is Tom Mitchell's textbook, Machine Learning. It is recommended but not required.
- Prerequisites are 15-122, Principles of Imperative Computation AND 21-127: Concepts of Mathematics.
- Additionally, a probability course is a co-requisite: 36-217: Probability Theory and Random Processes OR 36-225: Introduction to Probability and Statistics I
- A minimum grade of 'C' is required in all these courses.
Self-assessment for students:
- Students, especially graduate students, come to CMU with a variety of different backgrounds, so formal course prereqs are hard to establish. There is a short self-assessment test for the background needed for 10-601. We recommend that all students take it before enrolling, to see whether they already have the necessary background knowledge or need to review and/or take additional courses.
- Semi-final exam: 20%
- Instead of a final exam, we will have an in-class exam on the Monday before Thanksgiving (Nov 25).
- Weekly homeworks (out Wed, due Wed): 60%
- Late assignment policy: We will grant up to 50% credit if an assignment is less than 48 hrs late. Also, you can drop your lowest assignment grade entirely.
- Project: 20% (see below)
More details will be posted later; here is an outline of the project. The goal is to build and evaluate a robust, out-of-the-box classifier learner.
Some learning algorithms require more tuning to a new problem than others, but most of what is known about how to tune classifiers for a learning task is folklore, not science. The question here is: which algorithms are most robust? To address this I suggest a Kaggle-style competition with these rules.
- Submitted learners will be scored by their average error rates (say) over 5 evaluation learning tasks, each of which has an associated train/test split.
- The evaluation tasks are not known in advance - instead there are 20 development learning tasks, each of which has an associated train/test split, to tune the learning system.
- The learning system could be, for example:
- A plain classifier learner (eg, a standard implementation of random forests might be a good baseline)
- A classifier learner with a wrapper around it that does a parameter sweep and picks a set of parameters.
- A classifier learner with a wrapper that performs some sort of feature selection.
- A set of K classifier learners, with internal cross-validation used to pick the best of the K.
- A set of K classifier learners, including one or more that project team-mates have implemented and/or invented on their own.
- A semi-automatic system, which requires some human input to make its final choice of classifier. (But we're not sure yet how to score this.)
- Anything else you can think of.
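As a concrete illustration of the cross-validation option above, here is a minimal sketch of a learning system that picks the best of K candidate learners using internal cross-validation on the training split, then refits the winner on all training data. The library (scikit-learn), the candidate models, and the synthetic task are illustrative assumptions, not part of the course materials.

```python
# Hypothetical sketch: choose among K classifier learners via internal
# cross-validation on the training data, then fit the winner on all of it.
# scikit-learn and the specific candidate models are assumptions for
# illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB


def pick_best_learner(candidates, X_train, y_train, folds=5):
    """Return the candidate with the highest mean CV accuracy, refit on all data."""
    def mean_cv_score(clf):
        # Average accuracy over `folds` internal train/validation splits.
        return cross_val_score(clf, X_train, y_train, cv=folds).mean()

    best = max(candidates, key=mean_cv_score)
    return best.fit(X_train, y_train)


# A toy "development task" standing in for one of the 20 development splits.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
learners = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(random_state=0),  # a reasonable baseline, per above
    GaussianNB(),
]
model = pick_best_learner(learners, X, y)
print(type(model).__name__)
```

Under the competition rules above, the same selection procedure would be tuned on the development tasks and then scored by its average error rate over the held-out evaluation tasks.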
Policy on Collaboration among Students
These policies are the same as those used in Dr. Rosenfeld's previous (2013) version of this course.
The purpose of student collaboration is to facilitate learning, not to circumvent it. Studying the material in groups is strongly encouraged. It is also allowed to seek help from other students in understanding the material needed to solve a particular homework problem, provided no written notes are shared, or are taken at that time, and provided learning is facilitated, not circumvented. The actual solution must be done by each student alone, and the student should be ready to reproduce their solution upon request.
The presence or absence of any form of help or collaboration, whether given or received, must be explicitly stated and disclosed in full by all involved, on the first page of their assignment. Specifically, each assignment solution must start by answering the following questions:
(1) Did you receive any help whatsoever from anyone in solving this assignment? Yes / No. If you answered 'yes', give full details: _______________ (e.g. "Jane explained to me what is asked in Question 3.4").
(2) Did you give any help whatsoever to anyone in solving this assignment? Yes / No. If you answered 'yes', give full details: _______________ (e.g. "I pointed Joe to section 2.3 to help him with Question 2").
Collaboration without full disclosure will be handled severely, in compliance with CMU's Policy on Cheating and Plagiarism.
As a related point, some of the homework assignments used in this class may have been used in prior versions of this class, or in classes at other institutions. Avoiding the use of heavily tested assignments will detract from the main purpose of these assignments, which is to reinforce the material and stimulate thinking. Because some of these assignments may have been used before, solutions to them may be (or may have been) available online, or from other people. It is explicitly forbidden to use any such sources, or to consult people who have solved these problems before. You must solve the homework assignments completely on your own. I will mostly rely on your wisdom and honor to follow this rule, but if a violation is detected it will be dealt with harshly. Collaboration with other students who are currently taking the class is allowed, but only under the conditions stated above.