Machine Learning 10-601 in Fall 2013

From Cohen Courses

Important Announcements

  • Assignment 1 is out and is due on Sep. 13.
  • You may submit the first assignment up to three times before the deadline.
  • Message from Prof. Eric Xing about the waiting list status:

Dear Class,

As you can see, 10601 is over-subscribed by about 100 students, and many on the waiting list have been asking whether they can make it onto the formal registration list.

As William pointed out, we have a hard limit on the class size of around 260. Typically, in the first two weeks, students "shop around" for classes and make their final decisions after attending a few lectures, so the class roster is still fluid at this point. Because of the size limitation, we request that all students who are registered but plan only to audit or take the class pass/fail de-register themselves as soon as possible (you are still welcome to sit in on the class and/or follow the class video broadcast, etc.). We also suggest that students who find this class unsuitable for their needs, or who do not need to take it right now (it is offered every semester), de-register during the first two weeks (i.e., before next Friday) to vacate slots for students who are on the waiting list but really need to take this class. Finally, if you are on the roster but do not show up in class for the first two weeks and/or do not submit your first homework assignment, we will remove your name.

For those who are on the waiting list but are seriously planning to take the course, please still come to class and check your registration status often during the first two weeks, as the waiting list will start to clear once vacancies open up in the class roster. If after two weeks you are still not able to register, you can assume that you will have to take the class at another time.

best,

Eric

Important People and Places

  • Instructors: William Cohen and Eric Xing, Machine Learning Dept and LTI
  • Course secretary: Sharon Cavlovich, sharonw+@cs.cmu.edu, 412-268-5196
  • When/where: M/W 4:30-5:50, Doherty Hall 2315 (not 1:30-2:50 as was announced earlier!)
    • Classes will start on Wednesday, Sept 4 (the Wed after Labor Day)
  • Course Number: ML 10-601
  • TAs and recitation schedule:
    • Guanyu Wang (wgiveny@gmail.com, guanyuw@andrew), recitation: Mon. 6:30pm-7:30pm Porter Hall A18C
    • William Yang Wang (ww@cmu.edu, yww@andrew), recitation: Tue. 5pm-6pm Porter Hall A18A
    • Shu-Hao Yu (shuhaoy@gmail.com, shuhaoy@andrew), recitation: Wed. 6:30pm-7:30pm Wean 5403
    • Avinava Dubey (akdubey@andrew.cmu.edu), recitation: Thu. 5pm-6pm Porter Hall A18C
    • Pengtao Xie (pengtaoxie2008@gmail.com, pxie1@andrew), recitation: Fri. 5pm-6pm GHC 4215
    • Shangqing Zhang (zsqhyhzyh@gmail.com, shangqiz@andrew), recitation leader-at-large
    • Ying Shen (yingshen@andrew.cmu.edu), recitation leader-at-large
    • Recitations will start after Sept 4
  • Syllabus: Syllabus for Machine Learning 10-601
  • On-line lectures: recordings will be posted on MediaSite within 24 hours of each lecture; use your Andrew ID to log in.
  • Office hours for William and Eric:
    • William and Eric will hold office hours in DH 2315 immediately after class, from 5:50 to 6:30pm (I'm told the room is free until 7pm). Typically Eric will hold office hours on Monday and William on Wednesday.
  • We'll be using BlackBoard and Autolab for most assignments.
  • New: We've set up a Piazza page for questions of general interest.


For instructors only:

  • The autolab directory is /afs/cs/academic/class/10601-f13/autolab
  • To-do lists and such are on our GDoc spreadsheet.

Description

Machine Learning (ML) asks "how can we design programs that automatically improve their performance through experience?" This includes learning to perform many types of tasks based on many types of experience, e.g. spotting high-risk medical patients, recognizing speech, classifying text documents, detecting credit card fraud, or driving autonomous robots.

Topics covered in 10-601 include concept learning, version spaces, decision trees, neural networks, computational learning theory, active learning, estimation & the bias-variance tradeoff, hypothesis testing, Bayesian learning, the Naïve Bayes classifier, Bayes Nets & Graphical Models, the EM algorithm, Hidden Markov Models, K-Nearest-Neighbors and nonparametric learning, reinforcement learning, bagging and boosting, and other topics.

10-601 focuses on the mathematical, statistical and computational foundations of the field. It emphasizes the role of assumptions in machine learning. As we introduce different ML techniques, we work out together what assumptions are implicit in them. Grading is based on written assignments, programming assignments, and a final exam.

10-601 focuses on understanding what makes machine learning work. If your interest is primarily in learning the process of applying ML effectively, and in the practical side of ML for applications, you should consider Machine Learning in Practice (11-344/05-834).

10-601 is open to all but is recommended for CS Seniors & Juniors, Quantitative Masters students, and non-SCS PhD students.

Syllabus and Text

Previous syllabi, for the historically-minded:

The text is Tom Mitchell's textbook, Machine Learning. It is recommended but not required.

Prerequisites

Formal prerequisites:

  • Prerequisites are 15-122: Principles of Imperative Computation AND 21-127: Concepts of Mathematics.
  • Additionally, a probability course is a co-requisite: 36-217: Probability Theory and Random Processes OR 36-225: Introduction to Probability and Statistics I
  • A minimum grade of 'C' is required in all these courses.

Self-assessment for students:

  • Students, especially graduate students, come to CMU with a variety of backgrounds, so formal course prereqs are hard to establish. There is a short self-assessment test for 10-601. We recommend that all students take it before enrolling, to see whether they already have the necessary background or whether they need to review and/or take additional courses.

Grading Policy

  • Semi-final exam: 20%
    • Instead of a final exam, we have an exam in class on the Monday before Thanksgiving (Nov 25)
  • Weekly homeworks (out Wed, due Wed): 60%
    • Late assignment policy: We will grant up to 50% credit if an assignment is less than 48 hrs late. Also, you can drop your lowest assignment grade entirely.
  • Project: 20% (see below)

Projects

More details will be posted later; here is an outline of the project. The goal is to build and evaluate a robust out-of-the-box classifier learner.

Some learning algorithms require more tuning to a new problem than others, but most of what is known about how to tune classifiers for a learning task is folklore, not science. The question here is: which algorithms are most robust? To address this, I suggest a Kaggle-style competition with the following rules.

  • Submitted learners will be scored by their average error rates (say) over 5 evaluation learning tasks, each of which has an associated train/test split.
  • The evaluation tasks are not known in advance; instead, there are 20 development learning tasks, each of which has an associated train/test split, for tuning the learning system.
  • The learning system could be, for example:
    1. A plain classifier learner (eg, a standard implementation of random forests might be a good baseline)
    2. A classifier learner with a wrapper around it that does a parameter sweep and picks a set of parameters.
    3. A classifier learner with wrapper that is some sort of feature-selection mechanism.
    4. A set of K classifier learners, which uses internal cross-validation to pick the best one (see the sketch after this list).
    5. A set of K classifier learners, including one or more that project team-mates have implemented and/or invented on their own.
    6. A semi-automatic system, which requires some human input to make its final choice of classifier. (But we're not sure yet how to score this.)
    7. Anything else you can think of.
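
To make option 4 concrete, here is a minimal sketch of a learning system that runs internal cross-validation on the training split to choose among a few candidate learners and then reports error on the held-out test split. It is only an illustration, not part of the project infrastructure: the candidate set, the 5-fold setting, and the use of Python with scikit-learn are assumptions, and any comparable toolkit (or learners you implement yourself) would do just as well.

 # Minimal sketch (assumes scikit-learn): pick the best of K candidate
 # learners by internal cross-validation on the training split, then
 # report error on the test split, the quantity used for project scoring.
 from sklearn.model_selection import cross_val_score
 from sklearn.ensemble import RandomForestClassifier
 from sklearn.linear_model import LogisticRegression
 from sklearn.naive_bayes import GaussianNB

 def pick_and_evaluate(X_train, y_train, X_test, y_test):
     # Hypothetical candidate set; any K learners could be plugged in here.
     candidates = {
         "random_forest": RandomForestClassifier(n_estimators=100),
         "logistic_regression": LogisticRegression(max_iter=1000),
         "naive_bayes": GaussianNB(),
     }
     # Score each candidate by 5-fold cross-validation accuracy,
     # using the training split only.
     cv_scores = {name: cross_val_score(clf, X_train, y_train, cv=5).mean()
                  for name, clf in candidates.items()}
     # Keep the candidate with the best internal CV score and refit it
     # on all of the training data.
     best_name = max(cv_scores, key=cv_scores.get)
     best_clf = candidates[best_name].fit(X_train, y_train)
     # Error rate on the held-out test split.
     test_error = 1.0 - best_clf.score(X_test, y_test)
     return best_name, test_error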

Policy on Collaboration among Students

These policies are the same as those used in Dr. Rosenfeld's earlier 2013 version of this class.

The purpose of student collaboration is to facilitate learning, not to circumvent it. Studying the material in groups is strongly encouraged. It is also allowed to seek help from other students in understanding the material needed to solve a particular homework problem, provided no written notes are shared, or are taken at that time, and provided learning is facilitated, not circumvented. The actual solution must be done by each student alone, and the student should be ready to reproduce their solution upon request.

The presence or absence of any form of help or collaboration, whether given or received, must be explicitly stated and disclosed in full by all involved, on the first page of their assignment. Specifically, each assignment solution must start by answering the following questions:

(1) Did you receive any help whatsoever from anyone in solving this assignment? Yes / No.
If you answered 'yes', give full details: _______________ (e.g. "Jane explained to me what is asked in Question 3.4")
(2) Did you give any help whatsoever to anyone in solving this assignment? Yes / No.
If you answered 'yes', give full details: _______________ (e.g. "I pointed Joe to section 2.3 to help him with Question 2").

Collaboration without full disclosure will be handled severely, in compliance with CMU's Policy on Cheating and Plagiarism.

As a related point, some of the homework assignments used in this class may have been used in prior versions of this class, or in classes at other institutions. Avoiding the reuse of these well-tested assignments would detract from their main purpose, which is to reinforce the material and stimulate thinking. Because some of these assignments may have been used before, solutions to them may be (or may have been) available online, or from other people. It is explicitly forbidden to use any such sources, or to consult people who have solved these problems before. You must solve the homework assignments completely on your own. I will mostly rely on your wisdom and honor to follow this rule, but if a violation is detected it will be dealt with harshly. Collaboration with other students who are currently taking the class is allowed, but only under the conditions stated above.