Structured Prediction 10-710 in Fall 2011

From Cohen Courses
Revision as of 13:11, 30 November 2011 by Nasmith

Instructor and Venue

  • Instructors: William Cohen and Noah Smith, Machine Learning Dept and LTI
  • Course secretary: Sharon Cavlovich, sharonw+@cs.cmu.edu, 412-268-5196
  • When/where: Tuesdays and Thursdays, 3:00-4:20, in Gates-Hillman 4211
  • Course Number: ML 10-710 and LTI 11-763
  • Prerequisites: a machine learning course (e.g., 10-701 or 10-601) or consent of the instructor.
  • TA: Brendan O'Connor
  • Syllabus: Syllabus for Structured Prediction 10-710 in Fall 2011
  • Office hours:
    • Noah, GHC 5723, Thursdays 4:30-5:30 (starting 9/8)
    • Brendan, GHC 8005, Tuesdays 4:30-5:30
    • William, GHC 8217, Fridays 11:00-12:00 (starting 9/16)

Description

This course seeks to cover statistical modeling techniques for discrete, structured data such as text. It brings together content previously covered in Language and Statistics 2 (11-762) and Information Extraction (10-707 and 11-748), and aims to define a canonical set of models and techniques applicable to problems in natural language processing, information extraction, and other application areas. Upon completion, students will have a broad understanding of machine learning techniques for structured outputs, will be able to develop appropriate algorithms for use in new research, and will be able to critically read related literature. The course is organized around methods, with example tasks introduced throughout.

The prerequisite is Machine Learning (10-601 or 10-701), or permission of the instructors.

Syllabus

Older syllabi:

Readings

Unless there is an announcement to the contrary, required readings should be completed before class.

Grading

Grades are based on

  • The class project
    • Choose a team and a general project topic (this can change in the coming weeks). Create a team wiki page listing its members and the project topic; each team member should then link to it from their own user homepage.
    • Final reports should be in the ICML 2011 format. Aim for 6-10 pages including citations. Please be concise; we do not encourage you to write a report that is longer than necessary.
  • Wiki writeup assignments
  • Class participation

Attendees

People taking this class in Fall 2011 include:

Here are sample pages for William, Noah, and Brendan.

Projects

Final presentation dates

Tues 12/6

  • 3:05 Word Alignments using an HMM-based model - Wang Ling and Rui Correia
  • 3:17 Training SMT Systems with the Latent Structured SVM - Avneesh Saluja and Jeff Flanigan
  • 3:29 Semi-supervised Generation of Wikipedia Infoboxes - Wangshu Pang, Yun Wang and Matt Gardner
  • 3:41 Relevant Information Extraction from Court-room Hearings To Predict Judgement - Manaj Srivastava, Mridul Gupta
  • 3:53 Stylistic Structure Extraction from Early United States Slave-related Legal Opinions - William Y. Wang and Elijah Mayfield
  • 4:05 Restaurant Recommendations Based On Review Content (updated!) - Junyang Ng, Yan Chuan Sim, Kelvin Law

Thurs 12/8

  • 3:05 Automated Template Extraction - Francis Keith, Andrew Rodriguez
  • 3:17 Learning Indian Classical Music Using Sequential Models - Dhananjay Kulkarni, Tarun Kumar
  • 3:29 Finding out who you are from where, when, what and with whom you tweet - Derry Wijaya, Tarun Sharma
  • 3:41 Wikipedia Infobox Generator Using Cross Lingual Unstructured Text - Daegun Won and Tony Navas
  • 3:53 Identifying Abbreviations in Biomedical Text - Dana Movshovitz-Attias


Project list

(should get comments from Brendan:)

(should get comments from Noah:)

(should get comments from William:)

(older ideas:)

In general, a good way to find existing datasets is to read papers in the literature and see what data they use and reference. A few data ideas: Project Brainstorming for 10-710 in Fall 2011/Some data ideas