Class Meeting for 10-710 11-22-2011
Revision as of 15:57, 21 November 2011
This is one of the class meetings on the schedule for the course Syllabus for Structured Prediction 10-710 in Fall 2011.
Regularization and Unlabeled Data
- Slides - To be posted
Required Readings
- Semi-supervised conditional random fields for improved sequence segmentation and labeling, Jiao et al., ACL 2006.
Optional Readings
- Efficient computation of entropy gradient for semi-supervised conditional random fields, Mann and McCallum, NAACL 2007. http://dl.acm.org/citation.cfm?id=1614136
- Learning from Labeled Features using Generalized Expectation Criteria, Druck, Mann, and McCallum, SIGIR 2008. http://www.cs.umass.edu/~mccallum/papers/druck08sigir.pdf
- Adaptation of Maximum Entropy Capitalizer: Little Data Can Help a Lot, Chelba and Acero, EMNLP 2004, pp. 285-292. http://www.aclweb.org/anthology-new/W/W04/W04-3237.pdf