Class Meeting for 10-707 9/27/2010
From Cohen Courses
Meta-Learning: Stacking and Sequential Models
The notes also have a short review of last week's session on CRFs.
- Additional notes on Sha & Pereira, including a derivation of the gradient of the log-likelihood for CRFs.
- Stacked Sequential Learning, William W. Cohen and Vitor Carvalho, in International Joint Conference on Artificial Intelligence (IJCAI), 2005.
- An Effective Two-Stage Model for Exploiting Non-Local Dependencies in Named Entity Recognition, Krishnan and Manning, ACL 2006. Another take on stacked sequential learning.
- Stacked Graphical Models for Efficient Inference in Markov Random Fields, Zhenzhen Kou and William W. Cohen, SDM 2007. An extended version of the stacked sequential learning method that applies to arbitrary graphs.
- Transformation-Based Error-Driven Learning and Natural Language Processing, Brill, Computational Linguistics 1995. The learning algorithm in the Brill tagger, which has also been used for NER (e.g., in AbGene).
- Search-based Structured Prediction, Daumé III, Langford, and Marcu, Machine Learning Journal (2009). Another clever meta-learning algorithm that works well for sequences.
- Conditional Graphical Models, Pérez-Cruz and Ghahramani, 2007, in Predicting Structured Data, MIT Press, Cambridge, MA, USA, pp. 265-282. A very simple and effective meta-learning method.
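The stacking recipe from the Cohen & Carvalho reading above is short enough to sketch. The following is a minimal illustration, not the paper's implementation: the base learner here is a toy lookup table, and all function names are invented for the example. What it does follow is the general scheme: get held-out predictions for the training sequences by cross-validation, extend each token's features with the predicted labels in a window around it, and train a second-stage learner on the extended examples.

```python
from collections import Counter, defaultdict

def train_base(examples):
    # Toy "memorizing" classifier: examples are (feature, label) pairs;
    # predict the majority label seen for a feature, else the global majority.
    by_feat = defaultdict(Counter)
    for x, y in examples:
        by_feat[x][y] += 1
    model = {x: c.most_common(1)[0][0] for x, c in by_feat.items()}
    default = Counter(y for _, y in examples).most_common(1)[0][0]
    return lambda x: model.get(x, default)

def crossval_predict(seqs, k=2):
    # Step 1: held-out predictions for every token, via k-fold CV over sequences.
    preds = [None] * len(seqs)
    for fold in range(k):
        train = [t for i, s in enumerate(seqs) if i % k != fold for t in s]
        f = train_base(train)
        for i, s in enumerate(seqs):
            if i % k == fold:
                preds[i] = [f(x) for x, _ in s]
    return preds

def extend(seq, pred, w=1):
    # Step 2: augment each token's feature with predicted labels in a +/-w window.
    out = []
    for j, (x, y) in enumerate(seq):
        ctx = tuple(pred[j + d] if 0 <= j + d < len(seq) else "_"
                    for d in range(-w, w + 1))
        out.append(((x,) + ctx, y))
    return out

def train_stacked(seqs, w=1):
    preds = crossval_predict(seqs)                      # cross-validated predictions
    f = train_base([t for s in seqs for t in s])        # base model on all data
    g = train_base([t for s, p in zip(seqs, preds)      # step 3: stacked model
                    for t in extend(s, p, w)])
    def predict(xs):
        base = [f(x) for x in xs]                       # run base, then stacked
        return [g(fx) for fx, _ in extend([(x, None) for x in xs], base, w)]
    return predict

# Toy data: the label of the ambiguous token "a" depends on its left neighbour.
B = [("b", "B"), ("a", "B")]
C = [("c", "C"), ("a", "C")]
predict = train_stacked([B, B, B, C, C])
```

On this toy data the base learner alone tags every "a" with the majority label "B", while the stacked model recovers the neighbour-dependent labels (`predict(["c", "a"])` yields `["C", "C"]`), which is the non-local effect the papers above exploit.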