Rbalasub writeup of talk by Dan Roth at U Pitt on Oct 2nd, 2009

From Cohen Courses

Constrained Conditional Models - Inference for Natural Language Understanding

In a system with multiple components, one can train each component independently and then impose global constraints during inference. An integer linear programming (ILP) formulation is used to introduce the constraints. This approach has been applied to semantic role labeling, relation learning, and other tasks. LBJ (Learning Based Java) is a package available on his website for writing these rules in a Java-like syntax.
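A minimal sketch of the idea, under assumptions not from the talk: two independently trained components are represented by fixed score tables, and a global constraint couples their outputs at inference time. A real constrained conditional model would solve this with an ILP solver; here brute-force search over joint assignments plays the same role.

```python
from itertools import product

# Toy constrained inference. The score tables stand in for two
# independently trained classifiers; all names and numbers are
# illustrative, not from the talk.
ENTITY_SCORES = {"person": 0.7, "location": 0.3}       # component 1
RELATION_SCORES = {"born_in": 0.6, "works_for": 0.4}   # component 2

def satisfies_constraints(entity, relation):
    """Global constraint: "born_in" requires the entity to be a location."""
    return not (relation == "born_in" and entity != "location")

def constrained_argmax():
    """Pick the highest-scoring joint assignment that obeys the constraint.
    (An ILP formulation would optimize the same objective exactly.)"""
    best, best_score = None, float("-inf")
    for entity, relation in product(ENTITY_SCORES, RELATION_SCORES):
        if not satisfies_constraints(entity, relation):
            continue  # prune assignments that violate the global constraint
        score = ENTITY_SCORES[entity] + RELATION_SCORES[relation]
        if score > best_score:
            best, best_score = (entity, relation), score
    return best
```

Note that the unconstrained argmax would be ("person", "born_in"); the constraint changes the joint prediction even though neither component was retrained.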

The distinction between constraints and features is not clear-cut. In general, features do not span many output variables, since that would make the model very complicated; commonly used models such as CRFs and HMMs do not allow such features.
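To make the distinction concrete, here is an illustrative sketch (the label set and the particular constraint are my own, not from the talk): a CRF/HMM-style feature inspects only a small window of adjacent output variables, while a constraint may inspect the entire output sequence at once.

```python
def pairwise_feature(y_prev, y_curr):
    """Local feature, CRF/HMM style: fires on one adjacent label pair."""
    return 1.0 if (y_prev, y_curr) == ("B-PER", "I-PER") else 0.0

def global_constraint(y_sequence):
    """Global constraint: at most one person mention in the whole sequence.
    Encoding something that spans all output variables as a *feature*
    would make standard chain models intractable; as a constraint it is
    simply checked (or added to the ILP) at inference time."""
    return sum(1 for y in y_sequence if y == "B-PER") <= 1
```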

Learning a large joint model yields better results only when the individual components are not already well-solved problems, i.e., when each component is itself hard to learn. For most natural language tasks, it is advantageous to learn the models independently and impose the constraints during inference.