Sgardine writes up Cohn 2005 Tree CRF

From Cohen Courses

This is a review of Cohn_2005_semantic_role_labelling_with_tree_conditional_random_fields by user:sgardine.

Summary

Given a parse tree for a sentence, the authors construct a CRF over a heuristically pruned subset of the tree's nodes. Because the resulting graph is a tree, exact inference is tractable and the model can be trained exactly. At prediction time, the output labeling is coerced into a consistent role assignment. The model achieves an F1 of roughly 0.7 on the test data.
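To make the "exact inference on a tree" point concrete, here is a minimal sketch of MAP (max-product) inference over a tree-structured CRF. The data structures (`children`, `unary`, `pairwise`) and the single shared edge potential are illustrative assumptions, not details from the paper; the paper's actual potentials are feature-based.

```python
def tree_map(children, root, labels, unary, pairwise):
    """Arg-max labeling of a tree-structured CRF via max-product.

    children: dict node -> list of child nodes
    unary:    dict node -> dict label -> log-potential
    pairwise: dict (parent_label, child_label) -> log-potential
              (assumed shared across all edges, for brevity)
    """
    best = {}     # best[node][label]: score of subtree rooted at node
    backptr = {}  # backptr[(child, parent_label)]: child's best label

    def up(node):
        # Bottom-up pass: score each label of `node`, folding in
        # the best choice for every child subtree.
        for c in children.get(node, []):
            up(c)
        best[node] = {}
        for lab in labels:
            score = unary[node][lab]
            for c in children.get(node, []):
                choices = {cl: pairwise[(lab, cl)] + best[c][cl]
                           for cl in labels}
                cl_star = max(choices, key=choices.get)
                backptr[(c, lab)] = cl_star
                score += choices[cl_star]
            best[node][lab] = score

    up(root)
    # Top-down pass: read off the arg-max labeling from the back-pointers.
    assign = {root: max(best[root], key=best[root].get)}
    stack = [root]
    while stack:
        n = stack.pop()
        for c in children.get(n, []):
            assign[c] = backptr[(c, assign[n])]
            stack.append(c)
    return assign
```

Because each node's message depends only on its children, the pass visits every edge once, so decoding is linear in the number of nodes (times the square of the label set).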

Commentary

No baseline is reported for performance on these test sets.

I am curious about the tree-pruning strategy, but I assume that curiosity can be satisfied by reading the cited Xue and Palmer paper.

The discussion of the two alternative labeling strategies could have used some (brief) hard data.

At prediction time, they coerce the prediction into consistency. How often do inconsistencies actually arise? Could a model be built that accounts for consistency directly, e.g. through constraints during inference?
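One simple form the post-hoc coercion could take is enforcing that each core role is assigned to at most one node, keeping the highest-scoring claimant. This is a hypothetical sketch of that fix-up (the role names, score dictionary, and "none" label are my illustrative assumptions, not the paper's procedure):

```python
def coerce_unique_roles(predictions, scores, core_roles=("ARG0", "ARG1")):
    """Resolve duplicate core-role assignments post hoc.

    predictions: dict node -> predicted role label
    scores:      dict node -> confidence of that prediction
    If several nodes claim the same core role, keep only the
    highest-scoring one and relabel the rest as "none".
    """
    out = dict(predictions)
    for role in core_roles:
        claimants = [n for n, r in predictions.items() if r == role]
        if len(claimants) > 1:
            keep = max(claimants, key=lambda n: scores[n])
            for n in claimants:
                if n != keep:
                    out[n] = "none"
    return out
```

A model that "cared about consistency" would instead fold such a uniqueness constraint into decoding itself, e.g. by searching only over labelings that satisfy it, rather than repairing the unconstrained arg-max afterward.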