Smith and Eisner 2008: Dependency parsing by belief propagation

From Cohen Courses
Revision as of 13:10, 28 September 2011

Citation

Smith, David A. and Jason Eisner (2008). Dependency parsing by belief propagation. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 145-156, Honolulu, October.

Online version

Smith and Eisner 2008

Summary

This is a crucial paper that presents a loopy Belief Propagation (BP) method for Dependency Parsing, which can also be readily applied to related structured problems such as Named Entity Recognition, Word Alignment, Shallow Parsing, and Constituent Parsing.

The paper formulates dependency parsing as a learning and decoding problem on a graphical model with global constraints. The authors show that BP needs only O(n^3) time to perform approximate inference on this graphical model, even with second-order features and latent variables incorporated.
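To make the inference step concrete, below is a minimal sum-product loopy BP sketch on a small factor graph with binary variables. This is a hypothetical toy example, not the paper's dependency-parsing model: the two variables `a` and `b` stand in for candidate link variables, the unary factors for per-link scores, and the pairwise factor for a soft global constraint. All names and potential values here are illustrative assumptions.

```python
# Toy sum-product loopy belief propagation over binary variables.
# NOT the paper's parsing model -- a generic illustrative sketch.
import itertools

def loopy_bp(variables, factors, iters=50):
    """variables: list of names; factors: list of (scope_tuple, table_fn),
    where table_fn maps a 0/1 assignment tuple (in scope order) to a
    non-negative potential. Returns approximate marginals P(var = 1)."""
    # msg[(fi, v)] = factor->variable message; msg[(v, fi)] = variable->factor.
    msg = {}
    for fi, (scope, _) in enumerate(factors):
        for v in scope:
            msg[(fi, v)] = [1.0, 1.0]
            msg[(v, fi)] = [1.0, 1.0]

    for _ in range(iters):
        # Variable -> factor: product of messages from the *other* factors.
        for fi, (scope, _) in enumerate(factors):
            for v in scope:
                out = [1.0, 1.0]
                for fj, (scope2, _) in enumerate(factors):
                    if fj != fi and v in scope2:
                        out = [out[x] * msg[(fj, v)][x] for x in (0, 1)]
                z = sum(out)
                msg[(v, fi)] = [o / z for o in out]
        # Factor -> variable: marginalize the factor over the other variables.
        for fi, (scope, table) in enumerate(factors):
            for v in scope:
                out = [0.0, 0.0]
                others = [u for u in scope if u != v]
                for assign in itertools.product((0, 1), repeat=len(others)):
                    env = dict(zip(others, assign))
                    for xv in (0, 1):
                        env[v] = xv
                        p = table(tuple(env[u] for u in scope))
                        for u in others:
                            p *= msg[(u, fi)][env[u]]
                        out[xv] += p
                z = sum(out)
                msg[(fi, v)] = [o / z for o in out]

    # Belief at each variable: product of all incoming factor messages.
    marginals = {}
    for v in variables:
        b = [1.0, 1.0]
        for fi, (scope, _) in enumerate(factors):
            if v in scope:
                b = [b[x] * msg[(fi, v)][x] for x in (0, 1)]
        marginals[v] = b[1] / sum(b)
    return marginals

# Hypothetical example: two "link" variables with unary score factors and
# one soft "at most one link" constraint factor.
variables = ["a", "b"]
factors = [
    (("a",), lambda x: 0.3 if x[0] == 0 else 0.7),        # prefers a = 1
    (("b",), lambda x: 0.6 if x[0] == 0 else 0.4),        # prefers b = 0
    (("a", "b"), lambda x: 0.1 if x == (1, 1) else 1.0),  # penalize both on
]
m = loopy_bp(variables, factors)
```

Since this toy graph is actually a tree, BP converges to the exact marginals; on the loopy graphs the paper uses (higher-order and global constraint factors over O(n^2) link variables), the same message updates yield only approximate marginals, which is the trade-off that buys the O(n^3) runtime.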

Brief description of the method

Experimental Results

Related papers