Hoffmann et al., ACL 2010
Revision as of 13:41, 30 September 2011

== Citation ==

Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL '10).

== Online version ==

* ACM Digital Library
* ACL Anthology
* PPT slides

== Summary ==

This paper introduces LUCHS, a self-supervised, relation-specific IE system capable of learning more than 5000 relations with an average F1 score of 61%. The system applies dynamic lexicon-feature learning as a semi-supervised solution to cope with sparse training data.
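The intuition behind dynamic lexicon features can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' exact algorithm (which learns lexicons semi-supervisedly from list similarity): given a few seed values observed for a sparse relation, crawled HTML lists that overlap with the seeds are merged into a lexicon, and lexicon membership can then serve as a feature for the extractor.

```python
# Simplified sketch (an assumption, not the paper's algorithm): expand a
# small seed set for a sparse relation using crawled HTML lists that share
# enough members with the seeds.

def build_lexicon(seeds, html_lists, min_overlap=2):
    """Merge crawled lists sharing at least `min_overlap` items with the lexicon."""
    lexicon = set(seeds)
    for lst in html_lists:
        if len(lexicon & set(lst)) >= min_overlap:
            lexicon |= set(lst)
    return lexicon

seeds = {"Brooklyn", "Queens"}
html_lists = [
    ["Brooklyn", "Queens", "Manhattan", "The Bronx", "Staten Island"],  # NYC boroughs
    ["Python", "Java", "C++"],                                          # unrelated list
]
lexicon = build_lexicon(seeds, html_lists)
# "Manhattan" enters the lexicon because its list shares two seed values;
# the unrelated list is excluded.
```

The resulting lexicon gives the extractor evidence about phrases it never saw in training, which is what makes sparse relations learnable.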

== System Architecture ==

The following figure summarizes the architecture of LUCHS.

[[File:LUCHSArchitecture.png]]

* A '''Schema Classifier''' is trained on [[Wikipedia]] pages containing infoboxes; it decides which schema should be applied to an article that has no infobox.
* Training data is generated heuristically by the '''Matcher''' (e.g. the [[Wikipedia]] article "Jerry Seinfeld" contains the sentence "Seinfeld was born in Brooklyn, New York.", and the infobox on the same page contains the relation pair "birth_place = Brooklyn"); the '''Matcher''' heuristically generates training data for the extractors of the different relations (the authors do not describe this part in detail).
* A CRF model is trained by the '''CRF Learner''' on the training data generated by the '''Matcher''' and is used as an '''Extractor''' to extract structured information from free text.
* As the major contribution of this paper, a '''Lexicon Learner''' is trained on HTML lists crawled from the Internet and contributes lexicon features to the '''CRF Learner'''. This step enables the system to work with "sparse relations", i.e. to extract structured information in a semi-supervised way.
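The Matcher step above is a form of distant supervision. Since the paper gives no details, the following is a minimal sketch under assumed heuristics: a sentence is taken as a positive training example for a relation if it mentions both the article's subject and the infobox attribute value.

```python
# Hedged sketch of a Matcher-style heuristic (the paper does not specify
# the actual rules): pair infobox attribute values with sentences that
# mention both the value and the page subject.

def match_training_sentences(title, infobox, sentences):
    """Return (relation, value, sentence) triples for heuristic matches."""
    subject = title.split()[-1]  # crude subject mention, e.g. "Seinfeld"
    examples = []
    for relation, value in infobox.items():
        for sentence in sentences:
            if value in sentence and subject in sentence:
                examples.append((relation, value, sentence))
    return examples

infobox = {"birth_place": "Brooklyn"}
sentences = [
    "Seinfeld was born in Brooklyn, New York.",
    "He created a sitcom with Larry David.",
]
examples = match_training_sentences("Jerry Seinfeld", infobox, sentences)
# -> [("birth_place", "Brooklyn", "Seinfeld was born in Brooklyn, New York.")]
```

Matches like these become token-labeled training sentences for the relation-specific CRF extractors; mismatches and noisy pairings are an inherent cost of this heuristic labeling.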

== Brief description of the method ==

== Experimental Result ==

== Related papers ==