# Hoffmann et al., ACL 2010

## Citation

Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL '10).

## Summary

This paper introduces LUCHS, a self-supervised, relation-specific IE system capable of learning more than 5000 relational extractors with an average F1 score of 61%. To cope with sparse training data, the system learns dynamic lexicon features as a semi-supervised solution.

## System Architecture

The following figure summarizes the architecture of LUCHS.

• A Schema Classifier is trained on Wikipedia pages containing infoboxes; it decides which schema should be applied to an article without an infobox;
• Training data is generated heuristically by the Matcher. For example, the Wikipedia article "Jerry Seinfeld" contains the sentence "Seinfeld was born in Brooklyn, New York." and, in the infobox on the same page, the relation pair "birth_place = Brooklyn"; from such pairs the Matcher heuristically generates training data for the extractors of the different relations (the paper does not describe the details of this step);
• A CRF Learner trains a sequence-labeling model on the training data generated by the Matcher; the model is used as an Extractor to pull structured information from free text;
• As the major contribution of the paper, a Lexicon Learner is trained on HTML lists crawled from the Web and contributes lexicon features to the CRF Learner. This step enables the system to handle "sparse relations", i.e. to extract structured information in a semi-supervised way.
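The Matcher step above can be sketched as follows. This is a hypothetical illustration (the paper does not spell out the heuristic), matching an infobox value verbatim against the article's sentences; the function name and data layout are assumptions:

```python
# Hypothetical sketch of the Matcher's heuristic labeling step: an infobox
# value that appears verbatim in a sentence of the same article turns that
# sentence into a training example for the corresponding relation extractor.

def match_sentences(sentences, attribute, value):
    """Return (sentence, attribute, value) triples for every sentence
    that contains the infobox value verbatim."""
    return [(s, attribute, value) for s in sentences if value in s]

# The Jerry Seinfeld example from the paper:
sentences = [
    "Seinfeld was born in Brooklyn, New York.",
    "He is an American comedian.",
]
examples = match_sentences(sentences, "birth_place", "Brooklyn")
# Only the first sentence mentions "Brooklyn", so it becomes a
# training example for the birth_place extractor.
```

A real Matcher would need fuzzier matching (dates, numbers, redirects), which is exactly why the resulting labels are noisy heuristic supervision rather than gold data.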

## Brief description of the method

### Schema Classifier

The system uses a linear, multi-class classifier with the following kinds of features:

• Words in the article title;
• Words in the first sentence;
• Words in the first sentence that are direct objects of the verb 'to be' (e.g. in "Jerry Seinfeld is an American comedian", the word "comedian"), which often name the entity's type;
• Wikipedia categories;
• Ancestor categories.
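An illustrative sketch (not the paper's code) of turning an article into the sparse Boolean features such a classifier consumes; the function name and field layout are assumptions:

```python
# Build a set of Boolean (present/absent) features from an article's title,
# first sentence, and categories, prefixing each feature with its source so
# the same word in different positions yields distinct features.

def schema_features(title, first_sentence, categories):
    feats = set()
    feats.update("title=" + w for w in title.lower().split())
    feats.update("first=" + w for w in first_sentence.lower().split())
    feats.update("cat=" + c for c in categories)
    return feats

f = schema_features(
    "Jerry Seinfeld",
    "Jerry Seinfeld is an American comedian.",
    ["American stand-up comedians"],
)
```

A linear classifier over such sets reduces to summing one weight per active feature, which is what makes the Voted Perceptron below a natural fit.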

The Voted Perceptron algorithm is used to train the linear classifier.

### Extractor

The extractor is a linear-chain Conditional Random Field (CRF), introduced by Lafferty et al., 2001:

${\displaystyle p(y|x)={\frac {1}{Z(x)}}\exp \sum _{t=1}^{T}\sum _{k=1}^{K}\lambda _{k}f_{k}(y_{t-1},y_{t},x,t)}$

where ${\displaystyle T}$ is the length of the sequence, ${\displaystyle K}$ is the number of feature functions, the feature functions ${\displaystyle f_{k}}$ encode statistics of the pair ${\displaystyle (x,y)}$, ${\displaystyle \lambda _{k}}$ are the feature weights, and ${\displaystyle Z(x)}$ is the normalizing constant.
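The equation above can be checked with a brute-force sketch that enumerates every label sequence to compute ${\displaystyle Z(x)}$ (feasible only for toy inputs; the feature function and weight below are made-up assumptions, not the paper's):

```python
import itertools
import math

# Score of a label sequence y for input x: sum over positions t and
# feature functions k of lambda_k * f_k(y_{t-1}, y_t, x, t).
def score(y, x, feats, lambdas):
    return sum(l * f(y[t - 1] if t > 0 else None, y[t], x, t)
               for t in range(len(x))
               for f, l in zip(feats, lambdas))

# p(y|x) with Z(x) computed by enumerating all |labels|^T sequences.
def crf_prob(y, x, labels, feats, lambdas):
    Z = sum(math.exp(score(yp, x, feats, lambdas))
            for yp in itertools.product(labels, repeat=len(x)))
    return math.exp(score(y, x, feats, lambdas)) / Z

# Toy Boolean feature: the current word is capitalized AND labeled PLACE.
feats = [lambda y_prev, y_cur, x, t: float(x[t][0].isupper() and y_cur == "PLACE")]
lambdas = [2.0]

x = ["born", "in", "Brooklyn"]
p = crf_prob(("O", "O", "PLACE"), x, ("O", "PLACE"), feats, lambdas)
```

Real CRF training replaces the enumeration with forward-backward dynamic programming, but the probability it computes is exactly this one.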

In the system, the parameters ${\displaystyle \lambda _{k}}$ are trained with the Voted Perceptron algorithm. Nine kinds of Boolean features are involved in training:

• Words;
• State Transitions;
• Word Contextualization;
• Capitalization;
• Digits;
• Dependencies;
• First Sentence;
• Gaussians;
• Lexicons.
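A few of the simpler feature kinds above can be sketched as token-level Boolean predicates (illustrative only; the paper's exact templates are not reproduced here):

```python
# Token-level Boolean features: word identity, capitalization, digits, and
# a crude stand-in for the First Sentence feature (position 0 of sentence 0).

def token_features(tokens, t):
    w = tokens[t]
    return {
        "word=" + w.lower(): True,                 # Words
        "is_capitalized": w[:1].isupper(),         # Capitalization
        "has_digit": any(c.isdigit() for c in w),  # Digits
        "is_first_token": t == 0,                  # position proxy
    }

feats = token_features(["Brooklyn", ",", "New", "York"], 0)
```

The Lexicons feature works the same way, except the predicate tests membership of the token in a learned lexicon rather than a surface property.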

## Experimental Results

### Dataset

The 10/2008 English Wikipedia dump:

• 1,583 schemata containing at least 10 instances (wiki pages) each;
• 981,387 articles;
• 5,025 attributes (relations).

### Overall Extraction Performance

They report a precision of .55 at a recall of .68, giving an F1-score of .61.
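These numbers are consistent, since F1 is the harmonic mean of precision and recall:

```python
# F1 = 2PR / (P + R) for the reported precision and recall.
p, r = 0.55, 0.68
f1 = 2 * p * r / (p + r)
# f1 is about 0.608, which rounds to the reported .61
```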

## Related papers

This paper is follow-up research based on the KYLIN IE system and other work conducted at the University of Washington: Weld et al. (SIGMOD 2009), Wu and Weld (ACL 2010), and Wu and Weld (WWW 2008).

They use DBpedia as the training dataset. The iPopulator paper describes similar research.