Banko 2007 Open Information Extraction from the Web


Citation

Banko, M., Cafarella, M., Soderland, S., Broadhead, M., and Etzioni, O. 2007. Open Information Extraction from the Web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007).

Online version

An online version of this paper is available at [1].

Summary

This paper introduces Open Information Extraction, a new extraction paradigm in which the system discovers relational information in web text across arbitrary domains, without any relation-specific human input. It also presents TextRunner, a large-scale open information extraction system, together with statistics on and an evaluation of its output.

Key Contributions

The biggest contribution claimed by the authors is the new paradigm of Open Information Extraction, which requires no hand-tagged training data and no pre-specified relations: the system makes a single pass through the data and generates a large set of relational tuples. Another contribution is the TextRunner system itself, along with an analysis of its current results.

The TextRunner System

  • System Architecture

The TextRunner system is designed as a fully automated open information extraction system. It takes a corpus as input and outputs a set of extractions that are efficiently indexed to support exploration via user queries. As described in the paper, the system consists of three components: (1) a self-supervised learner, which produces a classifier that judges the trustworthiness of candidate extractions; (2) a single-pass extractor, which makes one pass over the entire corpus, generating tuples for all possible relations, sending each candidate to the classifier, and retaining the ones labeled trustworthy; and (3) a redundancy-based assessor, which assigns a probability to each retained tuple based on a probabilistic model of redundancy in text.
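
To make the architecture concrete, here is a minimal runnable Python sketch of the three-stage flow. It is purely illustrative: the word-triple candidate generator, the length-based "classifier", and all names are toy stand-ins for the real components, which are described below.

 # Hypothetical sketch of the three-stage TextRunner flow; every
 # component here is a toy stand-in, not the paper's actual method.
 from collections import Counter
 
 def candidate_tuples(sentence):
     """Toy candidate generator: every consecutive word triple.
     (The real extractor uses POS tagging and noun-phrase chunking.)"""
     w = sentence.split()
     return [(w[i], w[i + 1], w[i + 2]) for i in range(len(w) - 2)]
 
 def is_trustworthy(t):
     """Toy classifier; the real one is learned (see component 1 below)."""
     return all(len(tok) > 2 for tok in t)
 
 def run_pipeline(corpus):
     # Single pass over the corpus, keeping classifier-approved tuples.
     kept = [t for s in corpus for t in candidate_tuples(s) if is_trustworthy(t)]
     # The assessor then turns redundancy counts into probabilities
     # (see component 3 below); here we just return the raw counts.
     return Counter(kept)
 
 corpus = ["Edison invented the phonograph",
           "some say Edison invented the phonograph"]
 print(run_pipeline(corpus).most_common(1))
 # -> [(('Edison', 'invented', 'the'), 2)]  (noisy, as raw triples are)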

1 Self-Supervised Learner
Before extraction begins, the learner parses a small sample of sentences with a full linguistic parser and applies relation-independent heuristic constraints over the parses to automatically label candidate tuples as trustworthy or untrustworthy. Each candidate is then mapped to a vector of shallow, parser-free features (for example, part-of-speech sequences and the number of tokens between the candidate's arguments), and a Naive Bayes classifier is trained on this self-labeled data; no hand-tagged examples are required.
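
As an illustration of this step, the sketch below trains a Naive Bayes classifier with scikit-learn. It is a hypothetical reconstruction: the hard-coded labeled examples stand in for the output of the parser-based heuristics, and the features are a much-simplified subset of the shallow features the paper uses.

 # Hypothetical sketch of the self-supervised learner. The labeled
 # examples stand in for the parser-derived heuristic labels.
 from sklearn.feature_extraction import DictVectorizer
 from sklearn.naive_bayes import MultinomialNB
 
 def shallow_features(e1, rel, e2):
     toks = rel.split()
     return {
         "rel_len": len(toks),                       # tokens between arguments
         "rel_verb_like": any(t.endswith(("s", "ed", "ing")) for t in toks),
         "e1_capitalized": e1[:1].isupper(),         # crude proper-noun cue
         "e2_capitalized": e2[:1].isupper(),
     }
 
 # Automatically labeled candidate tuples (1 = trustworthy, 0 = not).
 labeled = [
     (("Edison", "invented", "the phonograph"), 1),
     (("Paris", "is the capital of", "France"), 1),
     (("the", "of", "and"), 0),
     (("machine", "quickly the", "we"), 0),
 ]
 vec = DictVectorizer()
 X = vec.fit_transform([shallow_features(*cand) for cand, _ in labeled])
 y = [label for _, label in labeled]
 clf = MultinomialNB().fit(X, y)
 
 # Classify an unseen candidate extraction.
 cand = ("Tesla", "developed", "alternating current")
 print(clf.predict(vec.transform([shallow_features(*cand)]))[0])  # -> 1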

2 Single-Pass Extractor
The extractor makes one pass over the entire corpus. Each sentence is part-of-speech tagged and chunked into noun phrases; the noun phrases serve as candidate arguments, and the text connecting a pair of noun phrases, heuristically normalized, serves as the candidate relation. Every candidate tuple is handed to the classifier, and only those labeled trustworthy are retained. Because it relies on these lightweight tools rather than a full parser, the extractor scales to web-sized corpora.

3 Redundancy-Based Assessor
After the single pass, the assessor merges tuples that are identical after normalization, counts the distinct sentences supporting each tuple, and converts that count into a probability of correctness using a probabilistic model of redundancy in text (sketched below).
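
The paper instantiates this model with earlier work on redundancy-based assessment (the urns model of Downey et al.). The sketch below substitutes a simple noisy-or combination; it is only a qualitative stand-in showing how the probability of correctness grows with the number of distinct supporting sentences, and p0 is an assumed per-sentence correctness rate.

 # Hypothetical assessor sketch; a noisy-or replaces the paper's model.
 from collections import defaultdict
 
 def assess(extractions, p0=0.5):
     """extractions: iterable of (tuple, sentence_id) pairs."""
     support = defaultdict(set)
     for t, sid in extractions:
         support[t].add(sid)          # count distinct sentences only
     # P(correct | k independent supporting sentences) = 1 - (1 - p0)^k
     return {t: 1 - (1 - p0) ** len(s) for t, s in support.items()}
 
 ex = [(("Edison", "invented", "phonograph"), "s1"),
       (("Edison", "invented", "phonograph"), "s2"),
       (("Edison", "invented", "phonograph"), "s2"),  # duplicate sentence
       (("Paris", "capital of", "France"), "s3")]
 for t, p in assess(ex).items():
     print(t, round(p, 2))
 # ('Edison', 'invented', 'phonograph') 0.75
 # ('Paris', 'capital of', 'France') 0.5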

Experiments and Evaluation

The authors ran TextRunner on a corpus of approximately 9 million web pages, from which the extractor produced roughly 60.5 million tuples. After discarding tuples whose relations or arguments were not well-formed and keeping only high-probability extractions, about 7.8 million tuples remained.

To evaluate quality, the authors manually reviewed a random sample of the high-probability tuples and judged about 80.4% of them to be correct. They also compared TextRunner against KnowItAll, a state-of-the-art closed (relation-specific) extraction system, on a set of ten pre-selected relations: TextRunner extracted a comparable number of correct facts with a 33% lower error rate, while running far faster because its single extraction pass covers all relations at once.