== Citation ==
 
Wu, F. and Weld, D. 2008. Automatically Refining the Wikipedia Infobox Ontology. In Proceedings of the 17th International Conference on World Wide Web (WWW 2008), pp. 635-644, ACM, New York.
  
 
== Online version ==
 
[http://ai.cs.washington.edu/www/media/papers/Automatically_Refining_the_Wikipedia_Infobox_Ontology.pdf University of Washington]
  
 
== Summary ==
 
This is a [[Category::paper]] that introduces an autonomous system for refining [[UsesDataset::Wikipedia]]’s
 +
infobox information schema to create a cleanly-structured ontology. Advanced query capability, improved information extractors and semiautomatic generation of new infobox templates are shown as advantages of a refined ontology. The [[AddressesProblem::ontology refinement]] problem is solved using both [[UsesMethod::Support Vector Machines]] (SVM) and a more powerful joint-inference approach expressed in [[UsesMethod::Markov Logic Networks]] (MLN).
  
The autonomous system, presented as the Kylin Ontology Generator (KOG), comprises three modules: a schema cleaner, which merges duplicate classes and attributes and prunes rarely-used ones; a subsumption detector, which identifies '''[http://en.wikipedia.org/wiki/is-a is-a]''' relations between infobox classes (e.g. "volleyball player" is-a "athlete"); and a schema mapper, which builds attribute mappings between related infobox classes.
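To make the pipeline structure concrete, the following minimal Python sketch shows how three such stages could be chained. Every interface, threshold and heuristic here is a hypothetical stand-in, not KOG's actual code; in particular, the trivial name-inclusion test only stands in for the SVM/MLN subsumption classifier described under "Methods used".

<pre>
# Illustrative three-stage, KOG-style pipeline (all names and heuristics hypothetical).
from dataclasses import dataclass, field

@dataclass
class InfoboxClass:
    name: str
    attributes: set = field(default_factory=set)
    instance_count: int = 0

def clean_schemata(classes):
    """Schema cleaner: prune rarely-used classes (toy threshold)."""
    return [c for c in classes if c.instance_count >= 5]

def detect_subsumptions(classes):
    """Subsumption detector: return (child, parent) pairs judged to be is-a.
    A trivial class-name-inclusion heuristic stands in for the SVM/MLN classifier."""
    return [(child.name, parent.name)
            for child in classes for parent in classes
            if child is not parent and parent.name in child.name]

def map_attributes(classes, subsumptions):
    """Schema mapper: align identically-named attributes of related classes."""
    by_name = {c.name: c for c in classes}
    return {(child, parent): sorted(by_name[child].attributes & by_name[parent].attributes)
            for child, parent in subsumptions}

classes = [
    InfoboxClass("athlete", {"name", "birth_date"}, instance_count=900),
    InfoboxClass("volleyball athlete", {"name", "birth_date", "position"}, instance_count=40),
    InfoboxClass("obscure template", {"foo"}, instance_count=2),
]
cleaned = clean_schemata(classes)            # drops the rarely-used template
subsumptions = detect_subsumptions(cleaned)  # [('volleyball athlete', 'athlete')]
print(map_attributes(cleaned, subsumptions))
</pre>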
  
== Methods used ==
  
The subsumption detection task is modeled as a binary classification problem, and several intuitive indicators are used as features to train two classifiers: one using SVM and the other using MLN. Some of these features are similarity measures between infobox classes, based on TF/IDF scores between bags of words taken from their attribute sets and from the first sentence of each of their instances (articles). Other features include category tags, class-name string inclusion, edit history and Hearst patterns. Additionally, a set of heuristics is used to map each infobox class to a WordNet node; whether one class's mapped node is subsumed by another class's mapped node in WordNet is then used as a further classification feature.
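As an illustration of one such similarity feature, the sketch below computes TF/IDF cosine similarity between bags of words built from (made-up) attribute names of three infobox classes, using scikit-learn. It only approximates the kind of feature described above and is not the paper's code.

<pre>
# TF-IDF cosine similarity between attribute-name bags of words (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

attribute_bags = {
    "volleyball player": "name birth date height spike block national team",
    "athlete":           "name birth date height weight sport country",
    "company":           "founded headquarters industry revenue employees",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(attribute_bags.values())
sims = cosine_similarity(matrix)

names = list(attribute_bags)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"sim({names[i]}, {names[j]}) = {sims[i, j]:.2f}")
</pre>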
  
=== Joint-inference classification ===
Both the SVM classifier and the MLN model are trained using the features above, but the MLN classifier exploits additional information. First, if "Class1 is-a Class2" and "Class2 is-a Class3", then it is likely that "Class1 is-a Class3". Second, although the WordNet mapping and the is-a binary classification can be treated as separate problems, evidence from either one helps reduce the uncertainty of the other. This knowledge is represented in the MLN model as additional logical implications with an attached measure of uncertainty:
* <math>\text{is-a}(c_{1}, c_{2}) \wedge \text{is-a}(c_{2}, c_{3}) \Rightarrow \text{is-a}(c_{1}, c_{3})</math> (the intuition that is-a is transitive),
* <math>map(c_{1}) \wedge map(c_{2}) \wedge \text{is-a-WN}(c_{1}, c_{2}) \Rightarrow \text{is-a}(c_{1}, c_{2})</math> (which means that if two infobox classes have correct WordNet mappings and their mapped nodes are is-a according to WordNet, then they should also be in a subsumption relation in the ontology).
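The toy Python sketch below (not an MLN, and not the paper's inference procedure) illustrates the intuition behind these two rules: per-pair confidence scores from a base classifier are boosted when a pair is supported by WordNet evidence or by a transitive chain. All pairs, scores and weights are made up.

<pre>
# Toy joint-inference illustration; a real MLN would learn rule weights and run
# probabilistic inference instead of this hand-rolled score propagation.
isa_score = {                                   # hypothetical base classifier scores
    ("volleyball player", "athlete"): 0.8,
    ("athlete", "person"): 0.9,
    ("volleyball player", "person"): 0.4,
}
# Pairs whose mapped WordNet nodes stand in an is-a relation (hypothetical).
wordnet_isa = {("volleyball player", "athlete")}

def refine(scores, wn_isa, rounds=3, w_trans=0.3, w_wn=0.4):
    scores = dict(scores)
    for _ in range(rounds):
        for (a, b) in list(scores):
            # Rule 2: agreement of the mapped WordNet nodes raises confidence.
            if (a, b) in wn_isa:
                scores[(a, b)] += w_wn * (1.0 - scores[(a, b)])
            # Rule 1: transitivity -- is-a(a,b) and is-a(b,c) support is-a(a,c).
            for (b2, c) in list(scores):
                if b2 == b and c != a:
                    support = scores[(a, b)] * scores[(b2, c)]
                    old = scores.get((a, c), 0.0)
                    scores[(a, c)] = max(old, old + w_trans * (support - old))
    return scores

print(refine(isa_score, wordnet_isa))
</pre>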
  
== Experimental results ==
  
A labeled dataset of 205 positive and 358 negative is-a pairs is used for training the classifiers. This dataset is constructed in part using [[UsesDataset::DBpedia]]'s manually-created mapping from 287,676 Wikipedia articles to their corresponding WordNet nodes. The performance of three different classifiers is tested with five-fold cross-validation on the dataset: the SVM classifier, an MLN classifier using only the exact same features as the SVM one, and a fully-functional MLN classifier (called MLN+) using the additional formulas for crosstalk between WordNet mapping and is-a classification.
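For concreteness, the snippet below sketches this evaluation protocol for the SVM baseline with scikit-learn: five-fold cross-validation reporting precision and recall. The feature matrix is random placeholder data (only the 205/358 class balance mirrors the paper's dataset), so the numbers it prints are meaningless.

<pre>
# Five-fold cross-validation of an SVM on pairwise subsumption features.
# Random placeholder features; only the class balance comes from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(563, 6))        # 563 candidate is-a pairs, 6 features each
X[:205] += 1.0                       # give the positive class a weak, fake signal
y = np.array([1] * 205 + [0] * 358)

scores = cross_validate(SVC(kernel="rbf"), X, y, cv=5,
                        scoring=("precision", "recall"))
print("precision: %.3f  recall: %.3f"
      % (scores["test_precision"].mean(), scores["test_recall"].mean()))
</pre>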
The SVM classifier achieves a precision of 97.2% and a recall of 88.6%. Although the MLN model drops precision slightly to 96.8%, it has better recall at 92.1%. Finally, MLN+ wins on both measures, increasing precision to 98.8% and recall to 92.5%, showing the impact of joint inference on the task of subsumption detection, and therefore on ontology refinement.
 
 
== Related papers ==
 
The autonomous system KOG is designed with the goal of situating semantic knowledge extracted from Wikipedia's natural language text (described in [[RelatedPaper::Wu and Weld CIKM 2007]]) in a clean and useful ontology. A follow-up paper, [[RelatedPaper::Wu et al KDD 2008]], presents techniques for increasing recall while extracting information from Wikipedia's long tail of sparse classes, by applying the automatically-learned subsumption taxonomy. The refined ontology applied to Wikipedia's infobox schema can also provide training data to bootstrap open information extractors, such as the ones described in [[RelatedPaper::Weld et al SIGMOD 2009]] and [[RelatedPaper::Wu and Weld ACL 2010]].
 
