Difference between revisions of "Blei et al, 2002"

From Cohen Courses

Revision as of 23:11, 31 October 2010

Citation

Carlson, A. and S. Schafer. 2008. Bootstrapping Information Extraction from Semi-structured Web Pages. In ECML PKDD '08: Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I, pp. 195-210, Berlin, Heidelberg.

Online version

Carlson-ECML08

Summary

This paper introduces a novel hierarchical probabilistic model that combines both global and local features in the learning process. The authors apply their technique to extracting structured data from the Web.

The method first requires a set of human-annotated web pages. The annotator decides which schema columns are present in the input web pages and annotates a very small number of pages from four to six different websites. Given this training data, the program trains four different classifiers (each using a different type of features) for the annotated fields. Using these trained classifiers, it then extracts the data that maximize the classifiers' confidence values.
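
The sketch below illustrates this train-then-extract loop under assumptions that are not from the paper: scikit-learn models stand in for the four classifier types, two simple feature views (word and character n-grams) replace the paper's feature sets, and the field names and candidate text nodes are invented for illustration.

 from collections import defaultdict
 from sklearn.feature_extraction.text import CountVectorizer
 from sklearn.linear_model import LogisticRegression
 from sklearn.pipeline import make_pipeline
 
 def train_field_classifiers(annotated_examples):
     """Train one classifier per feature view on (field, text value) pairs
     taken from the handful of hand-annotated pages."""
     texts = [text for _, text in annotated_examples]
     labels = [field for field, _ in annotated_examples]
     # Two feature views stand in for the paper's four classifier types:
     # word-level and character-level representations of the candidate value.
     views = {
         "words": CountVectorizer(analyzer="word"),
         "chars": CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
     }
     classifiers = {}
     for name, vectorizer in views.items():
         clf = make_pipeline(vectorizer, LogisticRegression(max_iter=1000))
         clf.fit(texts, labels)
         classifiers[name] = clf
     return classifiers
 
 def extract_fields(classifiers, candidate_values):
     """For each schema field, pick the candidate text node whose summed
     confidence across the per-view classifiers is highest."""
     best = {}
     for value in candidate_values:
         scores = defaultdict(float)
         for clf in classifiers.values():
             probs = clf.predict_proba([value])[0]
             for field, p in zip(clf.classes_, probs):
                 scores[field] += p
         for field, score in scores.items():
             if field not in best or score > best[field][1]:
                 best[field] = (value, score)
     return {field: value for field, (value, _) in best.items()}
 
 # Toy usage on a vacation-rental-style schema (illustrative data only).
 train = [
     ("price", "$120 per night"), ("price", "$95 / night"),
     ("bedrooms", "3 bedrooms"), ("bedrooms", "2 BR"),
     ("location", "Lake Tahoe, CA"), ("location", "Outer Banks, NC"),
 ]
 clfs = train_field_classifiers(train)
 print(extract_fields(clfs, ["$150 per night", "4 bedrooms", "Key West, FL"]))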

To evaluate their method, the authors use a logistic regression classifier as the baseline. The technique is tested on two domains: vacation rentals and job sites. They show that by annotating 2-5 pages for 4-6 websites, their technique achieves an accuracy of 84% on job offer sites and 91% on vacation rental sites.
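
As a rough illustration of the accuracy measure reported here, the sketch below scores per-field extraction against gold annotations; the dict-based page representation and the field names are assumptions for illustration, not the paper's data format.

 def field_accuracy(gold_pages, predicted_pages):
     """Fraction of (page, field) pairs whose extracted value matches the gold value."""
     correct = total = 0
     for gold, pred in zip(gold_pages, predicted_pages):
         for field, gold_value in gold.items():
             total += 1
             correct += int(pred.get(field) == gold_value)
     return correct / total if total else 0.0
 
 # Hypothetical job-offer pages: one of the two fields is extracted correctly.
 gold = [{"title": "Software Engineer", "salary": "$90k"}]
 pred = [{"title": "Software Engineer", "salary": "$85k"}]
 print(field_accuracy(gold, pred))  # 0.5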

Related papers