Crescenzi et al, 2001


Citation

V. Crescenzi, G. Mecca, and P. Merialdo. RoadRunner: Towards automatic data extraction from large web sites. In Proc. of the 2001 Intl. Conf. on Very Large Data Bases, pages 109–118, 2001.


Online version

[[1]]

Summary

This paper introduces a novel technique for automatic wrapper generation that compares HTML pages and builds a wrapper from the similarities between them. The technique targets websites that contain large amounts of data (i.e. data-intensive websites), and it assumes that the webpages of a given website share a fairly similar structure. The main advantages of this technique are:

- This technique does not require any interaction with the user during wrapper generation, which extends its applicability: wrappers can be learned for an input website automatically, without any human supervision.

- The technique does not require any prior knowledge about the structure of the input web pages.

Given two web pages as input, the technique compares their content and generates a wrapper from the similarities and dissimilarities between them. The authors developed a matching technique to extract a wrapper from the input webpages. The matching algorithm works on two objects at the same time: (1) a list of tokens and (2) a wrapper. It initially treats one of the input webpages as the wrapper and then iteratively refines that wrapper by processing new web pages. While processing a new web page it may find a mismatch between the structure of the page and the current wrapper; in that case it tries to generalize the wrapper to resolve the mismatch.
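A minimal sketch of this iterative refinement, assuming the pages are flattened into simple (kind, value) token lists; the class and function names below are illustrative and are not taken from the paper:

<pre>
from html.parser import HTMLParser

class PageTokenizer(HTMLParser):
    """Flattens an (X)HTML page into ('tag', ...) and ('text', ...) tokens."""
    def __init__(self):
        super().__init__()
        self.tokens = []

    def handle_starttag(self, tag, attrs):
        self.tokens.append(("tag", "<%s>" % tag))

    def handle_endtag(self, tag):
        self.tokens.append(("tag", "</%s>" % tag))

    def handle_data(self, data):
        if data.strip():
            self.tokens.append(("text", data.strip()))

def tokenize(page_html):
    parser = PageTokenizer()
    parser.feed(page_html)
    return parser.tokens

def build_wrapper(pages, solve_mismatches):
    """One sample page seeds the wrapper; every further page is matched
    against it, and each mismatch triggers a generalization step.
    solve_mismatches is a caller-supplied generalization step (see the
    mismatch discussion below)."""
    wrapper = tokenize(pages[0])               # initial wrapper = first sample
    for page in pages[1:]:
        wrapper = solve_mismatches(wrapper, tokenize(page))
    return wrapper
</pre>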

Given two webpages, the algorithm first converts the HTML files to XHTML format. It then selects one of the webpages randomly, for example page 1, and builds a wrapper based on it. The wrapper is then refined against the content of the second webpage by solving mismatches between the wrapper and that page. There are two kinds of mismatches (a sketch of how they can be detected follows the list):

- String mismatches: mismatches between different strings that occur at the same position in the input webpages.

- Tag mismatches: mismatches between different tags in the wrapper and the new webpage.
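A small sketch of how these two kinds of mismatches can be detected, assuming the (kind, value) token lists from the sketch above; this is an illustration, not the paper's actual matching procedure:

<pre>
def find_mismatch(wrapper, tokens):
    """Walk the wrapper and the new page in parallel and report the first
    position where they disagree, together with the kind of mismatch."""
    for i, (w, t) in enumerate(zip(wrapper, tokens)):
        if w == t:
            continue                     # tokens agree, keep scanning
        if w[0] == "text" and t[0] == "text":
            return ("string", i)         # different strings, same position
        return ("tag", i)                # structural (tag) disagreement
    if len(wrapper) != len(tokens):
        return ("tag", min(len(wrapper), len(tokens)))  # one side ran out early
    return None                          # perfect match, nothing to solve
</pre>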

In most cases, mismatches can be solved in the following ways (a worked toy example follows the list):

- String mismatches usually correspond to different values of an underlying database field. In this case the mismatch is solved by treating that position as a data field and adding the differing values to the extracted data.

- Tag mismatches (discovering optionals): a tag mismatch means that we are dealing either with an optional field or with an iterator. In the case of an optional field, either the new webpage or the wrapper contains a piece of HTML code that is not present on the other side. The mismatch is solved by first determining the boundary of the optional field and then generalizing the wrapper to cover it.

- Tag mismatches (discovering iterators): another reason for tag mismatches is a different cardinality of repeated items in the webpages (e.g. a different number of book titles). To solve such mismatches, the repeated pattern has to be identified; the paper describes the details of the technique for handling this case.
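As a toy illustration of these three cases (the pages and the resulting wrapper below are made up for this summary, not taken from the paper), matching a second page against a wrapper seeded from a first page could produce a regular-expression-like wrapper with fields, an optional, and an iterator:

<pre>
# Two toy pages from the same fictitious site:
page1 = "<html><b>John</b><ul><li>DB Primer</li><li>XML at Work</li></ul></html>"
page2 = "<html><b>Paul</b><i>Student</i><ul><li>HTML Scripts</li></ul></html>"

# Matching page2 against a wrapper seeded from page1 yields:
#   - a string mismatch at John/Paul         -> that position becomes a field (#PCDATA)
#   - a tag mismatch at <i> (only in page2)  -> the <i>Student</i> block becomes optional
#   - a string mismatch at the first title   -> the title becomes a field
#   - a tag mismatch at the second <li>      -> the <li> block is marked as repeated (iterator)
#
# Generalized, regular-expression-like wrapper over the page tokens:
wrapper = "<html><b>#PCDATA</b>(<i>Student</i>)?<ul>(<li>#PCDATA</li>)+</ul></html>"
</pre>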

The above describes simple examples of solving mismatches; the paper discusses more complex cases in detail. Depending on the type of each mismatch, the wrapper is generalized accordingly.

This technique was tested on several well-known data-intensive web sites. For each web site, the authors downloaded 10-20 pages with similar structure and gave them to the program to generate a wrapper. The results show that the technique was able to extract the datasets of 8 of the 10 tested websites.