{ "paper_id": "Y11-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:39:18.239343Z" }, "title": "Automatic Wrapper Generation and Maintenance", "authors": [ { "first": "Yingju", "middle": [], "last": "Xia", "suffix": "", "affiliation": { "laboratory": "LTD. 15F Tower A", "institution": "", "location": { "addrLine": "No.56 Dong Si Huan Zhong Rd", "postCode": "100025", "settlement": "Chaoyang District, Beijing", "country": "China" } }, "email": "yjxia@cn.fujitsu.com" }, { "first": "Yuhang", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "LTD. 15F Tower A", "institution": "", "location": { "addrLine": "No.56 Dong Si Huan Zhong Rd", "postCode": "100025", "settlement": "Chaoyang District, Beijing", "country": "China" } }, "email": "" }, { "first": "Fujiang", "middle": [], "last": "Ge", "suffix": "", "affiliation": { "laboratory": "LTD. 15F Tower A", "institution": "", "location": { "addrLine": "No.56 Dong Si Huan Zhong Rd", "postCode": "100025", "settlement": "Chaoyang District, Beijing", "country": "China" } }, "email": "" }, { "first": "Shu", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "LTD. 15F Tower A", "institution": "", "location": { "addrLine": "No.56 Dong Si Huan Zhong Rd", "postCode": "100025", "settlement": "Chaoyang District, Beijing", "country": "China" } }, "email": "" }, { "first": "Hao", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "LTD. 15F Tower A", "institution": "", "location": { "addrLine": "No.56 Dong Si Huan Zhong Rd", "postCode": "100025", "settlement": "Chaoyang District, Beijing", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper investigates automatic wrapper generation and maintenance for Forums, Blogs and News web sites. Web pages are increasingly dynamically generated using a common template populated with data from databases. This paper proposes a novel method that uses tree alignment and transfer learning method to generate the wrapper from this kind of web pages. The tree alignment algorithm is adopted to find the best matching structure of the input web pages. A kind of linear regression method is employed to get the weight of different tag-matching. A transfer learning method is adopted to find the most likely content block. A wrapper built on the most probable content block and the repeating patterns extracts data from web pages. The wrapper maintenance arises because web source may experiment changes that invalidate the current wrappers. This paper presents a wrapper maintenance method using a log likelihood ratio test for detecting the change points on the similarity series which gotten from the wrapper and input web pages. The wrapper generation method is applied to generate a wrapper once the web source change is detected. Experimental results show that the method achieves high accuracy and has steady performance", "pdf_parse": { "paper_id": "Y11-1010", "_pdf_hash": "", "abstract": [ { "text": "This paper investigates automatic wrapper generation and maintenance for Forums, Blogs and News web sites. Web pages are increasingly dynamically generated using a common template populated with data from databases. This paper proposes a novel method that uses tree alignment and transfer learning method to generate the wrapper from this kind of web pages. The tree alignment algorithm is adopted to find the best matching structure of the input web pages. 
A kind of linear regression method is employed to get the weight of different tag-matching. A transfer learning method is adopted to find the most likely content block. A wrapper built on the most probable content block and the repeating patterns extracts data from web pages. The wrapper maintenance arises because web source may experiment changes that invalidate the current wrappers. This paper presents a wrapper maintenance method using a log likelihood ratio test for detecting the change points on the similarity series which gotten from the wrapper and input web pages. The wrapper generation method is applied to generate a wrapper once the web source change is detected. Experimental results show that the method achieves high accuracy and has steady performance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Web-based information is typically formatted to be read by human users, not by computer applications. Information agents are being proposed which automatically extract information from multiple websites. Data is typically extracted from web sources by writing specialized programs, called wrappers (Laender et al. 2002) , which identify data of interest and map them to a suitable format.", "cite_spans": [ { "start": 298, "end": 319, "text": "(Laender et al. 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many approaches have been reported in the literature for wrapper generation. Detailed discussions of various approaches can be found in several surveys (Chang et al., 2006; Laender et al., 2002) .", "cite_spans": [ { "start": 152, "end": 172, "text": "(Chang et al., 2006;", "ref_id": "BIBREF4" }, { "start": 173, "end": 194, "text": "Laender et al., 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early approaches were based on manual techniques (Atzeni and Mecca, 1997; Crescenzi and Mecca, 1998; Huck et al., 1998; Sahuguet and Azavant, 1999) . By observing a web page and its source code, the programmer find some patterns from the page and then write a program to extract data from the web pages. A key problem with manually coded wrappers is that writing them is a difficult and labor-intensive task, and tends to be brittle and difficult to maintain.", "cite_spans": [ { "start": 61, "end": 73, "text": "Mecca, 1997;", "ref_id": "BIBREF1" }, { "start": 74, "end": 100, "text": "Crescenzi and Mecca, 1998;", "ref_id": "BIBREF7" }, { "start": 101, "end": 119, "text": "Huck et al., 1998;", "ref_id": "BIBREF11" }, { "start": 120, "end": 147, "text": "Sahuguet and Azavant, 1999)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Other approaches have some degrees of automation. In semi-automatic approaches (Cohen et al., 2002; Irmak and Suel, 2006; Kushmerick, 2000; Muslea et al., 1999; Pinto et al., 2003; Wang and Hu, 2002; Zheng et al., 2007) , a set of extraction rules are learnt from a set of manually labeled pages or data records. These rules are then used to extract data items from similar pages. 
Such methods still require substantial manual effort.", "cite_spans": [ { "start": 79, "end": 99, "text": "(Cohen et al., 2002;", "ref_id": "BIBREF6" }, { "start": 100, "end": 121, "text": "Irmak and Suel, 2006;", "ref_id": "BIBREF13" }, { "start": 122, "end": 139, "text": "Kushmerick, 2000;", "ref_id": "BIBREF16" }, { "start": 140, "end": 160, "text": "Muslea et al., 1999;", "ref_id": "BIBREF21" }, { "start": 161, "end": 180, "text": "Pinto et al., 2003;", "ref_id": "BIBREF22" }, { "start": 181, "end": 199, "text": "Wang and Hu, 2002;", "ref_id": "BIBREF28" }, { "start": 200, "end": 219, "text": "Zheng et al., 2007)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among automatic methods, Arasu and Garcia-Molina (2003), Chang and Liu (2001) and Crescenzi et al. (2001) discovered patterns or grammars from multiple pages containing similar data records. Wang and Lochovsky (2003) treated the input pages as strings and employed a suffix-tree algorithm to discover continuously repeated substrings. Lerman et al. (2004) utilized the detail pages behind the current page to identify data records. Simon and Lausen (2005) identified and ranked potential repeated patterns using visual features; matched subsequences of the highest-weighted pattern were then aligned with global multiple sequence alignment techniques.", "cite_spans": [ { "start": 22, "end": 45, "text": "Arasu and Garcia-Molina (2003)", "ref_id": "BIBREF0" }, { "start": 48, "end": 68, "text": "Chang and Liu (2001)", "ref_id": null }, { "start": 73, "end": 96, "text": "Crescenzi et al. (2001)", "ref_id": "BIBREF8" }, { "start": 177, "end": 202, "text": "Wang and Lochovsky (2003)", "ref_id": "BIBREF27" }, { "start": 333, "end": 353, "text": "Lerman et al. (2004)", "ref_id": "BIBREF19" }, { "start": 453, "end": 466, "text": "Simon and Lausen (2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several methods have been presented to address the wrapper maintenance problem. Kushmerick (1999) defined the \"wrapper verification\" problem: checking whether a wrapper has stopped extracting correct data. The proposed solution analyzes pages and the extracted information to detect page changes; if the pages have changed, the designer is notified so that the wrapper can be relearned from pages with the new structure. Lerman et al. (2003) developed a method for repairing wrappers in the case of small mark-up changes. Chidlovskii (2001) presented an automatic maintenance approach that repairs wrappers under the assumption that only small changes occur. Raposo et al. (2005) had wrappers collect results from valid queries during their operation; when the source changes, those results are used to generate a new training set of labeled examples and bootstrap the wrapper induction process again. Meng et al. (2003) presented a schema-guided approach based on the observation that, despite various page changes, many important features of the pages are preserved, such as syntactic patterns, annotations, and hyperlinks of the extracted data items. Their approach uses these preserved features to identify the locations of the desired values in the changed pages and repairs the wrappers accordingly by inducing semantic blocks from the HTML tree.", "cite_spans": [ { "start": 75, "end": 92, "text": "Kushmerick (1999)", "ref_id": null }, { "start": 423, "end": 443, "text": "Lerman et al. (2003)", "ref_id": "BIBREF18" }, { "start": 524, "end": 542, "text": "Chidlovskii (2001)", "ref_id": "BIBREF5" }, { "start": 665, "end": 684, "text": "Raposo et al.(2005)", "ref_id": "BIBREF23" }, { "start": 911, "end": 929, "text": "Meng et al. (2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These previous methods focus on list pages (each such page contains a list of objects, for example the pages of a shopping website such as Amazon.com). Pages of this kind can be retrieved using queries, which enables the \"wrapper verification\" procedure. For web pages from News, Forum and Blog sites, however, the \"wrapper verification\" approach cannot be applied, because such pages cannot be retrieved with queries and thus no valid training set can be provided for wrapper maintenance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents a method that uses tree alignment to automatically build wrappers for web pages from News, Forum and Blog websites. A linear regression method is proposed to learn the weights of different tag matchings. Based on the alignment, we merge the trees into one union tree whose nodes record statistical information gathered from multiple web pages. We use a transfer learning method to find the most likely content block, and the alignment algorithm to detect the repeating patterns in the union tree. A log likelihood ratio test is adopted for wrapper maintenance. Because likelihood ratios describe evidence rather than embody a decision, they can easily be adapted to the various goals for which inferential statistics might be used, and they provide an intuitive way to summarize the evidence in the wrapper maintenance scenario.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For wrapper generation, in terms of the techniques used, the most relevant approaches are (Zhai and Liu, 2005; Zigoris et al., 2006). Zhai and Liu (2005) used a partial alignment method to align and extract data items from the identified data records. Zigoris et al. (2006) used Support Vector Machines (SVM) to learn the tree alignment parameters; with well-tuned parameters these models are resilient.", "cite_spans": [ { "start": 94, "end": 114, "text": "(Zhai and Liu, 2005;", "ref_id": "BIBREF30" }, { "start": 115, "end": 136, "text": "Zigoris et al., 2006)", "ref_id": "BIBREF33" }, { "start": 139, "end": 157, "text": "Zhai and Liu(2005)", "ref_id": "BIBREF30" }, { "start": 254, "end": 274, "text": "Zigoris et al.(2006)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Compared with these methods, the wrapper generation method proposed in this study uses a linear regression method to learn the weights of different tag matchings.
Our algorithm is designed to exploit different node features and different matching weights, whereas the other methods do not take into account the categories of HTML tags, the properties of nodes at different levels, or the text features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Another major difference between the proposed method and previous work is the way the alignment algorithm and the statistics are used. Zhai and Liu (2005) used the alignment algorithm to align the data items (data fields) of the identified data records, creating a link whenever a match was found. The method proposed in this paper uses the alignment algorithm to obtain the skeleton of the input trees, merges the trees into one union tree, and records the statistical information. The proposed method then employs a dedicated step to locate the most probable content block. The statistics recorded in the union tree make this step more accurate than the heuristics that are often used alone to differentiate content from junk information.", "cite_spans": [ { "start": 140, "end": 158, "text": "Zhai and Liu(2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "We present a transfer learning method to obtain the weight of each feature when finding the most probable content block. The proposed method achieves steady performance owing to the statistics used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "For wrapper maintenance, as mentioned in Section 1, previous methods focus on list pages, and the \"wrapper verification\" approach cannot be applied to web pages from News, Forum and Blog sites. We are not aware of any literature considering wrapper maintenance for such websites. In this paper, a log likelihood ratio test is adopted for wrapper maintenance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "The proposed method consists of several steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Generation", "sec_num": "3" }, { "text": "(1) Wrapper generation. We use the tree alignment method to calculate the similarity between the input web pages and build a wrapper on the alignment results; the same method is also used to calculate the similarity between the wrapper and the input web pages. The input trees are merged into one union tree whose nodes record statistical information such as the number of times a node has been aligned and the text length of the node. A heuristic method is employed to find the most probable content block, and the alignment algorithm is applied again to detect the repeating patterns in the union tree. The wrapper is generated from the most probable content block and the repeating patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Generation", "sec_num": "3" }, { "text": "(2) Similarity series generation. A similarity series is built by calculating the similarity between the input web pages and the current wrapper, using the tree alignment algorithm proposed in this paper. The series is ordered by the input web pages' timestamps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Generation", "sec_num": "3" }, { "text": "(3) Change point detection and wrapper regeneration. A log likelihood ratio test is utilized to detect change points in the similarity series, and the wrapper generation method is applied again to generate a new wrapper once a change point is detected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Generation", "sec_num": "3" },
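To make step (3) concrete, the sketch below detects a change point in a similarity series with a log likelihood ratio test. The detailed likelihood model is not given at this point (and the maintenance section is truncated in this parse), so the Gaussian segment likelihoods and the threshold value are illustrative assumptions rather than the authors' exact formulation.

```python
# Hedged sketch: change point detection on a wrapper/page similarity series
# via a log likelihood ratio test, assuming Gaussian segment likelihoods.
import math
from typing import List, Optional

def gaussian_log_likelihood(xs: List[float]) -> float:
    """Maximized log likelihood of xs under a single Gaussian."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n + 1e-9  # guard against log(0)
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def detect_change_point(sim: List[float], threshold: float = 10.0) -> Optional[int]:
    """Return the split index maximizing the log likelihood ratio between a
    two-segment model and a one-segment model, if it exceeds the threshold."""
    n = len(sim)
    if n < 4:
        return None
    ll_single = gaussian_log_likelihood(sim)
    best_idx, best_llr = None, threshold
    for k in range(2, n - 1):  # candidate change points
        llr = (gaussian_log_likelihood(sim[:k])
               + gaussian_log_likelihood(sim[k:])
               - ll_single)
        if llr > best_llr:
            best_idx, best_llr = k, llr
    return best_idx

# Similarities between the current wrapper and pages ordered by timestamp:
series = [0.95, 0.93, 0.96, 0.94, 0.95, 0.41, 0.44, 0.40, 0.43, 0.42]
print(detect_change_point(series))  # -> 5: regenerate the wrapper here
```

Because the ratio summarizes evidence rather than committing to a fixed decision rule, the threshold can be tuned to trade detection delay against false alarms.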
{ "text": "In this study, we are interested in one specific type of tree, the labeled ordered rooted tree. A rooted tree is a tree whose root vertex is fixed; an ordered rooted tree is a rooted tree in which the relative order of the children is fixed for each vertex. We use the tree edit distance to evaluate the structural similarity between web pages. In its traditional formulation, the tree edit distance problem considers three operations: node removal, node insertion and node replacement, and its solution consists of determining the minimal set of operations that transforms one tree into the other. An equivalent formulation is to discover a mapping with minimum cost between the two trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Generation", "sec_num": "3" }, { "text": "In this work, we focus on setting the weight (cost) of the different node mappings (tag matchings). One major contribution of our work is a linear regression method for obtaining the weights of different tag matchings. Another contribution is the way we use the similarity between trees, together with the transfer learning method employed to find the most likely content block.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Generation", "sec_num": "3" }, { "text": "The main problem with previous methods is that they do not employ different weights for different tag matchings. For example, HTML tags are divided into two categories: block elements and inline elements. Block elements usually, but not always, contain other elements and normally act as containers of some sort, while inline elements normally mark up the semantic meaning of something. Furthermore, the level of a node should also be considered: higher-level nodes should receive higher weights, as they usually act as larger structural blocks. Different weights should therefore be assigned to different types of tag matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" },
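To illustrate the weighted matching idea before the weights are derived, here is a minimal sketch that scores the alignment of two ordered DOM trees with a simple-tree-matching style dynamic program, giving block elements and nodes nearer the root larger gains. The `Node` class, the `BLOCK_TAGS` set and the concrete weight values are assumptions for illustration; the paper learns its weights with the regression method described next rather than fixing them by hand.

```python
# Sketch of weighted alignment of two ordered DOM trees (illustrative only).
from dataclasses import dataclass, field
from typing import List

BLOCK_TAGS = {"div", "table", "tr", "td", "ul", "li", "p"}  # assumed subset

@dataclass
class Node:
    tag: str
    children: List["Node"] = field(default_factory=list)

def match_weight(a: Node, b: Node, level: int) -> float:
    # Gain for matching two nodes: block elements and nodes nearer the root
    # get larger weights, following the argument in Section 3.1.
    if a.tag != b.tag:
        return 0.0
    base = 2.0 if a.tag in BLOCK_TAGS else 1.0  # category weight (assumed)
    return base / (1 + level)                   # deeper nodes count less

def tree_align(a: Node, b: Node, level: int = 0) -> float:
    # Weighted simple tree matching: the DP finds the best order-preserving
    # matching between the two children sequences.
    w = match_weight(a, b, level)
    if w == 0.0:
        return 0.0
    m, n = len(a.children), len(b.children)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1],
                           dp[i - 1][j - 1]
                           + tree_align(a.children[i - 1], b.children[j - 1], level + 1))
    return w + dp[m][n]

# Two small trees differing in one leaf: shared structure dominates the gain.
t1 = Node("div", [Node("p"), Node("span")])
t2 = Node("div", [Node("p"), Node("a")])
print(tree_align(t1, t2))  # -> 3.0: div (2.0) + p (1.0); span/a adds nothing
```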
{ "text": "In this study, a linear regression method is employed to obtain the weights of the different tag matchings. First, we collect similar web pages belonging to the same \"class\", i.e. pages that share common format and layout characteristics and are usually generated from the same template, for example the pages of the same board on one Forum website; it is feasible to gather such a collection automatically. We then use this collection to find the optimal weighting schema.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" }, { "text": "Let $w_i$ be the weight of a tag matching, with $w_i > w_j$ for $i < j$, and let $D_{mn}$ be the sum of the gains in the best alignment between the trees $T_m$ and $T_n$:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" }, { "text": "$D_{mn} = \\sum_i t_i^{mn} w_i$, (1) where $t_i^{mn}$ is the number of times $w_i$ occurs in the alignment procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" }, { "text": "The sum of the gains over the collection is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" }, { "text": "$f = \\sum_{m,n} D_{mn} = \\sum_{m,n} \\sum_i t_i^{mn} w_i = \\sum_i w_i \\sum_{m,n} t_i^{mn}$ (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" }, { "text": "Because the collection consists of similar web pages belonging to the same \"class\", the set of $w_i$ that maximizes $f$ is selected. To obtain a unique solution, a constraint is added and the group of equations is rewritten as: (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" }, { "text": "The solution of the above equations is used as the weight of each type of tag matching ($w_i$). Figure 1 illustrates the weight setting method: for one collection of similar web pages belonging to the same \"class\", we calculate the sum of the alignment gains (i.e. the similarity) for each weighting schema, and the best weighting schema is the one that maximizes this sum. That is, we find the set of $w_i$ that yields the maximum $f$ in equations (3).", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Automatically getting tag-matching weight", "sec_num": "3.1" },
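A minimal sketch of this selection procedure under stated assumptions: `align(t_m, t_n, w)` stands for a weighted tree aligner (for instance the `tree_align` sketch above, extended to take a weight vector), and the hand-enumerated candidate schemas stand in for solving equations (3) exactly.

```python
# Sketch of weight selection per equations (1)-(3): choose the weighting
# schema that maximizes f, the total best-alignment gain over all pairs of
# same-"class" pages.
from itertools import combinations
from typing import Callable, List, Sequence, Tuple

Weights = Tuple[float, ...]

def total_gain(pages: Sequence, w: Weights,
               align: Callable[[object, object, Weights], float]) -> float:
    # f(w) = sum over page pairs (m, n) of D_mn, as in eq. (2)
    return sum(align(tm, tn, w) for tm, tn in combinations(pages, 2))

def fit_weights(pages: Sequence, candidates: List[Weights],
                align: Callable) -> Weights:
    # Best schema = argmax_w f(w), the criterion behind eq. (3) and Figure 1.
    return max(candidates, key=lambda w: total_gain(pages, w, align))

# Hypothetical candidates: decreasing weights (w_1 > w_2 > w_3), with a
# sum-to-one normalization standing in for the uniqueness constraint.
candidates = [(0.5, 0.3, 0.2), (0.6, 0.3, 0.1), (0.7, 0.2, 0.1)]
# best = fit_weights(page_trees, candidates, weighted_tree_align)
```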
{ "text": "Using the alignment algorithm, we can determine whether a node has been aligned. We then merge the two trees into one union tree and record the alignment information in each node. After processing several trees, we can use statistics such as the ratio of the number of times a node has been aligned to decide whether the node should be kept; the union tree becomes more compact after deleting the useless nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "The next step is finding the most probable content block (the data in the content block is what we want to extract). In general, there is one content block in a News page and several content blocks in Forum and Blog pages. Each candidate block is scored as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "$score = \\sum_i w_i f_i$, (4) where $f_i$ is a feature and $w_i$ is its weight. There are many heuristic features (Christian, 2009), such as the variance of the text length of the nodes, the ratio of the link length to the text length in a node, the ratio of fixed text length, and the number of stop words inside the DOM node.", "cite_spans": [ { "start": 87, "end": 105, "text": "(Christian , 2009)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" },
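As an illustration of equation (4), the sketch below scores candidate blocks with a linear combination of simple heuristic features. The feature implementations, the node interface (children carrying `.text`, an `anchor_texts` list) and the toy stop-word list are all assumptions; the paper obtains the weights with the transfer learning method described next.

```python
# Illustrative linear content-block scoring, eq. (4): score = sum_i w_i * f_i.
# The three features loosely mirror the heuristics cited above; the node
# interface and STOP_WORDS are hypothetical stand-ins.
import statistics
from typing import List

STOP_WORDS = {"the", "a", "of", "and", "to", "in", "is"}  # toy list

def block_features(node) -> List[float]:
    texts = [c.text for c in node.children if getattr(c, "text", "")]
    text = " ".join(texts)
    link_len = sum(len(t) for t in getattr(node, "anchor_texts", []))
    return [
        statistics.pvariance([len(t) for t in texts]) if len(texts) > 1 else 0.0,
        link_len / (len(text) + 1.0),                        # link/text ratio
        float(sum(w in STOP_WORDS for w in text.lower().split())),
    ]

def block_score(node, weights: List[float]) -> float:
    # eq. (4): weighted sum of the heuristic features
    return sum(w * f for w, f in zip(weights, block_features(node)))

def most_probable_content_block(candidates, weights):
    # pick the union-tree node with the highest score
    return max(candidates, key=lambda n: block_score(n, weights))
```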
{ "text": "The remaining issue is how to obtain the weight of each feature. Since there are three related page types (News, Forums and Blogs), we can treat this problem as transfer learning (Jing, 2009). We are interested in the weights for a target web page type T, and we have labeled instances for K auxiliary types $A_1, \\ldots, A_K$. Let $w_k$ denote the weight vector of the linear classifier for the auxiliary type $A_k$ and $w_T$ denote the weight vector for the target type T. We assume that these weight vectors are related through a common component $v$: $w_k = v + \\mu_k$ for $k = 1, 2, \\ldots, K$, and $w_T = v + \\mu_T$. (5) If we assume that only the weights of certain general features can be shared between different web page types, we can force certain dimensions of $v$ to be 0. We use a square matrix $F$ and set $Fv = 0$; the entries of $F$ are 0 except that $F_{i,i} = 1$ if we want to force $v_i = 0$.", "cite_spans": [ { "start": 172, "end": 184, "text": "(Jing, 2009)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "Now we can learn these weight vectors in a transfer learning framework. Let $x$ represent the feature vector of a candidate web page and $y \\in \\{+1, -1\\}$ a class label. Let $D_T = \\{(x_i^T, y_i^T)\\}_{i=1}^{N_T}$ denote the set of labeled instances for the target type T, and let $D_k = \\{(x_i^k, y_i^k)\\}_{i=1}^{N_k}$ denote the set of labeled instances for the auxiliary type $A_k$. We learn the optimal weight vectors $\\{\\mu_k\\}_{k=1}^{K}$, $\\mu_T$ and $v$ by optimizing the following objective function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "$(\\{\\mu_k\\}_{k=1}^{K}, \\mu_T, v) = \\arg\\min_{\\{\\mu_k\\}, \\mu_T, v} \\sum_{k=1}^{K} [L(D_k, v + \\mu_k) + \\lambda_k \\|\\mu_k\\|^2] + L(D_T, v + \\mu_T) + \\lambda_T \\|\\mu_T\\|^2 + \\lambda_0 \\|v\\|^2, \\; \\mathrm{s.t.} \\; Fv = 0$ (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "Once we have the most probable content block, we use the alignment algorithm to find the repeating patterns. We first split the union tree into several subtrees according to the content block nodes, and use the alignment algorithm to measure the similarity between subtrees; in this alignment, each node is again weighted according to its level and category. This step is especially useful for web pages from Forums and Blogs, whereas in News pages the content block itself is usually used as the extraction pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "Thus, by alignment, merging, content block finding and repeating pattern mining, we obtain a wrapper that extracts the data from web pages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning method for the most probable content block detecting", "sec_num": "3.2" }, { "text": "The need for wrapper maintenance arises because the template of a web source may undergo changes that invalidate the current wrappers. Figure 2 shows an example of template change detection. The x-axis shows the web pages of one website ordered by timestamp: letting time(i) denote the time of webpage i, time(i) < time(j) for i < j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrapper Maintenance", "sec_num": null } ], "ref_entries": { "TABREF1": { "type_str": "table", "num": null, "text": "", "content": "PA / SVM / Our method. News: precision 0.912 / 0.892 / 0.903, recall 0.865 / 0.933 / 0.956. Forum: precision 0.845 / 0.918 / 0.932, recall 0.891 / 0.946 / 0.965. Blog: precision 0.848 / 0.921 / 0.941, recall 0.903 / 0.958 / 0.969.", "html": null }, "TABREF2": { "type_str": "table", "num": null, "text": "Wrapper maintenance evaluation matrix", "content": "", "html": null } } } }