{ "paper_id": "C00-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:31:11.410689Z" }, "title": "Mining Tables from Large Scale HTML Texts", "authors": [ { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University Taipei", "location": { "country": "TAIWAN, R.O.C" } }, "email": "hh_chen@csie.ntu.edu.tw" }, { "first": "Shih-Chung", "middle": [], "last": "Tsai", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University Taipei", "location": { "country": "TAIWAN, R.O.C" } }, "email": "" }, { "first": "Jin-He", "middle": [], "last": "Tsai", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University Taipei", "location": { "country": "TAIWAN, R.O.C" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Table is a very common presentation scheme, but few papers touch on table extraction in text data mining. This paper focuses on mining tables from large-scale HTML texts. Table filtering, recognition, interpretation, and presentation are discussed. Heuristic rules and cell similarities are employed to identify tables. The F-measure of table recognition is 86.50%. We also propose an algorithm to capture attribute-value relationships among table cells. Finally, more structured data is extracted and presented.", "pdf_parse": { "paper_id": "C00-1025", "_pdf_hash": "", "abstract": [ { "text": "Table is a very common presentation scheme, but few papers touch on table extraction in text data mining. This paper focuses on mining tables from large-scale HTML texts. Table filtering, recognition, interpretation, and presentation are discussed. Heuristic rules and cell similarities are employed to identify tables. The F-measure of table recognition is 86.50%. We also propose an algorithm to capture attribute-value relationships among table cells. 
Finally, more structured data is extracted and presented.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Tables, which are simple and easy to use, are a very common presentation scheme for writers to describe schedules, organize statistical data, summarize experimental results, and so on, in texts of different domains. Because tables provide rich information, table acquisition is useful for many applications such as document understanding, question answering, text retrieval, etc. However, most previous approaches to text data mining focus on the textual parts, and only a few touch on the tabular ones (Appelt and Israel, 1997; Gaizauskas and Wilks, 1998; Hurst, 1999a) . Moreover, the papers on table extraction (Douglas, Hurst and Quinn, 1995; Douglas and Hurst 1996; Hurst and Douglas, 1997; Ng, Lim and Koo, 1999) target plain text.", "cite_spans": [ { "start": 495, "end": 520, "text": "(Appelt and Israel, 1997;", "ref_id": "BIBREF0" }, { "start": 521, "end": 548, "text": "Gaizauskas and Wilks, 1998;", "ref_id": "BIBREF4" }, { "start": 549, "end": 562, "text": "Hurst, 1999a)", "ref_id": "BIBREF7" }, { "start": 600, "end": 632, "text": "(Douglas, Hurst and Quinn, 1995;", "ref_id": "BIBREF2" }, { "start": 633, "end": 656, "text": "Douglas and Hurst 1996;", "ref_id": "BIBREF3" }, { "start": 657, "end": 681, "text": "Hurst and Douglas, 1997;", "ref_id": "BIBREF6" }, { "start": 682, "end": 704, "text": "Ng, Lim and Koo, 1999)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In plain text, writers often use special symbols, e.g., tabs, blanks, dashes, etc., to make tables. The following example depicts book titles, authors, and prices. When detecting whether there is a table in free text, we must disambiguate the uses of these special symbols; that is, a special symbol may be either a cell separator or part of a cell's content. 
Previous papers employ grammars (Green and Krishnamoorthy, 1995), string-based cohesion measures (Hurst and Douglas, 1997), and learning methods (Ng, Lim and Koo, 1999) to deal with table recognition.", "cite_spans": [ { "start": 382, "end": 414, "text": "(Green and Krishnamoorthy, 1995)", "ref_id": "BIBREF5" }, { "start": 448, "end": 473, "text": "(Hurst and Douglas, 1997)", "ref_id": "BIBREF6" }, { "start": 497, "end": 520, "text": "(Ng, Lim and Koo, 1999)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Because the table construction methods available in free text are simple, their expressive capability is limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In comparison, markup languages like HTML provide very flexible constructs for writers to design tables. This flexibility also means that table extraction from HTML texts is harder than from plain text. Because HTML texts are abundant on the web and are important sources of knowledge, table mining on HTML texts is indispensable. Hurst (1999b) made the first attempt to collect a corpus of HTML files, LaTeX files, and a small number of ASCII files for table extraction. This paper focuses on HTML texts. We will discuss not only how to recognize tables in HTML texts, but also how to identify the role of each cell (attribute and/or value), and how to utilize the extracted tables. An HTML table begins with an optional caption followed by one or more rows. Each row is formed by one or more cells, which are classified into header and data cells. Cells can be merged across rows and columns. 
The following tags are used:", "cite_spans": [ { "start": 357, "end": 370, "text": "Hurst (1999b)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 716, "end": 733, "text": "HTML table begins", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "(1)
<table> (2) <tr> (3) <td> (4) <th> (5) <caption>. A cell may play the role of attribute and/or value. Several cells may be concatenated to denote an attribute. For example, \"Adult-Price-Single Room-Economic Class\" means the adult price for economic class and single room. The relationships may be read column-wise or row-wise depending on the interpretation. For example, the relationship for \"Tour Code:DP9LAX01AB\" is read row-wise, while the prices for \"Economic Class\" are read column-wise. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tables in HTML", "sec_num": "1" }, { "text": "The flow of table mining is shown in Figure 1 . It is composed of five modules. The hypertext processing module analyses the HTML text and extracts the table tags. The table filtering module filters out impossible cases with heuristic rules. The remaining candidates are sent to the table recognition module for further analysis. The table interpretation module differentiates the roles of cells in the tables. The final module tackles how to present and employ the mining results. The first two modules are discussed in the following paragraphs, and the last three modules are dealt with in detail in the following sections.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Figure 1", "ref_id": null }, { "start": 157, "end": 172, "text": "Table filtering", "ref_id": null } ], "eq_spans": [], "section": "Flow of Table Mining", "sec_num": "2" }, { "text": "As specified above, table wrappers do not always introduce tables. Two filtering rules are employed to disambiguate their functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. Flow of Table Mining", "sec_num": null }, { "text": "(1) A table must contain at least two cells to represent an attribute and a value. In other words, a structure with only one cell is filtered out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. 
Flow of Table Mining", "sec_num": null }, { "text": "(2) If the content enclosed by table wrappers contains too many hyperlinks, forms, and figures, then it is not regarded as a table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. Flow of Table Mining", "sec_num": null }, { "text": "To evaluate the performance of table mining, we prepare the test data selected from the airline information in the travelling category of the Chinese Yahoo web site (http://www.yahoo.com.tw). Table 2 shows the statistics of our test data. Table 3 shows the results after we apply the filtering rules to the test data. The 5th row shows how many non-table candidates are filtered out by the proposed rules, and the 6th row shows the number of wrong filters. On average, the correct rate is 98.93%. In total, 423 of the 2,300 non-tables remain.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 188, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 228, "end": 235, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Figure 1. Flow of Table Mining", "sec_num": null }, { "text": "After the simple analyses specified in the previous section, there are still 423 non-tables passing the filtering criteria. Now we consider the content of the cells. A cell is much shorter than a sentence in plain text. In our study, the length of 43,591 cells (of 61,770 cells) is smaller than 10 characters. Because of the space limitation in a table, writers often use shorthand notations to describe their intention. 
For example, they may use a Chinese character (\"到\", dao4) to represent the two-character word \"到達\" (dao4da2, arrive), and a character (\"離\", li2) to denote the Chinese word \"離開\" (li2kai1, leave).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "They even employ special symbols to represent \"increase\" and \"decrease\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "Thus it is hard to determine whether a fragment of HTML text is a table on the basis of a single cell only. The context among cells is important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "Value cells under the same attribute name demonstrate similar concepts. We employ the following metrics to measure cell similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "(1) String similarity: we measure how many characters the neighboring cells have in common. If the number is above a threshold, we say the two cells are similar. (2) Named entity similarity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "This metric considers the semantics of cells. We adopt some of the named entity expressions defined in MUC (1998), such as date/time expressions and monetary and percentage expressions. 
A rule-based method similar to that of Chen, Ding, and Tsai (1998) is employed to recognize the named entities in cells. (3) Number category similarity: neighboring cells are regarded as similar if both of their contents are numbers. Tables 4-6 show the experimental results when the three metrics are applied incrementally.", "cite_spans": [ { "start": 214, "end": 242, "text": "(Chen, Ding, and Tsai, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 243, "end": 253, "text": "Tables 4-6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "Precision (P), recall (R), and the F-measure (F = 2PR/(P+R)) are adopted to measure the performance. Table 4 shows that string similarity alone cannot capture the similar concepts between neighboring cells very well: the F-measure is 55.50%. Table 5 tries to incorporate more semantic features, i.e., the categories of named entities. Unfortunately, the result does not meet our expectation; the performance increases only a little. The major reason is that the keywords (pm/am, $, %, etc.) for date/time expressions and monetary and percentage expressions are usually omitted in table descriptions. Table 6 shows that the F-measure achieves 86.50% when the number category is used. Compared with Tables 4 and 5, the performance is improved considerably.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 243, "end": 250, "text": "Table 5", "ref_id": null }, { "start": 594, "end": 601, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Table Recognition", "sec_num": "3" }, { "text": "As specified in Section 1, the attribute-value relationship may be interpreted column-wise or row-wise. If the table tags in question do not contain COLSPAN (ROWSPAN), the problem is easier: the first row and/or the first column consists of the attribute cells, and the others are value cells. Cell similarity guides us in how to read a table. 
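The cell-similarity tests of Section 3 can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the character-overlap threshold and the numeric pattern are our assumptions, and the named-entity test (which would need the rule-based recognizer) is left out.

```python
import re

# Matches simple numeric cell contents such as "35,450" or "2,510" (assumed pattern).
NUMBER = re.compile(r'^[\d,.]+$')

def string_similar(a, b, threshold=2):
    """String similarity: count the characters two neighboring cells
    have in common; at or above a threshold, the cells are similar."""
    return len(set(a) & set(b)) >= threshold

def number_similar(a, b):
    """Number category similarity: both cells contain numbers."""
    return bool(NUMBER.match(a.strip())) and bool(NUMBER.match(b.strip()))

def cells_similar(a, b):
    # A fuller version would also test named entity similarity
    # (date/time, monetary, and percentage expressions).
    return string_similar(a, b) or number_similar(a, b)

print(cells_similar("35,450", "32,500"))            # True: both numeric
print(cells_similar("Single Room", "Double Room"))  # True: shared characters
```

Value cells under the same attribute, such as the prices above, pass at least one of these tests, which is the cue the recognizer exploits.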
We define row (or column) similarity in terms of cell similarity as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "Two rows (or columns) are similar if most of the corresponding cells between the two rows (or columns) are similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "A basic table interpretation algorithm is shown below. Assume there are n rows and m columns, and let c_ij denote the cell in the i-th row and j-th column.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "(1) If there is only one row or column, then the problem is trivial: we just read it row-wise or column-wise. (2) Otherwise, we start the similarity checking from the bottom-right position, i.e., c_nm; that is, the n-th row and the m-th column are regarded as the bases for comparison. (3) For each row i (1 \u2264 i < n), compute the similarity of rows i and n. (4) Count how many pairs of rows are similar. (5) If the count is larger than (n-2)/2, and the similarity of row 1 and row n is smaller than that of the other row pairs, then we say this table can be read column-wise; in other words, the first row contains attribute cells. (6) The interpretation in row-wise fashion is done in a similar way: we start checking from the m-th column, compare it with each column j (1 \u2264 j < m), and count how many pairs of columns are similar. (7) If neither \"row-wise\" nor \"column-wise\" can be assigned, then the default is \"row-wise\". Table 6 is an example: the first column contains attribute cells, the other cells are the statistics of an experimental result, and we read it row-wise. If COLSPAN (ROWSPAN) is used, the table interpretation is more difficult. Table 1 is a typical example. 
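Steps (1)-(7) above can be sketched as follows. This is a simplified sketch under two assumptions: "most of the corresponding cells" is approximated as at least half, and the step-(5) requirement that row 1 be the least similar row is omitted; cell_similar stands for any of the Section 3 similarity tests.

```python
def similar_rows(r1, r2, cell_similar):
    """Two rows (or columns) are similar if most of the corresponding
    cells are similar; "most" is approximated here as at least half."""
    hits = sum(1 for a, b in zip(r1, r2) if cell_similar(a, b))
    return hits >= len(r1) / 2

def reading_direction(table, cell_similar):
    """Decide whether a span-free table is read column-wise (first row
    holds the attribute cells) or row-wise (first column holds them)."""
    n, m = len(table), len(table[0])
    if n == 1:                        # step (1): a single row
        return "row-wise"
    if m == 1:                        # step (1): a single column
        return "column-wise"
    # steps (2)-(4): compare every other row against the base row n
    row_votes = sum(1 for i in range(n - 1)
                    if similar_rows(table[i], table[-1], cell_similar))
    cols = list(zip(*table))          # transpose to reuse the row test
    col_votes = sum(1 for j in range(m - 1)
                    if similar_rows(cols[j], cols[-1], cell_similar))
    if row_votes > (n - 2) / 2 and row_votes >= col_votes:
        return "column-wise"          # step (5): first row = attributes
    if col_votes > (m - 2) / 2:
        return "row-wise"             # step (6)
    return "row-wise"                 # step (7): default

numeric = lambda a, b: a.replace(",", "").isdigit() and b.replace(",", "").isdigit()
table = [["Name", "Price"], ["Single Room", "35,450"], ["Double Room", "32,500"]]
print(reading_direction(table, numeric))  # column-wise
```

In the usage example the two value rows agree under the numeric test while the header row does not, so the first row is taken as the attribute row.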
Five COLSPANs and two ROWSPANs are used to create a better layout.", "cite_spans": [], "ref_spans": [ { "start": 946, "end": 953, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 1094, "end": 1114, "text": "If COLSPAN (ROWSPAN)", "ref_id": null }, { "start": 1168, "end": 1175, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "The attributes are formed hierarchically. The following is an example of such a hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "Adult - Price - {Double Room, Single Room, Extra Bed}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "Here, we extend the above algorithm to deal with table interpretation with COLSPAN (ROWSPAN). First, we drop COLSPAN and ROWSPAN by duplicating copies of the spanning cells in their proper positions. For example, \"Tour Code\" in Table 1 has COLSPAN=3, so we duplicate \"Tour Code\" at columns 2 and 3. Table 7 shows the final reformulation of the example in Table 1 . We then employ the above algorithm, with a slight modification, to find the reading direction.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 300, "end": 307, "text": "Table 7", "ref_id": "TABREF8" }, { "start": 356, "end": 363, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "The modification is that spanning cells are boundaries for the similarity checking. Take Table 7 as an example. We start the similarity checking from the bottom-right cell, i.e., 360, and consider each row and column within the boundaries. The cell \"1999.04.01-2000.03.31\" is a spanning cell, so the 2nd row is a boundary. \"Price\" is a spanning cell, so the 2nd column is a boundary. 
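The span-dropping step described above, duplicating each spanning cell into every grid position it covers (as in the reformulation of Table 1 into Table 7), can be sketched as follows; the (text, colspan, rowspan) cell representation is our assumption.

```python
def expand_spans(rows):
    """Drop COLSPAN/ROWSPAN by duplicating each spanning cell into
    every position it covers. rows is a list of rows, where each
    cell is a (text, colspan, rowspan) triple."""
    grid = {}
    for r, row in enumerate(rows):
        c = 0
        for text, colspan, rowspan in row:
            while (r, c) in grid:      # skip slots filled by an earlier ROWSPAN
                c += 1
            for dr in range(rowspan):
                for dc in range(colspan):
                    grid[(r + dr, c + dc)] = text
            c += colspan
    n = max(r for r, _ in grid) + 1
    m = max(c for _, c in grid) + 1
    return [[grid.get((r, c), "") for c in range(m)] for r in range(n)]

# "Tour Code" has COLSPAN=3, so it is duplicated into the two columns
# that follow it (the paper's columns 2 and 3):
print(expand_spans([[("Tour Code", 3, 1), ("DP9LAX01AB", 2, 1)]])[0])
# ['Tour Code', 'Tour Code', 'Tour Code', 'DP9LAX01AB', 'DP9LAX01AB']
```

The resulting dense grid is what the boundary-aware similarity checking then operates on.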
In this case, we can interpret the table tags both row-wise and column-wise. After that, a second cycle begins. The starting points are moved to the new bottom-right positions, i.e., (3, 5) and (9, 3). In this cycle, the boundaries are reset. The cells \"DP9LAX01AB\" and \"Adult\" (\"Child\") are spanning cells, so the 1st row and 1st column are the new boundaries. At this time, \"row-wise\" is selected.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 93, "text": "Take Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "In the final cycle, the starting positions are (2, 5) and (9, 2). The boundaries are the 0th row and the 0th column. These two sub-tables are read row-wise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Interpretation", "sec_num": "4" }, { "text": "The results of table interpretation are a sequence of attribute-value pairs. Consider the tour example: Table 8 shows the extracted pairs.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Presentation of Table Extraction", "sec_num": "5" }, { "text": "We can find the following two phenomena:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Presentation of Table Extraction", "sec_num": "5" }, { "text": "(1) A cell may be a value of more than one attribute. (2) A cell may act as an attribute in one case and as a value in another. Using phenomenon (1), we can concatenate two attributes together. For example, \"35,450\" is a value of both \"Single Room\" and \"Economic Class\", thus \"Single Room-Economic Class\" is formed. 
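Phenomenon (1) amounts to a lookup over the extracted pairs; a minimal sketch, assuming the pairs are represented as (attribute, value) tuples (the representation and the helper name are ours):

```python
def concat_attributes(value, pairs):
    """Phenomenon (1): a cell that is a value of several attributes
    gets those attributes concatenated with '-'.
    pairs is the sequence of extracted (attribute, value) tuples."""
    return "-".join(attr for attr, val in pairs if val == value)

pairs = [("Single Room", "35,450"), ("Economic Class", "35,450")]
print(concat_attributes("35,450", pairs))  # Single Room-Economic Class
```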
Besides that, we can find an attribute hierarchy by using phenomenon (2). For example, \"Single Room\" is a value of \"Price\", and \"Price\" is a value of \"Adult\", so we can create the hierarchy \"Adult-Price-Single Room\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Presentation of Table Extraction", "sec_num": "5" }, { "text": "Merging the results of these two phenomena, we can create the interpretations listed in Section 1. For example, from the two facts that \"35,450\" is a value of \"Single Room-Economic Class\" and that \"Adult-Price-Single Room\" is a hierarchical attribute, we can infer that 35,450 is a value of \"Adult-Price-Single Room-Economic Class\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Presentation of Table Extraction", "sec_num": "5" }, { "text": "In this way, we can transform unstructured data into a more structured representation for further applications. Consider an application to question answering. Given a query like \"how much is the price of a double room for an adult\", the keywords are \"price\", \"double room\", and \"adult\". After consulting the database learned from HTML texts, two values, 32,500 and 1,430, with the attributes economic class and extension respectively, are reported. With this table mining technology, the knowledge that can be employed goes beyond the text level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Presentation of Table Extraction", "sec_num": "5" }, { "text": "In this paper, we propose a systematic way to mine tables from HTML texts. Table filtering, table recognition, table interpretation and application of table extraction are discussed. There is still room to improve the performance. Cues from the context of tables and from the traversal paths of HTML pages may also be useful. In the text surrounding tables, writers usually explain the meaning of the tables, for example, which row (or column) denotes which kind of information. 
From these descriptions, we can know which cells may be attributes, and along the same row (column) we can find their value cells. Besides that, the surrounding text can also reveal the semantics of the cells. For example, a table cell may be a monetary expression that denotes the price of a tour package. In this way, even if a money marker is not present in the table cell, we can still know it is a monetary expression.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 169, "text": "Table filtering, table recognition, table interpretation and application of table extraction", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "Note that HTML texts can be chained through hyperlinks like \"previous\" and \"next\", so the context can be expanded further. Their effects on table mining will be studied in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "Besides these possible extensions, another research line that can be considered is to set up a corpus for the evaluation of attribute-value relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "Because the role of a cell (attribute or value) is relative to other cells, developing answer keys is indispensable for evaluating table interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "This example is selected from http://www.chinaairlines.com/cdpks/los7-4.htm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tutorial Notes on Building Information Extraction Systems", "authors": [ { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "D", "middle": [], "last": "Israel", "suffix": "" } ], "year": 1997, "venue": "Tutorial on Fifth Conference on Applied Natural Language 
Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Appelt, D. and Israel, D. (1997) \"Tutorial Notes on Building Information Extraction Systems,\" Tutorial on Fifth Conference on Applied Natural Language Processing, 1997.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Named Entity Extraction for Information Retrieval", "authors": [ { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [ "W" ], "last": "Ding", "suffix": "" }, { "first": "S", "middle": [ "C" ], "last": "Tsai", "suffix": "" } ], "year": 1998, "venue": "Computer Processing of Oriental Languages, Special Issue on Information Retrieval on Oriental Languages", "volume": "12", "issue": "1", "pages": "75--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, H.H.; Ding Y.W.; and Tsai, S.C. (1998) \"Named Entity Extraction for Information Retrieval,\" Computer Processing of Oriental Languages, Special Issue on Information Retrieval on Oriental Languages, Vol. 12, No. 1, 1998, pp.75-85.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using Natural Language Processing for Identifying and Interpreting Tables in Plain Text", "authors": [ { "first": "S", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "M", "middle": [], "last": "Hurst", "suffix": "" }, { "first": "D", "middle": [], "last": "Quinn", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Fourth Annual Symposium on Document Analysis and Information Retrieval", "volume": "", "issue": "", "pages": "535--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas, S.; Hurst, M. and Quinn, D. (1995) \"Using Natural Language Processing for Identifying and Interpreting Tables in Plain Text,\" Proceedings of Fourth Annual Symposium on Document Analysis and Information Retrieval, 1995, pp. 
535-545.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Layout and Language: Lists and Tables in Technical Documents", "authors": [ { "first": "S", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "M", "middle": [], "last": "Hurst", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ACL SIGPARSE Workshop on Punctuation in Computational Linguistics", "volume": "", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas, S. and Hurst, M. (1996) \"Layout and Language: Lists and Tables in Technical Documents,\" Proceedings of ACL SIGPARSE Workshop on Punctuation in Computational Linguistics, 1996, pp. 19-24.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Information Extraction: Beyond Document Retrieval", "authors": [ { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "3", "issue": "", "pages": "17--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gaizauskas, R. and Wilks, Y. (1998) \" Information Extraction: Beyond Document Retrieval,\" Computational Linguistics and Chinese Language Processing, Vol. 3, No. 2, 1998, pp. 17-59.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Recognition of Tables Using Grammars", "authors": [ { "first": "E", "middle": [], "last": "Green", "suffix": "" }, { "first": "M", "middle": [], "last": "Krishnamoorthy", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Fourth Annual Symposium on Document Analysis and Information Retrieval", "volume": "", "issue": "", "pages": "261--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Green, E. and Krishnamoorthy, M. (1995) \"Recognition of Tables Using Grammars,\" Proceedings of the Fourth Annual Symposium on Document Analysis and Information Retrieval, 1995, pp. 
261-278.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Layout and Language: Preliminary Experiments in Assigning Logical Structure to Table Cells", "authors": [ { "first": "M", "middle": [], "last": "Hurst", "suffix": "" }, { "first": "S", "middle": [], "last": "Douglas", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "217--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hurst, M. and Douglas, S. (1997) \"Layout and Language: Preliminary Experiments in Assigning Logical Structure to Table Cells,\" Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997, pp. 217-220.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Layout and Language: Beyond Simple Text for Information Interaction -Modeling the Table", "authors": [ { "first": "M", "middle": [], "last": "Hurst", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 2nd International Conference on Multimodal Interfaces", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hurst, M. (1999a) \"Layout and Language: Beyond Simple Text for Information Interaction -Modeling the Table,\" Proceedings of the 2nd International Conference on Multimodal Interfaces, Hong Kong, January 1999.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Layout and Language: A Corpus of Documents Containing Tables", "authors": [ { "first": "M", "middle": [], "last": "Hurst", "suffix": "" } ], "year": 1999, "venue": "Proceedings of AAAI Fall Symposium: Using Layout for the Generation, Understanding and Retrieval of Documents", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hurst, M. 
(1999b) \"Layout and Language: A Corpus of Documents Containing Tables,\" Proceedings of AAAI Fall Symposium: Using Layout for the Generation, Understanding and Retrieval of Documents, 1999.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Workbench for Acquisition of Ontological Knowledge from Natural Text", "authors": [ { "first": "A", "middle": [], "last": "Mikheev", "suffix": "" }, { "first": "S", "middle": [], "last": "Finch", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 7th Conference of the European Chapter", "volume": "", "issue": "", "pages": "194--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikheev, A. and Finch, S. (1995) \"A Workbench for Acquisition of Ontological Knowledge from Natural Text,\" Proceedings of the 7th Conference of the European Chapter for Computational Linguistics, 1995, pp. 194-201.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Proceedings of 7 th Message Understanding Conference", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MUC (1998) Proceedings of 7 th Message Understanding Conference, http://www.muc.saic. com /proceedings/proceedings_index.html.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning to Recognize Tables in Free Text", "authors": [ { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Lim", "suffix": "" }, { "first": "J", "middle": [ "L T" ], "last": "Koo", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "443--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, H.T.; Lim, C.Y. and Koo, J.L.T. (1999) \"Learning to Recognize Tables in Free Text,\" Proceedings of the 37th Annual Meeting of ACL, 1999, pp. 
443-450.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Chinese character is represented by two bytes. That is, a cell contains 5 Chinese characters on the average." }, "TABREF1": { "num": null, "type_str": "table", "html": null, "text": "An Example for a Tour Package 1", "content": "
Tour Code | DP9LAX01AB
Valid | 1999.04.01-2000.03.31
Class/Extension | Economic Class | Extension
Adult | PRICE | Single Room | 35,450 | 2,510
Adult | PRICE | Double Room | 32,500 | 1,430
Adult | PRICE | Extra Bed | 30,550 | 720
Child | PRICE | Occupation | 25,800 | 1,430
Child | PRICE | Extra Bed | 23,850 | 720
Child | PRICE | No Occupation | 22,900 | 360
They denote main wrapper, table row, table data,
table header, and caption for a table. Table 1
shows an example that lists the prices for a tour.
The interpretation of this table in terms of
attribute-value relationships is shown as follows:
Attribute | Value
Tour Code | DP9LAX01AB
Valid | 1999.04.01-2000.03.31
Adult-Price-Single Room-Economic Class | 35,450
Adult-Price-Double Room-Economic Class | 32,500
Adult-Price-Extra Bed-Economic Class | 30,550
Child-Price-Occupation-Economic Class | 25,800
Child-Price-Extra Bed-Economic Class | 23,850
Child-Price-No Occupation-Economic Class | 22,900
Adult-Price-Single Room-Extension | 2,510
Adult-Price-Double Room-Extension | 1,430
Adult-Price-Extra Bed-Extension | 720
Child-Price-Occupation-Extension | 1,430
Child-Price-Extra Bed-Extension | 720
Child-Price-No Occupation-Extension | 360
" }, "TABREF2": { "num": null, "type_str": "table", "html": null, "text": "However, a table does not always exist when table wrapper appears in HTML text. This is because writers often employ table tags to represent form or menu. That allows users to input queries or make selections.Another point that should be mentioned is: table designers usually employ COLSPAN (ROWSPAN) to specify how many columns (rows) a table cell should span. In this example, the COLSPAN of cell \"Tour Code\" is 3. That means \"Tour Code\" spans 3 columns. Similarly, the ROWSPAN of cell \"Adult\" is 3.", "content": "
<td COLSPAN=\"3\">Class/Extension</td>
<td>Economic Class</td>
<td>Extension</td>
</tr>
<tr>
<td ROWSPAN=\"3\">Adult</td>
<td ROWSPAN=\"6\"><p>P</p>
<p>R</p>
<p>I</p>
<p>C</p>
<p>E</p></td>
<td>Single Room</td>
<td>35,450</td>
<td>2,510</td>
</tr>
<tr>
<td>Double Room</td>
<td>32,500</td>
<td>1,430</td>
</tr>
<tr>
<td>Extra Bed</td>
<td>30,550</td>
<td>720</td>
</tr>
<tr>
<td>Child</td>
<td>Occupation</td>
<td>25,800</td>
<td>1,430</td>
</tr>
<tr>
<td>Extra Bed</td>
<td>23,850</td>
<td>720</td>
</tr>
<tr>
<td>No Occupation</td>
<td>22,900</td>
<td>360</td>
</tr>
</table>
The table wrapper (<table> \u2026 </table>) is
a useful cue for table recognition. The HTML
text for the above example is shown as follows.
The table tags are enclosed by a table wrapper.
<table border>
<tr>
<td COLSPAN="3">Tour Code</td>
<td COLSPAN="2">DP9LAX01AB</td>
</tr>
<tr>
<td COLSPAN="3">Valid</td>
<td COLSPAN="2">1999.04.01-2000.03.31</td>
</tr>
<tr>

This cell spans 3 rows. COLSPAN and ROWSPAN provide flexibility for users to design any kind of table, but they make automatic table interpretation more challenging.
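Before attribute-value interpretation, spanning cells have to be expanded into a normalized grid in which every logical cell position holds the spanning cell's text. The following is a minimal sketch of such normalization using Python's standard `html.parser` (an illustration under these assumptions, not the authors' implementation):

```python
from html.parser import HTMLParser

class TableGrid(HTMLParser):
    """Expand COLSPAN/ROWSPAN so every grid position holds its cell text."""
    def __init__(self):
        super().__init__()
        self.grid = []     # normalized rows of cell strings
        self.pending = {}  # column index -> [rows still to fill, text]
        self.row = None
        self.cell = None   # [colspan, rowspan, accumulated text]

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "tr":
            self.row = []
            self._fill_pending()
        elif tag in ("td", "th"):
            self.cell = [int(a.get("colspan", 1)), int(a.get("rowspan", 1)), ""]

    def handle_data(self, data):
        if self.cell is not None:
            self.cell[2] += data.strip()

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self.cell is not None:
            cols, rows, text = self.cell
            for _ in range(cols):
                col = len(self.row)
                self.row.append(text)
                if rows > 1:  # remember text for the rows this cell spans
                    self.pending[col] = [rows - 1, text]
                self._fill_pending()
            self.cell = None
        elif tag == "tr":
            self.grid.append(self.row)
            self.row = None

    def _fill_pending(self):
        # Occupy columns still covered by ROWSPAN cells from earlier rows.
        while len(self.row) in self.pending:
            col = len(self.row)
            self.pending[col][0] -= 1
            self.row.append(self.pending[col][1])
            if self.pending[col][0] == 0:
                del self.pending[col]
```

For example, feeding the fragment `<tr><td ROWSPAN="2">Adult</td><td>Single Room</td></tr><tr><td>Double Room</td></tr>` yields two rows that both start with `Adult`.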
" }, "TABREF3": { "num": null, "type_str": "table", "html": null, "text": "Statistics of Test Data", "content": "
                      China     Eva       Mandarin  Singapore  Fareast   Sum
                      Airline   Airline   Airline   Airline    Airline
Number of Pages       694       366       142       110        60        1372
# of Wrappers         2075      568       184       163        228       3218 (2.35)
Number of Tables      751       98        23        40         6         918 (0.67)
" }, "TABREF4": { "num": null, "type_str": "table", "html": null, "text": "Performance of Filtering Rules These four rows list the names of airlines, total number of web pages, total number of table wrappers, and total number of tables, respectively. On the average, there are 2.35 table wrappers, and 0.67 tables for each web page. The statistics shows that table tags are used quite often in HTML text, and only 28.53% are actual tables.", "content": "
                      China     Eva       Mandarin  Singapore  Fareast   Sum
                      Airline   Airline   Airline   Airline    Airline
# of Wrappers         2075      568       184       163        228       3218
Number of Tables      751       98        23        40         6         918
Number of Non-Tables  1324      470       161       123        222       2300
Total Filtered        973       455       158       78         213       1877
Wrongly Filtered      15        0         0         3          2         20
Correct Rate          98.46%    100%      100%      96.15%     99.06%    98.93%
" }, "TABREF5": { "num": null, "type_str": "table", "html": null, "text": "If the percentage is above a threshold, the table tags are interpreted as a table. The data after table filtering (Section 2) is used to evaluate the strategies in table recognition.", "content": "
is employed to tell if a cell is a specific
named entity. Neighboring cells that belong
to the same named-entity category are similar.
(3) Number category similarity
Digit characters (0-9) appear very often.
If the number of digit characters in a cell
exceeds a threshold, the cell is said to
belong to the number category. Neighboring
cells in the number category are similar.
We count how many neighboring cells are
similar.
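The number-category test and neighbor counting can be sketched as follows (the threshold value is an assumption; the paper does not fix it at this point):

```python
def is_number_category(cell, threshold=0.5):
    # A cell belongs to the number category when the proportion of
    # digit characters (0-9) exceeds a threshold (value assumed here).
    return bool(cell) and sum(ch.isdigit() for ch in cell) / len(cell) > threshold

def count_similar_neighbors(column):
    # Count adjacent pairs of cells that are both in the number category.
    return sum(is_number_category(a) and is_number_category(b)
               for a, b in zip(column, column[1:]))
```

For a price column such as `["35,450", "32,500", "Extra Bed", "30,550"]`, only the first pair of neighbors is similar under this test.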
" }, "TABREF6": { "num": null, "type_str": "table", "html": null, "text": "String Similarity", "content": "
                  China     Eva       Mandarin  Singapore  Fareast   Sum
                  Airline   Airline   Airline   Airline    Airline
Number of Tables  751       98        23        40         6         918
Tables Proposed   150       41        7         17         5         220
Correct           134       39        7         14         3         197
Precision Rate    89.33%    95.12%    100%      82.35%     60%       89.55%
Recall Rate       17.84%    39.80%    30.43%    35.00%     50%       21.46%
F-measure         53.57%    67.46%    65.22%    58.68%     55%       55.50%
Table 5. String or Named Entity Similarity
                  China     Eva       Mandarin  Singapore  Fareast   Sum
                  Airline   Airline   Airline   Airline    Airline
Number of Tables  751       98        23        40         6         918
Tables Proposed   151       42        7         17         5         222
Correct           135       40        7         14         3         199
Precision Rate    89.40%    95.24%    100%      82.35%     60%       89.64%
Recall Rate       17.98%    40.82%    30.43%    35.00%     50%       21.68%
F-measure         53.69%    68.03%    65.22%    58.68%     55%       55.66%
" }, "TABREF7": { "num": null, "type_str": "table", "html": null, "text": "String, Named Entity, or Number Category Similarity", "content": "
                  China     Eva       Mandarin  Singapore  Fareast   Sum
                  Airline   Airline   Airline   Airline    Airline
Number of Tables  751       98        23        40         6         918
Tables Proposed   668       60        16        41         6         791
Correct           627       58        14        32         4         735
Precision Rate    93.86%    96.67%    87.50%    78.05%     66.67%    92.92%
Recall Rate       83.49%    59.18%    60.87%    80.00%     66.67%    80.07%
F-measure         88.88%    77.93%    74.19%    79.03%     66.67%    86.50%

Precision Rate P = NumberOfCorrectTablesSystemGenerated / TotalNumberOfTablesSystemGenerated
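Using the Sum column of Table 6 (735 correct tables out of 791 proposed, against 918 actual tables), the rates can be reproduced as below. Note that the F-measure values in these tables appear to correspond to the arithmetic mean (P+R)/2 rather than the harmonic mean (a sketch for checking the arithmetic, not the authors' code):

```python
correct, proposed, actual = 735, 791, 918

precision = correct / proposed * 100   # 92.92%
recall = correct / actual * 100        # 80.07%
f_measure = (precision + recall) / 2   # 86.49 here; 86.50 when computed
                                       # from the rounded P and R values
```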
" }, "TABREF8": { "num": null, "type_str": "table", "html": null, "text": "Reformulation of Example inTable 1", "content": "
Tour Code          Tour Code          Tour Code          DP9LAX01AB               DP9LAX01AB
Valid              Valid              Valid              1999.04.01-2000.03.31    1999.04.01-2000.03.31
Class/Extension    Class/Extension    Class/Extension    Economic Class           Extension
Adult              PRICE              Single Room        35,450                   2,510
Adult              PRICE              Double Room        32,500                   1,430
Adult              PRICE              Extra Bed          30,550                     720
Child              PRICE              Occupation         25,800                   1,430
Child              PRICE              Extra Bed          23,850                     720
Child              PRICE              No Occupation      22,900                     360
" }, "TABREF9": { "num": null, "type_str": "table", "html": null, "text": "", "content": "
             Attribute          Value
1st cycle    Single Room        35,450
             Single Room        2,510
             Double Room        32,500
             Double Room        1,430
             …                  …
             No Occupation      22,900
             No Occupation      360
             Economic Class     35,450
             Economic Class     32,500
             …                  …
             Economic Class     22,900
             Extension          2,510
             Extension          1,430
             …                  …
             Extension          360
2nd cycle    Class/Extension    Economic Class
             Class/Extension    Extension
             Valid              1999.04.01-2000.03.31
             Price              Single Room
             Price              Double Room
             …                  …
             Price              No Occupation
3rd cycle    Tour Code          DP9LAX01AB
             Valid              1999.04.01-2000.03.31
             Adult              Price
             Child              Price
" }, "TABREF10": { "num": null, "type_str": "table", "html": null, "text": "are discussed. The cues from HTML tags and information in table cells are employed to recognize and interpret tables. The F-measure for table recognition is 86.50%.", "content": "" } } } }