{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:24:04.136782Z" }, "title": "ISPRAS@FinTOC-2021 Shared Task: Two-stage TOC generation model", "authors": [ { "first": "Ilya", "middle": [], "last": "Kozlov", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ivannikov Institute for System Programming of the RAS 25", "location": { "addrLine": "Alexander Solzhenitsyn Str", "postCode": "109004", "settlement": "Moscow", "country": "Russia" } }, "email": "kozlov-ilya@ispras.ru" }, { "first": "Oksana", "middle": [], "last": "Belyaeva", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ivannikov Institute for System Programming of the RAS 25", "location": { "addrLine": "Alexander Solzhenitsyn Str", "postCode": "109004", "settlement": "Moscow", "country": "Russia" } }, "email": "belyaeva@ispras.ru" }, { "first": "Anastasiya", "middle": [], "last": "Bogatenkova", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ivannikov Institute for System Programming of the RAS 25", "location": { "addrLine": "Alexander Solzhenitsyn Str", "postCode": "109004", "settlement": "Moscow", "country": "Russia" } }, "email": "" }, { "first": "Andrew", "middle": [], "last": "Perminov", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ivannikov Institute for System Programming of the RAS 25", "location": { "addrLine": "Alexander Solzhenitsyn Str", "postCode": "109004", "settlement": "Moscow", "country": "Russia" } }, "email": "perminov@ispras.ru" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a two-stage approach for TOC generation from financial documents. This work is connected with our participation in the FinTOC-2021 Shared Task: \"Financial Document Structure Extraction\". The competition contains two subtasks: title detection and TOC generation. 
Our model consists of two classifiers: the first binary classifier separates title lines from non-title lines, and the second one determines the title level. In the title detection task, we obtained F1 scores of 0.813 and 0.787; in the TOC generation task, we obtained 37.9 and 42.1 for the harmonic mean of the Inex F1 score and Inex level accuracy for English and French documents, respectively. With these results, our approach took third place among all submissions. As a team, we took second place in the competition in all categories.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We propose a two-stage approach for TOC generation from financial documents. This work is connected with our participation in the FinTOC-2021 Shared Task: \"Financial Document Structure Extraction\". The competition contains two subtasks: title detection and TOC generation. Our model consists of two classifiers: the first binary classifier separates title lines from non-title lines, and the second one determines the title level. In the title detection task, we obtained F1 scores of 0.813 and 0.787; in the TOC generation task, we obtained 37.9 and 42.1 for the harmonic mean of the Inex F1 score and Inex level accuracy for English and French documents, respectively. With these results, our approach took third place among all submissions. As a team, we took second place in the competition in all categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Currently, electronic documents have become widespread. A large number of documents are presented in PDF format, but only a few of them contain an automatic table of contents (TOC). However, there is often a need for a quick search of information, which can be a problem for large documents. One example is financial documents, which can be over 100 pages long. Financial documents contain a lot of important information and can have different appearances and structures. 
The task of automatically extracting a table of contents from financial documents is relevant, and its solution is not obvious. FinTOC-2021 (El Maarouf et al., 2021) offers to solve the problem of extracting structure from financial documents in two languages: English and French. The results of solving two subtasks are evaluated:", "cite_spans": [ { "start": 612, "end": 623, "text": "FinTOC-2021", "ref_id": null }, { "start": 624, "end": 648, "text": "(El Maarouf et al., 2021", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Title detection (TD) -selecting from all lines of the document only those that should be included in the table of contents;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Table of contents (TOC) generation -identifying the nesting depths of the selected titles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The competition is held for the third time. Similar tasks were solved at FinTOC-2019 (Juge et al., 2019) and FinTOC-2020 (Bentabet et al., 2020); in 2020, documents in French were added.", "cite_spans": [ { "start": 85, "end": 104, "text": "(Juge et al., 2019)", "ref_id": "BIBREF5" }, { "start": 109, "end": 120, "text": "FinTOC-2020", "ref_id": null }, { "start": 121, "end": 143, "text": "(Bentabet et al., 2020", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In FinTOC-2019, the best solution (Tian and Peng, 2019) for title detection is based on an LSTM with data augmentation and attention. 
The best solution (Giguet and Lejeune, 2019) for the TOC generation task relies on the decision tree classifier DT 10 and TOC page detection.", "cite_spans": [ { "start": 34, "end": 55, "text": "(Tian and Peng, 2019)", "ref_id": "BIBREF9" }, { "start": 148, "end": 174, "text": "(Giguet and Lejeune, 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In FinTOC-2020, the best solution (Hercig and Kral, 2020) for title detection (French) was obtained with a maximum entropy classifier. For title detection in English documents (Premi et al., 2020), an LSTM, a CharCNN, and a fully connected network with some handcrafted features were used. The best approach for TOC generation (Kosmajac et al., 2020) consisted in extracting linguistic and structural information and using the Random Forest classifier.", "cite_spans": [ { "start": 34, "end": 57, "text": "(Hercig and Kral, 2020)", "ref_id": "BIBREF4" }, { "start": 178, "end": 198, "text": "(Premi et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe our solution to the shared task. This work is a continuation of (Bogatenkova et al., 2020). We make a list of features for each document line and use two classifiers for the sequential solution of the title detection and TOC generation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. We describe in detail the given dataset for the competition in Section 2. We present our approach to solving the task in Section 3. Results and a discussion are given in Sections 4 and 5, respectively. Section 6 contains a conclusion about our work. The documents are very heterogeneous: both groups contain documents with and without a TOC. Moreover, not only the existing TOC should be included in the result, but also smaller titles of each document. 
The main information about the training dataset is given in Table 1 . The mean number of pages is 64 and 30 for English and French documents, respectively. However, the size of documents varies greatly, from 3 to 285 pages. The dataset contains one-column, two-column, and even three-column documents. At the same time, a different number of columns may occur within one document. Moreover, documents differ in their appearance (e. g. the appearance of titles or of an existing TOC) and logical structure. So, there is no way to extract a complete TOC using regular expressions, and we need to use machine learning techniques.", "cite_spans": [], "ref_spans": [ { "start": 526, "end": 533, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is a set of annotations for each document. Annotations include only titles, with the text and the depth for each title. The number of titles and the maximum title depth also differ across documents. The number of titles varies from 20 to 1004 and from 33 to 527 for documents in English and French, respectively. The maximum title depth ranges from 3 to 9 for English documents, while it ranges from 4 to 6 for French documents. Thus, a sample of very different documents is presented at the competition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The test dataset is similar to the training dataset. It contains 10 documents in each of the English and French sets. 
The documents are also very diverse; none of the French documents contains a table of contents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test dataset", "sec_num": "2.2" }, { "text": "More statistics are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Test dataset", "sec_num": "2.2" }, { "text": "We proposed a two-stage method (see Figure 1) for solving both tasks: TD and TOC generation. Each stage includes classification using the XGBoost classifier. In the first stage, the binary classifier classifies each line as title or non-title. Thus, the first stage is a process of filtering all lines of the document. In the second stage of our method, hierarchy levels are found for each title filtered in the first stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "Text and metadata extraction. Since the input PDF documents have a textual layer, we extracted the text, bold and italic fonts, and the colors of the text with the help of PDFMiner (Yusuke Shinyama, 2019) . PDFMiner has different layout reading modes. To read the entire document, we have chosen the universal layout mode for multi-column documents with the parameters LAParams(line_margin=1.5, line_overlap=0.5, boxes_flow=0.5, word_margin=0.1, detect_vertical=False) . Thus, the list of lines with text and metadata is extracted from the input documents. To obtain lines with labels, we matched the provided labelled titles and the extracted lines using the Levenshtein distance with a 0.8 threshold.", "cite_spans": [ { "start": 176, "end": 191, "text": "Shinyama, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "As preprocessing, we remove footers and headers from the document using the method of (Lin, 2003) . 
It helps to improve the quality of the binary classifier and the TOC extraction module. The problem with headers and footers is that they are similar to titles and can be predicted as elements of the TOC.", "cite_spans": [ { "start": 81, "end": 92, "text": "(Lin, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "Existing TOC extraction. As additional information, we separately extract a table of contents (TOC) for each document. We look for the keywords of the TOC heading in the document (for example, \"Table of contents\", \"CONTENT\") as the beginning of the TOC. Then, we detect the TOC's body using regular expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "Most tables of contents in the given documents are one-column regardless of the number of columns in the whole document. The TOC extraction module requires PDFMiner to be run in the single-column mode because the TOC text may otherwise be read automatically as multi-column text. In this case, PDFMiner should be run with the parameters LAParams(line_margin=3.0, line_overlap=0.1, boxes_flow=-1, word_margin=1.5, char_margin=100.0, detect_vertical=False). Feature extraction. The list of extracted lines and extracted TOCs (if present) is processed to obtain a vector of features for each extracted line. 
We formed a vector of 184 features, some of which are listed below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 visual features: bold or italic text, indentation, spacing between lines, normalized font size, text color;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 features from letter statistics: the percentage of letters, capital letters, and numbers, the number of words in the line, normalized line length;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 features from regular expressions for line beginnings: indicators of matching some regular expressions for list items;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 features from regular expressions for line endings: indicators of ending with a dot, colon, semicolon, or comma;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 features connected with line depth: the level of numbering for lists with dots (like 1.1.1), relative font size and indentation;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 TOC features: an indicator of being in the existing TOC (we extracted it automatically), an indicator of being the TOC header;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 other features: normalized page number and line number;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "\u2022 context features: the same features for the 3 previous and 3 next lines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { 
"text": "Training process and experiments. For both tasks, we experimented with two models: one-stage and two-stage classifiers. By the one-stage model we mean the model without the first stage (without the binary classifier). In this case, the input lines are not pre-filtered. We use the XGBoost classifier in both models. The training process ran", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "with parameters of a 0.1 learning rate and 100 estimators (Table 3: Model type, F1 (TD), Inex08-F1 (TOC); XGBoost 1-stage: 0.77, 50.5; XGBoost 2-stage: 0.81, 55.7). We use 3-fold cross-validation to evaluate the results of each model. The mean results for English documents are given in Table 3 . The evaluation script is provided by the organizers (El Maarouf et al., 2021) .", "cite_spans": [ { "start": 330, "end": 355, "text": "(El Maarouf et al., 2021)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 268, "end": 275, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "The two-stage model performed better than the single-stage one. Thus, we have chosen the two-stage model for solving the task on the test dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed approach", "sec_num": "3" }, { "text": "The competition results on the test dataset (see Table 2) are presented in Table 4 (Title Detection) and Tables 5 and 6 (TOC generation). Our approach ranks third among the 8 and 6 submitted solutions for English and French documents, respectively. As a team, we took second place in all categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The two-stage model demonstrates high scores for both tasks. But the model has disadvantages. 
Table 6: TOC Generation Competition on French documents. Primarily, the model misclassifies questionable titles, the ground truth of which is interpreted differently for different documents. For example, one document has a line with certain features (color, font size, style, etc.) as a title, but an equivalent line in another document is not a title. Also, we do not combine adjacent titles together as in the ground truth of the datasets. Moreover, the two-stage model's accuracy in the title detection task is limited by the binary classifier. If the model filters out the title lines in the first stage, it will not be able to determine their depths in the second one. Therefore, the accuracy of the two-stage model will not exceed the accuracy of the binary classifier.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "As a development of this work, we propose to consider more advanced and complicated models, e. g. the LSTM model. This model can give greater accuracy through the use of long-term memory. Thus, we will be able to remember the previous predictions made up to this point in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We proposed an approach for automatic title detection and TOC generation for PDF financial documents with a textual layer. We extracted lines with metadata using PDFMiner and found existing TOCs using regular expressions. We transform this information into a feature matrix with a vector of predefined features for each document line. Then we use a two-stage model for solving both the title detection and TOC generation problems. First, we filter titles from all document lines using the XGBoost binary classifier. Then, we find the depths of the filtered lines using the second XGBoost classifier. 
The described approach does not depend on the presence of a table of contents in the document and can be used for both English and French financial documents. As a result, our team took second place in all categories in the FinTOC-2021 competition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Financial Document Structure Extraction Shared Task (FinToc 2020)", "authors": [ { "first": "", "middle": [], "last": "Najah-Imane", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Bentabet", "suffix": "" }, { "first": "Ismail", "middle": [ "El" ], "last": "Juge", "suffix": "" }, { "first": "Virginie", "middle": [], "last": "Maarouf", "suffix": "" }, { "first": "Dialekti", "middle": [], "last": "Mouilleron", "suffix": "" }, { "first": "Mahmoud", "middle": [], "last": "Valsamou-Stanislawski", "suffix": "" }, { "first": "", "middle": [], "last": "El-Haj", "suffix": "" } ], "year": 2020, "venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Najah-Imane Bentabet, Remi Juge, Ismail El Maarouf, Virginie Mouilleron, Dialekti Valsamou-Stanislawski, and Mahmoud El-Haj. 2020. The Financial Document Structure Extraction Shared Task (FinToc 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020), Barcelona, Spain.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Oksana Vladimirovna Belyaeva, and Andrey Igorevich Perminov. 2020. 
Logical structure extraction from scanned documents", "authors": [ { "first": "Anastasiya Olegovna", "middle": [], "last": "Bogatenkova", "suffix": "" } ], "year": null, "venue": "Proceedings of the Institute for System Programming of the RAS", "volume": "32", "issue": "4", "pages": "175--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastasiya Olegovna Bogatenkova, Ilya Sergeyevich Kozlov, Oksana Vladimirovna Belyaeva, and Andrey Igorevich Perminov. 2020. Logical structure extraction from scanned documents. Proceedings of the Institute for System Programming of the RAS, 32(4):175-188.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Financial Document Structure Extraction Shared Task", "authors": [ { "first": "Ismail", "middle": [ "El" ], "last": "Maarouf", "suffix": "" }, { "first": "Juyeon", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Abderrahim", "middle": [], "last": "Aitazzi", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Bellato", "suffix": "" }, { "first": "Mei", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Mahmoud", "middle": [], "last": "El-Haj", "suffix": "" } ], "year": 2021, "venue": "The Third Financial Narrative Processing Workshop (FNP 2021)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ismail El Maarouf, Juyeon Kang, Abderrahim Aitazzi, Sandra Bellato, Mei Gan, and Mahmoud El-Haj. 2021. The Financial Document Structure Extraction Shared Task (FinToc 2021). 
In The Third Financial Narrative Processing Workshop (FNP 2021), Lancaster, UK.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Daniel@ fintoc-2019 shared task: toc extraction and title detection", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Giguet", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Lejeune", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019)", "volume": "", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Giguet and Ga\u00ebl Lejeune. 2019. Daniel@ fintoc-2019 shared task: toc extraction and title detection. In Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019), pages 63-68.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "UWB@FinTOC-2020 shared task: Financial document title detection", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Hercig", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kral", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "158--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Hercig and Pavel Kral. 2020. UWB@FinTOC-2020 shared task: Financial document title detection. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 158-162, Barcelona, Spain (Online). 
COLING.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The fintoc-2019 shared task: Financial document structure extraction", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Juge", "suffix": "" }, { "first": "Imane", "middle": [], "last": "Bentabet", "suffix": "" }, { "first": "Sira", "middle": [], "last": "Ferradans", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019)", "volume": "", "issue": "", "pages": "51--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00e9mi Juge, Imane Bentabet, and Sira Ferradans. 2019. The fintoc-2019 shared task: Financial document structure extraction. In Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019), pages 51-57.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "DNLP@FinTOC'20: Table of contents detection in financial documents", "authors": [ { "first": "Dijana", "middle": [], "last": "Kosmajac", "suffix": "" }, { "first": "Stacey", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Mozhgan", "middle": [], "last": "Saeidi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "169--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dijana Kosmajac, Stacey Taylor, and Mozhgan Saeidi. 2020. DNLP@FinTOC'20: Table of contents detection in financial documents. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 169-173, Barcelona, Spain (Online). 
COLING.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Header and footer extraction by page association", "authors": [ { "first": "Xiaofan", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2003, "venue": "Document Recognition and Retrieval X", "volume": "5010", "issue": "", "pages": "164--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofan Lin. 2003. Header and footer extraction by page association. In Document Recognition and Retrieval X, volume 5010, pages 164-171. International Society for Optics and Photonics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "AMEX-AI-LABS: Investigating transfer learning for title detection in table of contents generation", "authors": [ { "first": "Dhruv", "middle": [], "last": "Premi", "suffix": "" }, { "first": "Amogh", "middle": [], "last": "Badugu", "suffix": "" }, { "first": "Himanshu Sharad", "middle": [], "last": "Bhatt", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "153--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dhruv Premi, Amogh Badugu, and Himanshu Sharad Bhatt. 2020. AMEX-AI-LABS: Investigating transfer learning for title detection in table of contents generation. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 153-157, Barcelona, Spain (Online). COLING.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Finance document extraction using data augmentation and attention", "authors": [ { "first": "Ke", "middle": [], "last": "Tian", "suffix": "" }, { "first": "", "middle": [], "last": "Zi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019)", "volume": "", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Tian and Zi Jun Peng. 
2019. Finance document extraction using data augmentation and attention. In Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019), pages 1-4.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pdfminer.six documentation", "authors": [ { "first": "Philippe Guglielmetti Pieter Marsman Yusuke", "middle": [], "last": "Shinyama", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philippe Guglielmetti Pieter Marsman Yusuke Shinyama. 2019. Pdfminer.six documentation.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Full pipeline description", "uris": null }, "TABREF1": { "num": null, "text": "Training dataset statistics", "html": null, "type_str": "table", "content": "
English French
Number of documents 10 10
Mean number of pages 66 26
Number of extracted lines 42100 13027
Number of TOCs 9 0
" }, "TABREF2": { "num": null, "text": "Test dataset statistics", "html": null, "type_str": "table", "content": "" }, "TABREF3": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
: The results from cross-validation on the training dataset (English)
Team runF1 (English) F1 (French)
Christopher Bourez2 0.830 0.818
Christopher Bourez1 0.822 0.817
ISP RAS (our) 0.813 0.787
Yseop Lab 0.728 0.639
Cilab_fintoc2 0.514 -
NovaFin 0.507 0.562
Daniel 0.465 0.606
Cilab_fintoc1 0.456 -
" }, "TABREF4": { "num": null, "text": "Title Detection Competition results", "html": null, "type_str": "table", "content": "" }, "TABREF6": { "num": null, "text": "TOC Generation Competition on English documents", "html": null, "type_str": "table", "content": "
Team run Inex08-P Inex08-R Inex08-F1 Inex08-Title acc Inex08-Level acc harm mean
Christopher Bourez2 60.8 54.3 57.3 63.5 38.7 57.3
Christopher Bourez1 60.9 54.2 57.3 63.6 39 57.3
ISP RAS (our) 52.6 38.8 44.5 53.6 39.9 42.1
NovaFin 29.7 24.7 26.7 34.6 32 29.1
Yseop Lab 46.8 28.1 34.4 47.3 16.6 22.4
Daniel 49.7 28.6 35.8 53.7 7.1 11.8
" } } } }