{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:23:49.423511Z" }, "title": "Summarization of financial documents with TF-IDF weighting of multi-word terms", "authors": [ { "first": "Sophie", "middle": [], "last": "Krimberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shamoon College of Engineering (SCE) Beer-Sheva", "location": { "country": "Israel" } }, "email": "" }, { "first": "Natalia", "middle": [], "last": "Vanetik", "suffix": "", "affiliation": {}, "email": "natalyav@sce.ac.il" }, { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "", "affiliation": {}, "email": "marinal@ac.sce.ac.il" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Financial documents, such as corporate annual reports, are usually very long and may consist of more than 100 pages. Every report is divided into thematic sections or statements that have an inner structure and include special financial terms and numbers. This paper describes an approach for summarizing financial documents based on a Bag-of-Words (BOW) document representation. The suggested solution first calculates the Term Frequency-Inverse Document Frequency (TF-IDF) weights for all single-word and multiword expressions in the corpus, then finds the sequence of words with a maximum total weight in each document. The solution is designed to meet the requirements of the Financial Narrative Summarization (FNS 2021) shared task and has been tested on FNS 2021 dataset shared-task dataset.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Financial documents, such as corporate annual reports, are usually very long and may consist of more than 100 pages. Every report is divided into thematic sections or statements that have an inner structure and include special financial terms and numbers. 
This paper describes an approach for summarizing financial documents based on a Bag-of-Words (BOW) document representation. The suggested solution first calculates the Term Frequency-Inverse Document Frequency (TF-IDF) weights for all single-word and multi-word expressions in the corpus, then finds the sequence of words with the maximum total weight in each document. The solution is designed to meet the requirements of the Financial Narrative Summarization (FNS 2021) shared task and has been tested on the FNS 2021 shared-task dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Corporate annual reports and financial statements are challenging to summarize due to their length, format, structure, and contents. An annual report is a document of tens and often hundreds of pages. Some annual reports include a table of contents, but many do not. Usually, reports have several thematic sections, but the order, the quantity, and the structure of sections differ from one report to another. Financial documents use specialized financial terms. Additionally, every company that publishes a report operates within its own field, and that field's lexicon can appear in the report and form an important part of it, while the other documents in the corpus may not use that lexicon at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020) (El-Haj et al., 2020a) ran the Financial Narrative Summarisation (FNS) task, which produced the first large-scale experimental results and state-of-the-art summarization methods applied to financial data. The task focused on annual reports produced by UK firms listed on the London Stock Exchange. 
Because companies usually produce glossy brochures with a much looser structure, automatic summarization of such reports is a challenging task. A total of 9 teams participated in the FNS 2020 shared task, with 24 system submissions. All teams were ranked by several ROUGE-based measures and compared to the four topline and baseline summarizers-MUSE (Litvak et al., 2010), POLY (Litvak and Vanetik, 2013), TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004)-in (El-Haj et al., 2020b).", "cite_spans": [ { "start": 110, "end": 132, "text": "(El-Haj et al., 2020a)", "ref_id": null }, { "start": 786, "end": 807, "text": "(Litvak et al., 2010)", "ref_id": "BIBREF12" }, { "start": 815, "end": 841, "text": "(Litvak and Vanetik, 2013)", "ref_id": "BIBREF13" }, { "start": 853, "end": 879, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF16" }, { "start": 892, "end": 902, "text": "(Erkan and", "ref_id": "BIBREF7" }, { "start": 903, "end": 941, "text": "Radev, 2004)-in (El-Haj et al., 2020b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The participating systems used a variety of techniques and methods, ranging from rule-based extraction methods (Litvak et al., 2020; Vhatkar et al., 2020; Arora and Radhakrishnan, 2020; Azzi and Kang, 2020) to traditional machine learning methods (Suarez et al., 2020; Vhatkar et al., 2020; Arora and Radhakrishnan, 2020) and high-performing deep learning models (Agarwal et al., 2020; Singh, 2020; La Quatra and Cagliero, 2020; Vhatkar et al., 2020; Arora and Radhakrishnan, 2020; Azzi and Kang, 2020; Zheng et al., 2020). The text representation was also very diverse among the participating systems: very basic morphological and structural features (Li et al., 2020; Suarez et al., 2020), syntactic features (Vhatkar et al., 2020), and semantic vectors using word embeddings (Agarwal et al., 2020; Suarez et al., 2020) were applied. 
In addition, some teams (Litvak et al., 2020; Zheng et al., 2020) investigated the hierarchical structure of reports. Different ranking techniques were used for extractive approaches, such as Determinantal Point Processes (Li et al., 2020) and a combination of Pointer Network and Text-to-Text Transfer Transformer algorithms (Singh, 2020); deep language models (La Quatra and Cagliero, 2020; Zheng et al., 2020), hierarchical summarization under different discourse topics (Litvak et al., 2020), and ensemble-based models (Arora and Radhakrishnan, 2020) have also been reported. The main challenge of this task, as reported by its participants, was the average length of a document, which made the training process extremely inefficient. In addition, participants argued that extracting text and structure from PDF files with numerous tables, charts, and numerical data resulted in a lot of noise.", "cite_spans": [ { "start": 110, "end": 131, "text": "(Litvak et al., 2020;", "ref_id": "BIBREF15" }, { "start": 132, "end": 153, "text": "Vhatkar et al., 2020;", "ref_id": "BIBREF21" }, { "start": 154, "end": 184, "text": "Arora and Radhakrishnan, 2020;", "ref_id": "BIBREF1" }, { "start": 185, "end": 205, "text": "Azzi and Kang, 2020)", "ref_id": "BIBREF2" }, { "start": 246, "end": 267, "text": "(Suarez et al., 2020;", "ref_id": "BIBREF20" }, { "start": 268, "end": 289, "text": "Vhatkar et al., 2020;", "ref_id": "BIBREF21" }, { "start": 290, "end": 320, "text": "Arora and Radhakrishnan, 2020)", "ref_id": "BIBREF1" }, { "start": 362, "end": 384, "text": "(Agarwal et al., 2020;", "ref_id": "BIBREF0" }, { "start": 385, "end": 397, "text": "Singh, 2020;", "ref_id": "BIBREF19" }, { "start": 398, "end": 427, "text": "La Quatra and Cagliero, 2020;", "ref_id": "BIBREF9" }, { "start": 428, "end": 449, "text": "Vhatkar et al., 2020;", "ref_id": "BIBREF21" }, { "start": 450, "end": 480, "text": "Arora and Radhakrishnan, 2020;", "ref_id": "BIBREF1" }, { "start": 481, "end": 
501, "text": "Azzi and Kang, 2020;", "ref_id": "BIBREF2" }, { "start": 502, "end": 521, "text": "Zheng et al., 2020)", "ref_id": "BIBREF23" }, { "start": 650, "end": 667, "text": "(Li et al., 2020;", "ref_id": "BIBREF10" }, { "start": 668, "end": 688, "text": "Suarez et al., 2020)", "ref_id": "BIBREF20" }, { "start": 710, "end": 732, "text": "(Vhatkar et al., 2020)", "ref_id": "BIBREF21" }, { "start": 778, "end": 800, "text": "(Agarwal et al., 2020;", "ref_id": "BIBREF0" }, { "start": 801, "end": 821, "text": "Suarez et al., 2020)", "ref_id": "BIBREF20" }, { "start": 860, "end": 881, "text": "(Litvak et al., 2020;", "ref_id": "BIBREF15" }, { "start": 882, "end": 901, "text": "Zheng et al., 2020)", "ref_id": "BIBREF23" }, { "start": 1022, "end": 1039, "text": "(Li et al., 2020)", "ref_id": "BIBREF10" }, { "start": 1124, "end": 1137, "text": "(Singh, 2020)", "ref_id": "BIBREF19" }, { "start": 1322, "end": 1343, "text": "(Litvak et al., 2020)", "ref_id": "BIBREF15" }, { "start": 1373, "end": 1404, "text": "(Arora and Radhakrishnan, 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This year FNS-2021 (El-Haj et al., 2021) shared task asks to provide summaries of annual company reports. The dataset is supplied with 2-4 gold standard summaries per document. 
These gold standard summaries are complete sections of the original document, selected by human financial experts as the most important sections of that document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Term Frequency-Inverse Document Frequency (TF-IDF) (Sammut and Webb, 2010) is a term-weighting scheme commonly used to make relevance decisions and to discover the strength of the relationship between words and the documents they appear in (Ramos et al., 2003).", "cite_spans": [ { "start": 235, "end": 255, "text": "(Ramos et al., 2003)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a TF-IDF weighting method that determines the best candidate for an extractive summary among all contiguous document parts of the required length. This approach is based on the fact that all of the gold standard summaries in the data provided by the organizers are in fact sections of the original documents that did not undergo any rewriting. We use the TF-IDF score to detect the most important sequence of up to 1000 words in a document. While the classic implementation is based on the evaluation of single words, we calculate the TF-IDF values for single-word and multi-word terms, mainly to recognize specific financial terminology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To find the most important sequence of up to 1000 words in every document of the corpus, we perform the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The method", "sec_num": "2" }, { "text": "1. define the value of the maximal length of a multi-word term (see Section 2.1);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The method", "sec_num": "2" }, { "text": "2. 
find all the existing multi-word terms in the corpus and calculate the TF-IDF score for each of them;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The method", "sec_num": "2" }, { "text": "3. compute the total TF-IDF score of every contiguous 1000-word sequence in a document;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The method", "sec_num": "2" }, { "text": "4. select the highest-ranking sequence as the summary of that document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The method", "sec_num": "2" }, { "text": "The pipeline of our approach is depicted in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 53, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The method", "sec_num": "2" }, { "text": "Because classic TF-IDF is computed for single-word terms only, and we want to extend it to multi-word terms, we introduce a parameter that defines the maximal number of words in such a term. The aim of evaluating multi-word terms is to recognize the set of important document-specific phrases from their TF-IDF weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-word terms", "sec_num": "2.1" }, { "text": "The original files are preprocessed using the Python nltk library (Bird et al., 2009). The preprocessing includes text splitting, tokenization, and removal of special symbols, phone numbers, emails, etc. Stopwords are not removed from the text; instead, we use a custom stopword list containing the words ['and', 'the', 'is', 'are', 'this', 'at', 'of', 'to', 'in', 'on', 'for', 'or', 'a', 'an', 'as', 'page', 'by', 'with', 'our', 'we', 'that', 'may']. 
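The term generation and weighting procedure of Sections 2.1-2.3 can be sketched in Python as follows. This is our own illustrative sketch, not the authors' released code; the function names (`terms_up_to`, `tfidf_weights`) are ours. It follows the paper's definitions: terms are all contiguous word sequences of length 1 to TL, terms consisting only of stopwords get zero weight, TF is the term count divided by the number of possible positions (DL - |T| + 1), and IDF is log(N / CR).

```python
from collections import Counter
import math

def terms_up_to(words, TL):
    """All contiguous word sequences (terms) of length 1..TL in a document."""
    return [tuple(words[i:i + k]) for k in range(1, TL + 1)
            for i in range(len(words) - k + 1)]

def tfidf_weights(corpus, TL, stopwords):
    """Per-document TF-IDF weight of every term, following Eqs. (1)-(3)."""
    N = len(corpus)                       # number of documents in the corpus
    doc_counts = [Counter(terms_up_to(doc, TL)) for doc in corpus]
    CR = Counter()                        # document frequency of each term
    for counts in doc_counts:
        CR.update(counts.keys())
    weights = []
    for doc, counts in zip(corpus, doc_counts):
        w = {}
        for term, DR in counts.items():
            if all(word in stopwords for word in term):
                w[term] = 0.0             # stopword-only terms get zero weight
            else:
                TF = DR / (len(doc) - len(term) + 1)   # DR / (DL - |T| + 1)
                w[term] = TF * math.log(N / CR[term])  # TF * log(N / CR)
        weights.append(w)
    return weights
```

On a real corpus, `corpus` would hold the nltk-tokenized reports; the dictionary per document plays the role of the paper's TF-IDF matrix.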
All multi-word terms that consist of stopwords only get a TF-IDF value of zero.", "cite_spans": [ { "start": 62, "end": 81, "text": "(Bird et al., 2009)", "ref_id": "BIBREF3" }, { "start": 287, "end": 431, "text": "['and', 'the', 'is', 'are', ' this','at', 'of', 'to', 'in', 'on', 'for', 'or','a', 'an', 'as', 'page', 'by', 'with', 'our', 'we', 'that', 'may']", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.2" }, { "text": "When the maximal number of words in a term is defined (denoted by TL), the system finds in the corpus all the existing word sequences of length 1 to TL and calculates the TF-IDF score for each of them. The following steps are performed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "1. Generate multi-word terms of length 1 to TL as follows. For a document with DL words, there are DL single-word terms, DL \u2212 1 two-word terms, and so on. Finally, we have DL \u2212 TL + 1 terms with TL words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "2. Let T be a multi-word term with TL := |T| words in a document D_i having DL_i := |D_i| words in total, and let T appear DR_i times in the document D_i. The term frequency of T in D_i is calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "TF(T, D_i) = DR_i / (DL_i \u2212 TL + 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "(1) 3. Let T appear in CR documents in a corpus of size N. Then the IDF score of T is: Figure 1: Pipeline of our approach 4. 
Finally, the TF-IDF score of a term T in document D_i is:", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 98, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "IDF(T) = log(N / CR)", "eq_num": "(2)" } ], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TF-IDF(T, D_i) = TF(T, D_i) \u2022 IDF(T)", "eq_num": "(3)" } ], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "2.4 Most important sequence in a document", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "In every document D_i, we find all the sequences of up to 1000 words (there are DL_i \u2212 999 such sequences in a document with more than 1000 words), and calculate the sum of TF-IDF values of all the multi-word terms of any length that appear in each such sequence S:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TF-IDF(S) = \u2211_{k=1}^{TL} \u2211_{T \u2208 S, |T| = k} TF-IDF(T, D_i)", "eq_num": "(4)" } ], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "We rank the sequences by their TF-IDF scores and select the highest-ranking sequence as our summary. The computation of the totals over multiple sequences is based on the idea that, given the total of sequence W_1 W_2 ... W_n, we can calculate the total of sequence W_2 W_3 ... 
W_{n+1} by subtracting the values of terms that can include W_1 and adding the values of terms that can include W_{n+1} (according to the maximum number of words in a term). This approach allows the system to calculate and compare thousands of such sequences in each document in a very short time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating the TF-IDF matrix of a document", "sec_num": "2.3" }, { "text": "The FNS 2021 shared task provides a dataset that contains companies' annual reports and 3-4 gold standard summaries for each report. The gold standard summaries were created by extracting whole sections (one or more) from the original document, according to a human financial expert's decision. The selected sections are considered by the experts to be the most important and informative. Table 1 describes the dataset contents. The training dataset, which contains 3,000 reports and 9,873 gold summaries, was randomly divided into 3 groups of 1,000 documents each to facilitate the TF-IDF computation. Furthermore, each of those three groups was divided into two subgroups of 500 documents each. We ran three variants of our system, using values 1, 2, and 3 for the multi-word term size TL.", "cite_spans": [], "ref_spans": [ { "start": 388, "end": 395, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For preprocessing, such as sentence splitting and tokenization, we used the nltk package (Bird et al., 2009). We used the MUSEEC tool (Litvak et al., 2016) to compute MUSE summaries, which served as a baseline with a 1000-word limit. 
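The incremental sliding-window scoring described in Section 2.4 can be sketched as follows. This is our illustrative Python, not the authors' implementation; `best_window` is our own name, `weight` is a per-document term-weight dictionary such as the TF-IDF matrix of Section 2.3, and `L` is the 1000-word summary limit. Moving the window one word to the right subtracts the weights of terms starting at the word that leaves the window and adds the weights of terms ending at the word that enters it, exactly the update the paragraph above describes.

```python
def best_window(words, weight, TL, L=1000):
    """Return (start, score) of the contiguous L-word window with the
    highest total TF-IDF. `weight` maps term tuples to TF-IDF scores;
    terms absent from `weight` contribute 0."""
    n = len(words)
    if n <= L:  # short document: the whole text is the only candidate
        return 0, sum(weight.get(tuple(words[i:i + k]), 0.0)
                      for k in range(1, TL + 1) for i in range(n - k + 1))
    # score of the first window: all terms fully inside words[0:L]
    score = sum(weight.get(tuple(words[i:i + k]), 0.0)
                for k in range(1, TL + 1) for i in range(L - k + 1))
    best = (0, score)
    for i in range(1, n - L + 1):
        for k in range(1, TL + 1):
            # drop terms that started at the word leaving the window
            score -= weight.get(tuple(words[i - 1:i - 1 + k]), 0.0)
            # add terms that end at the word entering the window
            score += weight.get(tuple(words[i + L - k:i + L]), 0.0)
        if score > best[1]:
            best = (i, score)
    return best
```

Each shift costs O(TL) dictionary lookups instead of rescoring all ~1000 positions, which is what lets the system compare thousands of candidate sequences per document quickly.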
We used the ROUGE 2.05 Java package (Ganesan, 2018).", "cite_spans": [ { "start": 83, "end": 102, "text": "(Bird et al., 2009)", "ref_id": "BIBREF3" }, { "start": 134, "end": 155, "text": "(Litvak et al., 2016)", "ref_id": "BIBREF14" }, { "start": 282, "end": 297, "text": "(Ganesan, 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Tools and runtime environment", "sec_num": "3.1" }, { "text": "To evaluate this approach, we applied it to the validation part of the FNS 2021 shared-task dataset and compared its results to those of MUSE (Litvak et al., 2016) on the same set of documents. As an additional reference, we use the results of a trivial TOP-K baseline that takes the first 1000 words of a document. The results are reported in Table 2; our approach appears as TFIDF-SUM-N, where N is the maximal number of words in a term. 1 Experiments were performed on Google Colab with the default configuration.", "cite_spans": [ { "start": 169, "end": 190, "text": "(Litvak et al., 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Methods and baselines", "sec_num": "3.2" }, { "text": "Four ROUGE (Lin, 2004) metrics (ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-SU4) were applied on the validation set. Table 2 shows the results, with recall, precision, and F-measure for each metric. It can be seen that as the maximum number of words in a term increases, the results improve, but even with single-word terms (TFIDF-SUM-1), the system outperforms the baselines. Table 1 (dataset statistics: documents, gold summaries, and average sentences, words, and characters): Train: 3,000 documents, 9,873 gold summaries, 2,700 sentences, 58,838 words, 291,014 characters; Validation: 363 documents, 1,250 gold summaries, 3,786 sentences, 82,906 words, 416,040 characters; Test: 500 documents, NA gold summaries, 3,743 sentences, 82,676 words, 412,974 characters. Due to time constraints, only the TFIDF-SUM-1 system was submitted to the FNS-2021 shared task competition, and it appears in its results as the SCE-3 system. 
It is important to note that increasing the maximum number of words in a multi-word term drastically increases the number of distinct terms, and memory usage grows accordingly. Therefore, running the system with 3-word terms on Colab required us to divide the dataset into two parts and to compute the TF-IDF scores for them separately. This approach reduces the precision of TF-IDF, but because every run is still performed on almost 200 documents, we can see from the resulting ROUGE scores that the longer terms compensate for the reduced TF-IDF precision. Table 3 shows the F-measure results for the same ROUGE metrics, obtained on the test set (provided by the FNS organizers).", "cite_spans": [ { "start": 11, "end": 22, "text": "(Lin, 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 341, "end": 473, "text": "characters Train 3,000 9,873 2,700 58,838 291,014 Validation 363 1,250 3,786 82,906 416,040 Test 500 NA 3,743 82,676", "ref_id": "TABREF1" }, { "start": 1279, "end": 1286, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation results", "sec_num": "3.3" }, { "text": "Our system is very fast, producing hundreds of summaries in a few minutes. For example, for 363 annual reports from the validation dataset, execution on Google Colab with the default configuration was completed in 2 minutes 54 seconds with TFIDF-SUM-1, 6 minutes 22 seconds with TFIDF-SUM-2, and 10 minutes 50 seconds with TFIDF-SUM-3. Times may vary with the performance of Colab itself. As the maximum number of words in a multi-word term increases, more possible terms exist and more memory is required. Using multi-word terms with more than three words resulted in an out-of-memory error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "3.4" }, { "text": "This paper introduces a method for summarization of financial documents. 
The method implements the TF-IDF technique, extended to multi-word terms. The system is fast, simple, and outperforms the baselines. The evaluation results show that (1) evaluating multi-word terms rather than single-word ones improves the quality of the summaries, and (2) extracting a continuous sequence from the document provides good results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "4" }, { "text": "Future work may include modifying the current method to extract the most important sentences instead of extracting a whole sequence. In addition, combining the multi-term TF-IDF weighting scheme with machine learning algorithms and FinBERT (Yang et al., 2020) embeddings may provide interesting results.", "cite_spans": [ { "start": 243, "end": 262, "text": "(Yang et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "4" }, { "text": "The results on the test set, provided by the FNS organizers, can be seen in the Appendix and on the leaderboard: https://www.lancaster.ac.uk/staff/elhaj/docs/fns2021_results.pdf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Langresearchlab_nc at fincausal 2020, task 1: A knowledge induced neural net for causality detection", "authors": [ { "first": "Raksha", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ishaan", "middle": [], "last": "Verma", "suffix": "" }, { "first": "Niladri", "middle": [], "last": "Chatterjee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "33--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raksha Agarwal, Ishaan Verma, and Niladri Chatterjee. 2020. 
Langresearchlab_nc at fincausal 2020, task 1: A knowledge induced neural net for causality detection. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 33-39.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Amex ai-labs: An investigative study on extractive summarization of financial documents", "authors": [ { "first": "Piyush", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Priya", "middle": [], "last": "Radhakrishnan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "137--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piyush Arora and Priya Radhakrishnan. 2020. Amex ai-labs: An investigative study on extractive summarization of financial documents. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 137-142.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extractive summarization system for annual reports", "authors": [ { "first": "Ait", "middle": [], "last": "Abderrahim", "suffix": "" }, { "first": "Juyeon", "middle": [], "last": "Azzi", "suffix": "" }, { "first": "", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "143--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abderrahim Ait Azzi and Juyeon Kang. 2020. Extractive summarization system for annual reports. 
In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 143-147.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural language processing with Python: analyzing text with the natural language toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. \"O'Reilly Media, Inc.\".", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ans Elhag, Houda Bouamor, Marina Litvak, Paul Rayson, George Giannakopoulos, and Nikiforos Pittaras. 2020a. Proceedings of the 1st joint workshop on financial narrative processing and multiling financial summarisation", "authors": [ { "first": "Mahmoud", "middle": [], "last": "El-Haj", "suffix": "" }, { "first": "Vasiliki", "middle": [], "last": "Athanasakou", "suffix": "" }, { "first": "Sira", "middle": [], "last": "Ferradans", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Salzedo", "suffix": "" } ], "year": null, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Vasiliki Athanasakou, Sira Ferradans, Catherine Salzedo, Ans Elhag, Houda Bouamor, Marina Litvak, Paul Rayson, George Giannakopoulos, and Nikiforos Pittaras. 2020a. Proceedings of the 1st joint workshop on financial narrative processing and multiling financial summarisation. 
In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The financial narrative summarisation shared task (fns 2020)", "authors": [ { "first": "Mahmoud", "middle": [], "last": "El-Haj", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Nikiforos", "middle": [], "last": "Pittaras", "suffix": "" }, { "first": "George", "middle": [], "last": "Giannakopoulos", "suffix": "" } ], "year": null, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Marina Litvak, Nikiforos Pittaras, George Giannakopoulos, et al. 2020b. The financial narrative summarisation shared task (fns 2020). In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 1-12.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Nikiforos Pittaras, and George Giannakopoulos. 2021. The Financial Narrative Summarisation Shared Task (FNS 2021)", "authors": [], "year": null, "venue": "The Third Financial Narrative Processing Workshop (FNP 2021)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Nadhem Zmandar, Paul Rayson, Ahmed AbuRa'ed, Marina Litvak, Nikiforos Pittaras, and George Giannakopoulos. 2021. The Financial Narrative Summarisation Shared Task (FNS 2021). 
In The Third Financial Narrative Processing Workshop (FNP 2021), Lancaster, UK.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "G\u00fcnes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of artificial intelligence research", "volume": "22", "issue": "", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457-479.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Rouge 2.0: Updated and improved measures for evaluation of summarization tasks", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.01937" ] }, "num": null, "urls": [], "raw_text": "Kavita Ganesan. 2018. Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. arXiv preprint arXiv:1803.01937.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "End-toend training for financial report summarization", "authors": [ { "first": "La", "middle": [], "last": "Moreno", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Quatra", "suffix": "" }, { "first": "", "middle": [], "last": "Cagliero", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "118--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moreno La Quatra and Luca Cagliero. 2020. End-to-end training for financial report summarization. 
In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 118-123.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Extractive financial narrative summarisation based on dpps", "authors": [ { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yafei", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Yinan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "100--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Li, Yafei Jiang, and Yinan Liu. 2020. Extractive financial narrative summarisation based on dpps. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 100-104.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text summarization branches out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. 
In Text summarization branches out, pages 74-81.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A new approach to improving multilingual summarization using a genetic algorithm", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Last", "suffix": "" }, { "first": "Menahem", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "927--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010. A new approach to improving multilingual summarization using a genetic algorithm. In Pro- ceedings of the 48th annual meeting of the associa- tion for computational linguistics, pages 927-936.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Mining the gaps: Towards polynomial summarization", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Vanetik", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "655--660", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak and Natalia Vanetik. 2013. Mining the gaps: Towards polynomial summarization. 
In Pro- ceedings of the Sixth International Joint Conference on Natural Language Processing, pages 655-660.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Museec: A multilingual text summarization tool", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Vanetik", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Last", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Churkin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL-2016 System Demonstrations", "volume": "", "issue": "", "pages": "73--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak, Natalia Vanetik, Mark Last, and Elena Churkin. 2016. Museec: A multilingual text summa- rization tool. In Proceedings of ACL-2016 System Demonstrations, pages 73-78.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sce-summary at the fns 2020 shared task", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Vanetik", "suffix": "" }, { "first": "Zvi", "middle": [], "last": "Puchinsky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "124--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak, Natalia Vanetik, and Zvi Puchinsky. 2020. Sce-summary at the fns 2020 shared task. 
In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Sum- marisation, pages 124-129.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Textrank: Bringing order into text", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Using tf-idf to determine word relevance in document queries", "authors": [ { "first": "Juan", "middle": [], "last": "Ramos", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the first instructional conference on machine learning", "volume": "242", "issue": "", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learn- ing, volume 242(1), pages 29-48. Citeseer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "2010. TF-IDF", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "986--987", "other_ids": { "DOI": [ "10.1007/978-0-387-30164-8_832" ] }, "num": null, "urls": [], "raw_text": "Claude Sammut and Geoffrey I. Webb, editors. 2010. TF-IDF, pages 986-987. 
Springer US, Boston, MA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Point-5: Pointer network and t-5 based financial narrative summarisation", "authors": [ { "first": "Abhishek", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "105--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Singh. 2020. Point-5: Pointer network and t-5 based financial narrative summarisation. In Pro- ceedings of the 1st Joint Workshop on Financial Nar- rative Processing and MultiLing Financial Summari- sation, pages 105-111.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Combining financial word embeddings and knowledge-based features for financial text summarization uc3m-mc system at fns-2020", "authors": [ { "first": "Paloma", "middle": [], "last": "Jaime Baldeon Suarez", "suffix": "" }, { "first": "Jose", "middle": [ "Luis" ], "last": "Mart\u00ednez", "suffix": "" }, { "first": "", "middle": [], "last": "Mart\u00ednez", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "112--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaime Baldeon Suarez, Paloma Mart\u00ednez, and Jose Luis Mart\u00ednez. 2020. Combining financial word embed- dings and knowledge-based features for financial text summarization uc3m-mc system at fns-2020. 
In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Sum- marisation, pages 112-117.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Knowledge graph and deep neural network for extractive text summarization by utilizing triples", "authors": [ { "first": "Amit", "middle": [], "last": "Vhatkar", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Kavi", "middle": [], "last": "Arya", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "130--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Vhatkar, Pushpak Bhattacharyya, and Kavi Arya. 2020. Knowledge graph and deep neural network for extractive text summarization by utilizing triples. In Proceedings of the 1st Joint Workshop on Finan- cial Narrative Processing and MultiLing Financial Summarisation, pages 130-136.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Finbert: A pretrained language model for financial communications", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Christopher Siy", "suffix": "" }, { "first": "Allen", "middle": [], "last": "Uy", "suffix": "" }, { "first": "", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Yang, Mark Christopher Siy Uy, and Allen Huang. 2020. Finbert: A pretrained language model for fi- nancial communications. 
CoRR, abs/2006.08097.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sumsum@ fns-2020 shared task", "authors": [ { "first": "Siyan", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Anneliese", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation", "volume": "", "issue": "", "pages": "148--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siyan Zheng, Anneliese Lu, and Claire Cardie. 2020. Sumsum@ fns-2020 shared task. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 148-152.", "links": null } }, "ref_entries": { "TABREF1": { "content": "
System        R1 R    R1 P    R1 F    R2 R    R2 P    R2 F
TOP-K         0.266   0.241   0.221   0.040   0.038   0.034
MUSE          0.261   0.297   0.243   0.042   0.052   0.040
TFIDF-SUM-1   0.353   0.317   0.322   0.153   0.110   0.121
TFIDF-SUM-2   0.450   0.396   0.410   0.244   0.156   0.183
TFIDF-SUM-3   0.477   0.415   0.433   0.279   0.177   0.209
System        RL R    RL P    RL F    RSU4 R  RSU4 P  RSU4 F
TOP-K         0.264   0.239   0.220   0.081   0.076   0.069
MUSE          0.255   0.292   0.238   0.084   0.100   0.079
TFIDF-SUM-1   0.263   0.279   0.258   0.218   0.141   0.164
TFIDF-SUM-2   0.374   0.332   0.343   0.312   0.188   0.227
TFIDF-SUM-3   0.411   0.362   0.374   0.344   0.207   0.250
", "text": "FNS 2021 dataset statistics.", "num": null, "type_str": "table", "html": null }, "TABREF2": { "content": "
System        R1 F   R2 F   RL F   RSU4 F
BASE          0.45   0.24   0.42   0.27
MUSE          0.50   0.38   0.52   0.43
TFIDF-SUM-1   0.33   0.12   0.27   0.17
LexRank       0.31   0.12   0.27   0.16
", "text": "ROUGE results for FNS-2021 validation set.", "num": null, "type_str": "table", "html": null }, "TABREF3": { "content": "", "text": "ROUGE results for FNS-2021 test set.", "num": null, "type_str": "table", "html": null } } } }