{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:23:44.179669Z" }, "title": "PoinT-5: Pointer Network and T-5 based Financial Narrative Summarisation", "authors": [ { "first": "Abhishek", "middle": [], "last": "Singh", "suffix": "", "affiliation": {}, "email": "abhishek.s.eee15@iitbhu.ac.in" }, { "first": "Samsung", "middle": [ "R&d" ], "last": "Bangalore", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Companies provide annual reports to their shareholders at the end of the financial year that describes their operations and financial conditions. The average length of these reports is 80, and it may extend up to 250 pages long. In this paper, we propose our methodology PoinT-5 (the combination of Pointer Network and T-5 (Test-to-text transfer Transformer) algorithms) that we used in the Financial Narrative Summarisation (FNS) 2020 task. The proposed method uses Pointer networks to extract important narrative sentences from the report, and then T-5 is used to paraphrase extracted sentences into a concise yet informative sentence. We evaluate our method using ROUGE-N (1,2), L,and SU4. The proposed method achieves the highest precision scores in all the metrics and highest F1 scores in ROUGE 1,and LCS and only solution to cross MUSE solution baseline in ROUGE-LCS metrics.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Companies provide annual reports to their shareholders at the end of the financial year that describes their operations and financial conditions. The average length of these reports is 80, and it may extend up to 250 pages long. In this paper, we propose our methodology PoinT-5 (the combination of Pointer Network and T-5 (Test-to-text transfer Transformer) algorithms) that we used in the Financial Narrative Summarisation (FNS) 2020 task. The proposed method uses Pointer networks to extract important narrative sentences from the report, and then T-5 is used to paraphrase extracted sentences into a concise yet informative sentence. We evaluate our method using ROUGE-N (1,2), L,and SU4. The proposed method achieves the highest precision scores in all the metrics and highest F1 scores in ROUGE 1,and LCS and only solution to cross MUSE solution baseline in ROUGE-LCS metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Annual Reports may extend up to 250 pages long as stated above, which contains different sections General Corporate Information, financial and operating cost, CEOs message, Narrative texts, accounting policies, Financial statement including balance sheet and summary of financial data documents. In the Financial narrative summarisation task, only the narrative section is summarised, which is not explicitly marked in the dataset, making it challenging and interesting. 
In recent years, previously manual, small-scale research in the Accounting and Finance literature has been scaled up with the aid of NLP and ML methods, for example, to examine approaches to retrieving structured content from financial reports, and to study the causes and consequences of corporate disclosure and financial reporting outcomes (El-Haj et al., 2018) .", "cite_spans": [ { "start": 810, "end": 831, "text": "(El-Haj et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Companies produce glossy brochures of annual reports with a much looser structure, and this makes automatic summarisation of narratives in UK annual reports a challenging task (El-Haj et al., 2020) . To summarise the narrative section of annual reports, the narrative sentences that are spread loosely across the document must first be identified and then summarised. The summary length is limited to 1000 words, whereas the report itself may be up to 250 pages long. We therefore summarise these long annual reports using a combination of extractive and abstractive summarisation.", "cite_spans": [ { "start": 176, "end": 197, "text": "(El-Haj et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Text summarisation methods can be classified into two paradigms: extractive and abstractive. Extractive summarisation extracts meaningful sentences or sections of text from the original text and combines them (ranked or unranked) to form a summary (Cheng and Lapata, 2016; Narayan et al., 2018; Yasunaga et al., 2017; See et al., 2017) . Abstractive summarisation, in contrast, generates words and sentences that are similar in meaning to the given text, forming a summary that may not appear verbatim in the original text (Nallapati et al., 2016; Rush et al., 2015; Paulus et al., 2017; Li et al., 2018) . When summarizing long documents, in our case up to 250 pages long, extractive summarisation may not produce a coherent and readable summary, and abstractive summarisation with an encoder-decoder architecture cannot cover the complete information. One problem is that typical seq2seq frameworks often generate unnatural summaries consisting of repeated words or phrases (Li et al., 2018) . Hence, we combine extractive and abstractive summarisation to first select important narrative sentences and then convey them concisely.", "cite_spans": [ { "start": 261, "end": 285, "text": "(Cheng and Lapata, 2016;", "ref_id": "BIBREF3" }, { "start": 286, "end": 307, "text": "Narayan et al., 2018;", "ref_id": "BIBREF20" }, { "start": 308, "end": 330, "text": "Yasunaga et al., 2017;", "ref_id": "BIBREF29" }, { "start": 331, "end": 348, "text": "See et al., 2017)", "ref_id": "BIBREF24" }, { "start": 510, "end": 534, "text": "(Nallapati et al., 2016;", "ref_id": "BIBREF19" }, { "start": 535, "end": 553, "text": "Rush et al., 2015;", "ref_id": "BIBREF23" }, { "start": 554, "end": 574, "text": "Paulus et al., 2017;", "ref_id": "BIBREF21" }, { "start": 575, "end": 591, "text": "Li et al., 2018)", "ref_id": "BIBREF13" }, { "start": 964, "end": 981, "text": "(Li et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pointer Networks (Vinyals et al., 2015) have been used in various combinatorial optimization problems, such as the Travelling Salesman Problem (TSP) and convex hull optimization. 
We use pointer networks in our financial narrative summarisation task to extract relevant narrative sentences in a particular order, so that the summary has a logical flow. These extracted sentences are then paraphrased in an abstractive way using the T-5 sequence-to-sequence model. We train the complete model by optimizing the ROUGE-LCS evaluation metric through a reinforcement learning objective.", "cite_spans": [ { "start": 17, "end": 39, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section and Appendix B, we discuss related work in the fields of abstractive summarisation, extractive summarisation, combinations of these two methods, reinforcement learning applications, and summarisation of financial narratives, along with their methodologies. Studies of human summarizers show that it is common to apply various operations while condensing, such as paraphrasing, generalization, sentence-level summarisation and reordering (Jing, 2002) . We continue the discussion of related work in Appendix B.", "cite_spans": [ { "start": 444, "end": 456, "text": "(Jing, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "The financial narrative summarisation dataset contains 3,863 annual reports for firms listed on the LSE, covering the period between 2002 and 2017 (El-Haj et al., 2019; El-Haj et al., 2014) . The dataset is randomly split into training (75%) and testing and validation (25%) sets, as shown in Table 2. We use the NLTK sentence tokenizer 1 to tokenize sentences in the annual reports and summaries for all our experiments. The data is further described and analysed in Appendix A.", "cite_spans": [ { "start": 142, "end": 163, "text": "(El-Haj et al., 2019;", "ref_id": "BIBREF7" }, { "start": 164, "end": 184, "text": "El-Haj et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 263, "end": 269, "text": "Table:", "ref_id": null } ], "eq_spans": [], "section": "Data Description", "sec_num": "3" }, { "text": "Our model is composed of three parts that are first trained or executed individually and then brought together using a policy gradient algorithm. As stated earlier, the model is a combination of extractive and abstractive methods. Initially, the dataset is provided in the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "{x_i, y_i^j}, 1 <= i <= N, 1 <= j <= 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "Here i indexes the annual reports (N in total), j indexes the summaries available for each annual report (up to 7), and x, y denote a report and a summary respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "In the extraction process we assume that for every summary sentence there is a matching sentence in the annual report. To train the extraction model we need these corresponding sentences in the reports. 
Since annual reports are not explicitly marked with such sentences, we use ROUGE scores to extract them, as done in (Chen and Bansal, 2018; Nallapati et al., 2016) . For every summary sentence we calculate ROUGE against every sentence in the report and then choose the sentence with the maximum value.", "cite_spans": [ { "start": 317, "end": 340, "text": "(Chen and Bansal, 2018;", "ref_id": "BIBREF2" }, { "start": 341, "end": 364, "text": "Nallapati et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "j_t = \\arg\\max_i ( \\text{ROUGE-L}_{recall} (d_i, s_t) )", "eq_num": "(1)" } ], "section": "Model Description", "sec_num": "4.1" }, { "text": "In equation 1, j_t is the index of the report sentence d_i with the maximum ROUGE score for summary sentence s_t. For every report in the training set there are multiple summaries. We calculate the summary-level ROUGE-L for the extracted sentences as described in (Lin, 2004) .", "cite_spans": [ { "start": 256, "end": 267, "text": "(Lin, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R_{lcs} = \\frac{\\sum_{i=1}^{n} LCS_{\\cup}(r_i, c_i)}{m}", "eq_num": "(2)" } ], "section": "Model Description", "sec_num": "4.1" }, { "text": "In equation 2, R_{lcs} represents ROUGE-L recall, r_i and c_i are report and summary sentences respectively, and m is the total number of words in the extracted report sentences. Once the summary-level ROUGE-L recall is calculated, we choose the summary with the maximum value for further processing. Once proxy sentences and a summary have been selected by applying the above methods, the extraction model is trained. In the extraction model, sentence-level representations of report sentences are calculated using a hierarchical word-to-sentence-level Bidirectional Long Short Term Memory (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) applied to the word sequence of each sentence to obtain sentence-level semantic information. A Bi-LSTM is then applied to the sentence representations to add document-level information to each sentence representation. We train an attention-mechanism (Bahdanau et al., 2014) based Pointer Network (Vinyals et al., 2015) , which is different from the copy mechanism used in (See et al., 2017) . Given these proxy sentences, which we treat as ground truth, and the sentences extracted using the pointer network, we train it to minimize the cross-entropy loss. Once sentences are extracted using the above methods, we fine-tune a T-5 based sequence-to-sequence model for abstraction. The T-5 architecture is pretrained on the C-4 dataset 2 using a denoising method similar to that of Bert (Devlin et al., 2018) to produce better results in language modelling. T-5 treats every task, including classification, summarisation, question answering, and translation, in a text-to-text format. The extracted sentences are taken as input and the ground-truth summary sentences are taken as output, and the model is trained using cross-entropy loss. Input and output are prepared using the T-5 tokenizer, which outputs input ids and attention masks for input and target. 
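As an illustration, a minimal sketch of this preparation step using the HuggingFace transformers library is shown below; the checkpoint name, sequence lengths, and the summarize: task prefix are illustrative assumptions rather than the exact settings used in our system.

```python
# Illustrative sketch only (not the exact training code): prepare extracted report
# sentences (source) and ground-truth summary sentences (target) for T-5 fine-tuning.
# Checkpoint name, lengths, and task prefix are assumed values.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

def prepare_batch(extracted_sentences, summary_sentences, max_src_len=512, max_tgt_len=60):
    # T-5 casts summarisation as text-to-text, so a task prefix is prepended to each input.
    sources = ["summarize: " + s for s in extracted_sentences]
    inputs = tokenizer(sources, max_length=max_src_len, padding="max_length",
                       truncation=True, return_tensors="pt")
    targets = tokenizer(summary_sentences, max_length=max_tgt_len, padding="max_length",
                        truncation=True, return_tensors="pt")
    labels = targets.input_ids.clone()
    # Padding positions are ignored by the cross-entropy loss when set to -100.
    labels[labels == tokenizer.pad_token_id] = -100
    return inputs.input_ids, inputs.attention_mask, labels
```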
These are then fed to the model for training.", "cite_spans": [ { "start": 546, "end": 579, "text": "(Hochreiter and Schmidhuber, 1997", "ref_id": "BIBREF10" }, { "start": 814, "end": 837, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF0" }, { "start": 861, "end": 883, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF26" }, { "start": 922, "end": 940, "text": "(See et al., 2017)", "ref_id": "BIBREF24" }, { "start": 1304, "end": 1325, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "Once these individual components have been trained, the final complete model is trained using a policy gradient algorithm, following a process similar to (Chen and Bansal, 2018) . At every extraction step the agent samples an action to extract a document sentence and receives a reward r(t + 1), which is the ROUGE-L F1 between the output of T-5 after abstraction and the ground-truth summary sentence.", "cite_spans": [ { "start": 150, "end": 173, "text": "(Chen and Bansal, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r(t+1) = \\text{ROUGE-L}_{F_1} ( \\text{abstraction} (d_{j_t}), s_t )", "eq_num": "(3)" } ], "section": "Model Description", "sec_num": "4.1" }, { "text": "The model is trained using the advantage actor-critic method to mitigate the bias incurred in REINFORCE (Williams, 1992) . The overall idea of our method is that proxy sentences are first extracted using ROUGE score maximisation, the extraction model is then trained to extract unique narrative sentences from the report, and these sentences are finally paraphrased using the T-5 model for abstraction to give concise yet informative sentences. Reinforcement learning helps to maximise the ROUGE score by rewarding good extracted sentences and penalising bad ones. ", "cite_spans": [ { "start": 95, "end": 111, "text": "(Williams, 1992)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "4.1" }, { "text": "During abstraction we limit the maximum number of sentences to 80, since there is a word limit of 1000 words and most of the reports' narratives can be summarised in fewer than 80 sentences, as seen in Section 3. Word2vec (Mikolov et al., 2013) embeddings are used to represent words in the extractor model. The vocabulary size is limited to 20000, the embedding size is 300, and the maximum number of words in a sentence is 60. The model is trained using the Adam optimizer with a learning rate of 0.001 and a decay rate of 0.5. The gradient norm is clipped at 1.0. The beam size is fixed to 2 in the T-5 network, with a repetition penalty of 2.0. ROUGE-LCS is used to optimize RL training. The abstractor and extractor networks are trained using cross-entropy loss. Training is done on Tesla-K80 12GB Colab GPUs with a batch size of 16 and a checkpoint frequency of 16 batches.", "cite_spans": [ { "start": 222, "end": 244, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "4.2" }, { "text": "In this section we present results from our experiments and compare them with different baselines: MUSE (Litvak et al., 2010) , Text-Rank (Mihalcea and Tarau, 2004) , Lex-Rank (Erkan and Radev, 2004) , and Polynomial Summarisation (Litvak and Vanetik, 2013) . 
We train three models in our experiments: PoinT-5 (Pointer Network with T-5), the Pointer Network alone, and pre-trained Bert for text summarisation. In Table 1, bold marks the highest value amongst all solutions for the task, including the baselines. Pre-trained Bert for summarisation is not fine-tuned for this specific task, hence its performance is not as good as the Pointer Network and PoinT-5 (Pointer Network + T-5). The PoinT-5 network gives the highest precision results on all the evaluation metrics ROUGE-1, 2, L, and SU4. From this it can be inferred that the generated summaries are highly precise in extracting narrative sentences and match the ground-truth summaries well. Recall is much lower than precision on most of the evaluated metrics, which means that the generated summaries do not cover all the information in the ground-truth summaries. This large difference between precision and recall is possibly due to the restriction imposed on the number of sentences during training to respect the word limit on summaries. Low recall with high precision shows that the generated summaries provide highly relevant information but do not cover the complete information in the ground-truth summaries. From the results it is evident that there is not much difference in performance between the Pointer Network and PoinT-5, so extraction played the key role in the architecture. PoinT-5 and the Pointer Network are the only systems to cross the MUSE baseline in ROUGE-L, by at least 5 points. These models give the highest F1 results on the ROUGE-L and ROUGE-1 metrics and the highest precision on ROUGE-L, 1, 2, and SU4.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Litvak et al., 2010)", "ref_id": "BIBREF16" }, { "start": 132, "end": 158, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF17" }, { "start": 170, "end": 193, "text": "(Erkan and Radev, 2004)", "ref_id": "BIBREF9" }, { "start": 225, "end": 251, "text": "(Litvak and Vanetik, 2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "In this work we present our solution on the Financial Narrative Summarisation (FNS 2020) dataset using the PoinT-5 method explained in Section 4. It is a combination of extractive and abstractive methods using a Pointer Network and T-5. With these methods we achieve the highest precision score on every evaluation metric and the highest F-1 scores on ROUGE-LCS and ROUGE-1. In future work we would like to address several limitations of our method, such as the factual correctness of summaries, which is very important in the financial domain, as done in (Zhang et al., 2019) for summarizing radiology reports. To improve the precision of our generated summaries while keeping them under 1000 words, we would formulate a penalty whenever the system generates more than 1000 words during training of the RL algorithm, rather than restricting the algorithm to a fixed number of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "6" }, { "text": "B Extended Related Work (Li et al., 2018) propose a training framework based on the actor-critic model. They apply an attention-based sequence-to-sequence model as the actor to conduct summary generation. For the critic, they combine the maximum likelihood estimator with a well-designed global summary quality estimator. (Nallapati et al., 2016) propose an RNN-based encoder-decoder model for abstractive summarisation. They apply a bi-directional GRU-RNN on the encoder side and a uni-directional GRU with attention in the decoder. 
In their approach, each mini-batch's decoder-vocabulary is restricted to words in the source documents of that batch. (Rush et al., 2015) also propose attention based sequence to sequence model for abstractive summarisation. (Paulus et al., 2017) states that attentional, RNN-based encoder-decoder models for abstractive summarisation have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. Hence they combine standard word prediction with the global sequence prediction training of RL which makes resulting summaries become more readable. (Narayan et al., 2018) propose a reinforcement learning-based sentence ranking approach in extractive summarisation. (Cheng and Lapata, 2016) use attention architecture for words and sentence level extraction in extractive summarisation method. (Yasunaga et al., 2017) proposes a multi-document summarization system that exploits the representational power of deep neural networks and the sentence relation information encoded in graph representations of document clusters. Specifically, they apply Graph Convolutional Networks on sentence relation graphs. (See et al., 2017) propose a novel \"Pointer Generator\" networks for abstractive summarisation. In this model a word is chosen with probability P gen from overall vocabulary and 1\u2212P gen from current sentence using attention weights. They apply coverage mechanism by (Tu et al., 2016) to avoid word repetitions in the summary which is common in large documents. (Chen and Bansal, 2018) apply combination abstractive and extractive summarisation by join training using reinforcement learning. They apply pointer network for extraction (different from Pointer Generator) and RNN based encoder-decoder for abstraction. (Hsu et al., 2018) propose unified extractive and abstractive with a hierarchical sentence and word level attention model using novel inconsistency loss. (Wang et al., 2019) used T-5 (Raffel et al., 2019) based sentence representation for combined extractive and abstractive training.", "cite_spans": [ { "start": 24, "end": 41, "text": "(Li et al., 2018)", "ref_id": "BIBREF13" }, { "start": 323, "end": 347, "text": "(Nallapati et al., 2016)", "ref_id": "BIBREF19" }, { "start": 642, "end": 661, "text": "(Rush et al., 2015)", "ref_id": "BIBREF23" }, { "start": 749, "end": 770, "text": "(Paulus et al., 2017)", "ref_id": "BIBREF21" }, { "start": 1183, "end": 1205, "text": "(Narayan et al., 2018)", "ref_id": "BIBREF20" }, { "start": 1300, "end": 1324, "text": "(Cheng and Lapata, 2016)", "ref_id": "BIBREF3" }, { "start": 1428, "end": 1451, "text": "(Yasunaga et al., 2017)", "ref_id": "BIBREF29" }, { "start": 1740, "end": 1758, "text": "(See et al., 2017)", "ref_id": "BIBREF24" }, { "start": 2005, "end": 2022, "text": "(Tu et al., 2016)", "ref_id": "BIBREF25" }, { "start": 2100, "end": 2123, "text": "(Chen and Bansal, 2018)", "ref_id": "BIBREF2" }, { "start": 2354, "end": 2372, "text": "(Hsu et al., 2018)", "ref_id": "BIBREF11" }, { "start": 2508, "end": 2527, "text": "(Wang et al., 2019)", "ref_id": "BIBREF27" }, { "start": 2537, "end": 2558, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "6" }, { "text": "Financial Narrative Summarisation has been explored in the past by (Cardinaels et al., 2018) . 
They provide multiple evidence that algorithm-based summaries are less positively biased than management summaries.", "cite_spans": [ { "start": 67, "end": 92, "text": "(Cardinaels et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "6" }, { "text": "https://www.tensorflow.org/datasets/catalog/c4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "A Data Analysis Table 2 presented total summaries and annual reports in train, validation, and test set. During our analysis, we found that most of the annual reports contain 100-200 sentences. There are 279 summaries with more than 500 sentences, which is large. Whereas in summaries, the average number of sentences is 50. Therefore summaries are one forth on average of annual reports and up to one-tenth in many cases. We present these analysis in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 2", "ref_id": null }, { "start": 452, "end": 460, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Appendices", "sec_num": null }, { "text": "Training Validation Testing Report full text 3,000 363 500 Gold Summaries 9,873 1,250 1673 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Type", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic summaries of earnings releases: Attributes and effects on investors' judgments", "authors": [ { "first": "Eddy", "middle": [], "last": "Cardinaels", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Hollander", "suffix": "" }, { "first": "Brian", "middle": [ "J" ], "last": "White", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eddy Cardinaels, Stephan Hollander, and Brian J White. 2018. Automatic summaries of earnings releases: At- tributes and effects on investors' judgments. Available at SSRN 2904384.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fast abstractive summarization with reinforce-selected sentence rewriting", "authors": [ { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.11080" ] }, "num": null, "urls": [], "raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewrit- ing. 
arXiv preprint arXiv:1805.11080.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural summarization by extracting sentences and words", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.07252" ] }, "num": null, "urls": [], "raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. arXiv preprint arXiv:1603.07252.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Detecting document structure in a very large corpus of uk financial reports", "authors": [ { "first": "Mahmoud", "middle": [], "last": "El-Haj", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Young", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Paul Rayson, Steven Young, and Martin Walker. 2014. Detecting document structure in a very large corpus of uk financial reports.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The first financial narrative processing workshop (fnp 2018)", "authors": [ { "first": "Mahmoud", "middle": [], "last": "El-Haj", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the LREC 2018 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Paul Rayson, and Andrew Moore. 2018. The first financial narrative processing workshop (fnp 2018). In Proceedings of the LREC 2018 Workshop.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multilingual financial narrative processing: Analysing annual reports in english, spanish and portuguese", "authors": [ { "first": "Mahmoud", "middle": [], "last": "El-Haj", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "Paulo", "middle": [], "last": "Alves", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Herrero-Zorita", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Young", "suffix": "" } ], "year": 2019, "venue": "Multilingual Text Analysis: Challenges, Models, And Approaches", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Paul Rayson, Paulo Alves, Carlos Herrero-Zorita, and Steven Young. 2019. 
Multilingual financial narrative processing: Analysing annual reports in english, spanish and portuguese. Multilingual Text Analysis: Challenges, Models, And Approaches, page 441.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Financial Narrative Summarisation Shared Task (FNS 2020)", "authors": [], "year": 2020, "venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahmoud El-Haj, Ahmed AbuRa'ed, Nikiforos Pittaras, and George Giannakopoulos. 2020. The Financial Nar- rative Summarisation Shared Task (FNS 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020, Barcelona, Spain.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "G\u00fcnes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of artificial intelligence research", "volume": "22", "issue": "", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summa- rization. Journal of artificial intelligence research, 22:457-479.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A unified model for extractive and abstractive summarization using inconsistency loss", "authors": [ { "first": "Wan-Ting", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Chieh-Kai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Ming-Ying", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kerui", "middle": [], "last": "Min", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.06266" ] }, "num": null, "urls": [], "raw_text": "Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. arXiv preprint arXiv:1805.06266.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using hidden markov modeling to decompose human-written summaries", "authors": [ { "first": "Hongyan", "middle": [], "last": "Jing", "suffix": "" } ], "year": 2002, "venue": "Computational linguistics", "volume": "28", "issue": "4", "pages": "527--543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyan Jing. 2002. Using hidden markov modeling to decompose human-written summaries. 
Computational linguistics, 28(4):527-543.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Actor-critic based training framework for abstractive summarization", "authors": [ { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11070" ] }, "num": null, "urls": [], "raw_text": "Piji Li, Lidong Bing, and Wai Lam. 2018. Actor-critic based training framework for abstractive summarization. arXiv preprint arXiv:1803.11070.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text summarization branches out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mining the gaps: Towards polynomial summarization", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Vanetik", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "655--660", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak and Natalia Vanetik. 2013. Mining the gaps: Towards polynomial summarization. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 655-660.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A new approach to improving multilingual summarization using a genetic algorithm", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Last", "suffix": "" }, { "first": "Menahem", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "927--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010. A new approach to improving multilingual summariza- tion using a genetic algorithm. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 927-936.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Textrank: Bringing order into text", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. 
In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404-411.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.06023" ] }, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Ranking sentences for extractive summarization with reinforcement learning", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.08636" ] }, "num": null, "urls": [], "raw_text": "Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. arXiv preprint arXiv:1802.08636.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A deep reinforced model for abstractive summarization", "authors": [ { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.04304" ] }, "num": null, "urls": [], "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summariza- tion. 
arXiv preprint arXiv:1705.04304.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Sumit", "middle": [], "last": "Alexander M Rush", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1509.00685" ] }, "num": null, "urls": [], "raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Get to the point: Summarization with pointergenerator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04368" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. arXiv preprint arXiv:1704.04368.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Modeling coverage for neural machine translation", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1601.04811" ] }, "num": null, "urls": [], "raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. 
arXiv preprint arXiv:1601.04811.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2692--2700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural information processing systems, pages 2692-2700.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A text abstraction summary model based on bert word embedding and reinforcement learning", "authors": [ { "first": "Qicai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peiyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhenfang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Hongxia", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Qiuyue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lindong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Applied Sciences", "volume": "9", "issue": "21", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qicai Wang, Peiyu Liu, Zhenfang Zhu, Hongxia Yin, Qiuyue Zhang, and Lindong Zhang. 2019. A text abstraction summary model based on bert word embedding and reinforcement learning. Applied Sciences, 9(21):4701.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "authors": [ { "first": "J", "middle": [], "last": "Ronald", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Machine learning", "volume": "8", "issue": "3-4", "pages": "229--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learn- ing. Machine learning, 8(3-4):229-256.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Graph-based neural multi-document summarization", "authors": [ { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kshitijh", "middle": [], "last": "Meelu", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Krishnan", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.06681" ] }, "num": null, "urls": [], "raw_text": "Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. 
arXiv preprint arXiv:1706.06681.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Optimizing the factual correctness of a summary: A study of summarizing radiology reports", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Merck", "suffix": "" }, { "first": "Emily", "middle": [ "Bao" ], "last": "Tsai", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Curtis", "middle": [ "P" ], "last": "Langlotz", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02541" ] }, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D Manning, and Curtis P Langlotz. 2019. Op- timizing the factual correctness of a summary: A study of summarizing radiology reports. arXiv preprint arXiv:1911.02541.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Complete method diagram", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "
Metrics | Text-Rank | Lex-Rank | Polynomial | MUSE | Pre-trained Bert | Pointer Net | PoinT-5
Prec (R-L) | 0.235 | 0.210 | 0.260 | 0.470 | 0.213 | 0.603 * | 0.605
Recall (R-L) | 0.197 | 0.263 | 0.177 | 0.370 | 0.254 | 0.377 | 0.377
F-1 (R-L) | 0.206 | 0.218 | 0.205 | 0.407 | 0.225 | 0.455 * | 0.456
Prec (R-1) | 0.414 | 0.337 | 0.324 | 0.483 | 0.241 | 0.611 * | 0.612
Recall (R-1) | 0.118 | 0.269 | 0.253 | 0.413 | 0.378 | 0.392 | 0.393
F-1 (R-1) | 0.172 | 0.264 | 0.274 | 0.433 | 0.283 | 0.465 * | 0.466
Prec (R-2) | 0.229 | 0.193 | 0.147 | 0.311 | 0.114 | 0.448 * | 0.451
Recall (R-2) | 0.044 | 0.107 | 0.088 | 0.198 | 0.138 | 0.220 | 0.222
F-1 (R-2) | 0.070 | 0.120 | 0.105 | 0.234 | 0.118 | 0.289 | 0.289
Prec (R-SU4) | 0.302 | 0.253 | 0.213 | 0.375 | 0.165 | 0.506 * | 0.508
Recall (R-SU4) | 0.048 | 0.117 | 0.105 | 0.201 | 0.149 | 0.208 | 0.209
F-1 (R-SU4) | 0.079 | 0.140 | 0.135 | 0.253 | 0.149 | 0.286 | 0.288
Table 1: ROUGE Evaluation on Financial Narrative Summarisation data (Bold represents highest overall,
Prec represents Precision and * represents second-highest overall)
", "text": "", "num": null, "html": null, "type_str": "table" } } } }