{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:13:31.552392Z" }, "title": "Identifying Implicit Quotes for Unsupervised Extractive Summarization of Conversations", "authors": [ { "first": "Ryuji", "middle": [], "last": "Kano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "kano.ryuji@fujixerox.co.jp" }, { "first": "Yasuhide", "middle": [], "last": "Miura", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "yasuhide.miura@fujixerox.co.jp" }, { "first": "Tomoki", "middle": [], "last": "Taniguchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "taniguchi.tomoki@fujixerox.co.jp" }, { "first": "Tomoko", "middle": [], "last": "Ohkuma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "ohkuma.tomoko@fujixerox.co.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose Implicit Quote Extractor, an end-to-end unsupervised extractive neural summarization model for conversational texts. When we reply to posts, quotes are used to highlight important parts of texts. We aim to extract quoted sentences as summaries. Most replies do not explicitly include quotes, so it is difficult to use quotes as supervision. However, even if it is not explicitly shown, replies always refer to certain parts of texts; we call them implicit quotes. 
Implicit Quote Extractor aims to extract implicit quotes as summaries. The training task of the model is to predict whether a reply candidate is a true reply to a post. For prediction, the model has to choose a few sentences from the post. To predict accurately, the model learns to extract sentences that replies frequently refer to. We evaluate our model on two email datasets and one social media dataset, and confirm that our model is useful for extractive summarization. We further discuss two topics: one is whether quote extraction is an important factor for summarization, and the other is whether our model can capture salient sentences that conventional methods cannot.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We propose Implicit Quote Extractor, an end-to-end unsupervised extractive neural summarization model for conversational texts. When we reply to posts, quotes are used to highlight important parts of texts. We aim to extract quoted sentences as summaries. Most replies do not explicitly include quotes, so it is difficult to use quotes as supervision. However, even if it is not explicitly shown, replies always refer to certain parts of texts; we call them implicit quotes. Implicit Quote Extractor aims to extract implicit quotes as summaries. The training task of the model is to predict whether a reply candidate is a true reply to a post. For prediction, the model has to choose a few sentences from the post. To predict accurately, the model learns to extract sentences that replies frequently refer to. We evaluate our model on two email datasets and one social media dataset, and confirm that our model is useful for extractive summarization. 
We further discuss two topics: one is whether quote extraction is an important factor for summarization, and the other is whether our model can capture salient sentences that conventional methods cannot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As the amount of information exchanged via online conversations is growing rapidly, automated summarization of conversations is in demand. Neural-network-based models have achieved great performance on supervised summarization, but their application to unsupervised summarization has not been sufficiently explored. Supervised summarization requires tens of thousands of human-annotated summaries. Because it is not realistic to prepare such large datasets for every domain, there is a growing need for unsupervised methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous research proposed diverse methods of unsupervised summarization. Graph-centrality Figure 1 : Example of a post, a reply with a quote, and a reply with no quote. The implicit quote is the part of the post that the reply refers to, but it is not explicitly shown in the reply.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "based on the similarity of sentences (Mihalcea and Tarau, 2004; Erkan and Radev, 2004; Zheng and Lapata, 2019) has long been a strong feature for unsupervised summarization, and is also used to summarize conversations (Mehdad et al., 2014; Shang et al., 2018) . 
Apart from centrality, centroid of vectors (Gholipour Ghalandari, 2017), Kullback-Leibler divergence (Haghighi and Vanderwende, 2009) , reconstruction loss (He et al., 2012; Liu et al., 2015; Ma et al., 2016) , and path scores of word graphs (Mehdad et al., 2014; Shang et al., 2018) , are leveraged for summarization.", "cite_spans": [ { "start": 37, "end": 63, "text": "(Mihalcea and Tarau, 2004;", "ref_id": "BIBREF24" }, { "start": 64, "end": 86, "text": "Erkan and Radev, 2004;", "ref_id": "BIBREF5" }, { "start": 87, "end": 110, "text": "Zheng and Lapata, 2019)", "ref_id": "BIBREF29" }, { "start": 218, "end": 239, "text": "(Mehdad et al., 2014;", "ref_id": "BIBREF23" }, { "start": 240, "end": 259, "text": "Shang et al., 2018)", "ref_id": "BIBREF27" }, { "start": 363, "end": 395, "text": "(Haghighi and Vanderwende, 2009)", "ref_id": "BIBREF10" }, { "start": 418, "end": 435, "text": "(He et al., 2012;", "ref_id": "BIBREF12" }, { "start": 436, "end": 453, "text": "Liu et al., 2015;", "ref_id": "BIBREF20" }, { "start": 454, "end": 470, "text": "Ma et al., 2016)", "ref_id": "BIBREF22" }, { "start": 504, "end": 525, "text": "(Mehdad et al., 2014;", "ref_id": "BIBREF23" }, { "start": 526, "end": 545, "text": "Shang et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The premise of these methods is that important topics appear frequently in a document. Therefore, if important topics appear only a few times, these methods fail to capture salient sentences. For more accurate summarization, relying solely on the frequency is not sufficient and we need to focus on other aspects of texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As an alternative aspect, we propose \"the probability of being quoted\". 
When one replies to an email or a post, a quote is used to highlight the important parts of the text; an example is shown in Figure 1 . The reply on the bottom includes a quote, which generally starts with the symbol \">\". If we can predict quoted parts, we can extract important sentences irrespective of how frequently the same topic appears in the text. Thus, we aim to extract quotes as summaries.", "cite_spans": [], "ref_spans": [ { "start": 197, "end": 205, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous research assigned weights to words that appear in quotes, and improved the centroid-based summarization (Carenini et al., 2007; Oya and Carenini, 2014) . However, most replies do not include quotes, so it is difficult to use quotes as the training labels of neural models. We propose a model that can be trained without explicit labels of quotes. The model is Implicit Quote Extractor (IQE). As shown in Figure 1 , implicit quotes are sentences of posts that are not explicitly quoted in replies, but are those the replies most likely refer to. The aim of our model is to extract these implicit quotes for extractive summarization.", "cite_spans": [ { "start": 112, "end": 135, "text": "(Carenini et al., 2007;", "ref_id": "BIBREF3" }, { "start": 136, "end": 159, "text": "Oya and Carenini, 2014)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 412, "end": 420, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use pairs of a post and a reply candidate to train the model. The training task of the model is to predict if a reply candidate is an actual reply to the post. IQE extracts a few sentences of the post as a feature for prediction. To predict accurately, IQE has to extract sentences that replies frequently refer to. Summaries should not depend on replies, so IQE does not use reply features to extract sentences. 
The model requires replies only during training, not during evaluation. We evaluate our model on two Enron mail datasets (Loza et al., 2014) , corporate and personal mails, and verify that our model outperforms baseline models. We also evaluate our model on the Reddit TIFU dataset (Kim et al., 2019) and achieve results competitive with those of the baseline models.", "cite_spans": [ { "start": 551, "end": 570, "text": "(Loza et al., 2014)", "ref_id": "BIBREF21" }, { "start": 710, "end": 728, "text": "(Kim et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our model is based on the hypothesis that the ability to extract quotes leads to good summarization results. Using the Reddit dataset, where quotes are abundant, we obtain results that support the hypothesis. Furthermore, we show both quantitatively and qualitatively that our model can capture salient sentences that conventional frequency-based methods cannot. 
The contributions of our research are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We verified that \"the probability of being quoted\" is useful for summarization, and demonstrated that it reflects an important aspect of saliency that conventional methods do not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We proposed an unsupervised extractive neural summarization model, Implicit Quote Extractor (IQE), and demonstrated that the model outperformed or achieved results competitive with baseline models on two mail datasets and a Reddit dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Using the Reddit dataset, we verified that quote extraction leads to high summarization performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Summarization methods can be roughly grouped into two approaches: extractive summarization and abstractive summarization. Most proposed unsupervised summarization methods are extractive. Despite the rise of neural networks, conventional non-neural methods are still powerful in the field of unsupervised extractive summarization. The graph-centrality-based method (Mihalcea and Tarau, 2004; Erkan and Radev, 2004; Zheng and Lapata, 2019) and the centroid-based method (Gholipour Ghalandari, 2017) have been major methods in this field. Other models use reconstruction loss (He et al., 2012; Liu et al., 2015; Ma et al., 2016) , Kullback-Leibler divergence (Haghighi and Vanderwende, 2009) or path score calculation (Mehdad et al., 2014; Shang et al., 2018) based on a multi-sentence compression algorithm (Filippova, 2010) . 
These methods assume that important topics appear frequently in a document, but our model focuses on a different aspect of texts: the probability of being quoted. That is, our model can extract salient sentences that conventional methods fail to.", "cite_spans": [ { "start": 369, "end": 395, "text": "(Mihalcea and Tarau, 2004;", "ref_id": "BIBREF24" }, { "start": 396, "end": 418, "text": "Erkan and Radev, 2004;", "ref_id": "BIBREF5" }, { "start": 419, "end": 442, "text": "Zheng and Lapata, 2019)", "ref_id": "BIBREF29" }, { "start": 574, "end": 591, "text": "(He et al., 2012;", "ref_id": "BIBREF12" }, { "start": 592, "end": 609, "text": "Liu et al., 2015;", "ref_id": "BIBREF20" }, { "start": 610, "end": 626, "text": "Ma et al., 2016)", "ref_id": "BIBREF22" }, { "start": 657, "end": 689, "text": "(Haghighi and Vanderwende, 2009)", "ref_id": "BIBREF10" }, { "start": 716, "end": 737, "text": "(Mehdad et al., 2014;", "ref_id": "BIBREF23" }, { "start": 738, "end": 757, "text": "Shang et al., 2018)", "ref_id": "BIBREF27" }, { "start": 804, "end": 821, "text": "(Filippova, 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "A few neural-network-based unsupervised extractive summarization methods were proposed (K\u00e5geb\u00e4ck et al., 2014; Yin and Pei, 2015; Ma et al., 2016) . However, these methods use pretrained neural network models as a feature extractor, whereas we propose an end-to-end neural extractive summarization model.", "cite_spans": [ { "start": 87, "end": 110, "text": "(K\u00e5geb\u00e4ck et al., 2014;", "ref_id": "BIBREF15" }, { "start": 111, "end": 129, "text": "Yin and Pei, 2015;", "ref_id": "BIBREF28" }, { "start": 130, "end": 146, "text": "Ma et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "As for end-to-end unsupervised neural models, a few abstractive models have been proposed. 
For sentence compression, Fevry and Phang (2018) employed the task of reordering shuffled words in sentences. Baziotis et al. (2019) employed the task of reconstructing the original sentence from a compressed one. For review abstractive summarization, Isonuma et al. (2019) Figure 2 : Description of our model, Implicit Quote Extractor (IQE). The Extractor extracts sentences and uses them as summaries. k and j are indices of the extracted sentences.", "cite_spans": [ { "start": 207, "end": 229, "text": "Baziotis et al. (2019)", "ref_id": "BIBREF1" }, { "start": 349, "end": 370, "text": "Isonuma et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 371, "end": 379, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "(2019) generated summaries from mean vectors of review vectors, and Amplayo and Lapata (2020) employed the prior distribution of a Variational Auto-Encoder to induce summaries. Another study employed the task of reconstructing masked sentences for summarization (Laban et al., 2020) .", "cite_spans": [ { "start": 258, "end": 278, "text": "(Laban et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Research on the summarization of online conversations such as mail, chat, social media, and online discussion fora has been conducted for a long time. Despite the rise of neural summarization models, most research on conversation summarization is based on non-neural models. 
A few used path scores of word graphs (Mehdad et al., 2014; Shang et al., 2018) . Dialogue act classification, which classifies sentences according to their functions (e.g., questions, answers, greetings), has also been applied to summarization (Bhatia et al., 2014; Oya and Carenini, 2014) .", "cite_spans": [ { "start": 313, "end": 334, "text": "(Mehdad et al., 2014;", "ref_id": "BIBREF23" }, { "start": 335, "end": 354, "text": "Shang et al., 2018)", "ref_id": "BIBREF27" }, { "start": 557, "end": 578, "text": "(Bhatia et al., 2014;", "ref_id": "BIBREF2" }, { "start": 579, "end": 602, "text": "Oya and Carenini, 2014)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Quotes are also important factors of summarization. When we reply to a post or an email and want to emphasize a certain part of it, we quote the original text. A few studies used these quotes as features for summarization. Some previous work (Carenini et al., 2007; Oya and Carenini, 2014) assigned weights to words that appeared in quotes, and improved the conventional centroid-based methods. The previous research used quotes as auxiliary features. In our research, we solely focus on quotes, and do not directly use quotes as supervision; rather, we aim to extract implicit quotes.", "cite_spans": [ { "start": 250, "end": 273, "text": "(Carenini et al., 2007;", "ref_id": "BIBREF3" }, { "start": 274, "end": 297, "text": "Oya and Carenini, 2014)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "We propose Implicit Quote Extractor (IQE), an unsupervised extractive summarization model. Figure 2 shows the structure of the model. The inputs to the model during training are a post and a reply candidate. A reply candidate can be either a true or a false reply to the post. 
The training task of the model is to predict whether a reply candidate is a true reply or not.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 100, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The model comprises an Encoder, an Extractor, and a Predictor. The Encoder computes features of posts, the Extractor extracts sentences of a post to use for prediction, and the Predictor predicts whether a reply candidate is an actual reply or not. We describe each component below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Encoder The Encoder computes features of posts. First, the post is split into N sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "\\{s^p_1, s^p_2, ..., s^p_N\\}. Each sentence s^p_i comprises K_i words W^p_i = \\{w^p_{i1}, w^p_{i2}, ..., w^p_{iK_i}\\}. Words are embedded into continuous vectors X^p_i = \\{x^p_{i1}, x^p_{i2}, ..., x^p_{iK_i}\\} through word embedding layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "We compute the features of each sentence h^p_i by inputting the embedded vectors into a Bidirectional Long Short-Term Memory (BiLSTM) and concatenating the last two hidden states:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h^p_i = BiLSTM(X^p_i)", "eq_num": "(1)" } ], "section": "Model", "sec_num": "3" }, { "text": "Extractor The Extractor extracts a few sentences of a post for prediction. For accurate prediction, the Extractor learns to extract sentences that replies frequently refer to. Note that the Extractor does not use reply features for extraction. This is because summaries should not depend on replies. 
IQE requires replies only during training and can induce summaries without replies during evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "We employ an LSTM to sequentially compute features on the Extractor. We set the mean vector of the sentence features of the Encoder h^p_i as the initial hidden state of the Extractor h^{ext}_0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "h^{ext}_0 = \\\\frac{1}{N} \\\\sum_{i=1}^{N} h^p_i \\\\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The Extractor computes attention weights using the hidden states of the Extractor h^{ext}_t and the sentence features h^p_i computed on the Encoder. The sentence with the highest attention weight is extracted. During training, we use Gumbel Softmax (Jang et al., 2017) to make this discrete process differentiable. By adding Gumbel noise g, derived from noise u sampled from a uniform distribution, the attention weights a become an approximately one-hot vector. 
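The Gumbel-Softmax selection described above can be sketched in NumPy as follows. This is an illustrative re-implementation, not the authors' code; `sent_feats`, `h_ext`, and the scoring vector `c` stand in for learned model tensors, and a framework version would additionally propagate gradients (e.g., with a straight-through estimator).

```python
import numpy as np

def attention_scores(h_ext, sent_feats, c):
    # Unnormalized scores a_ti = c^T tanh(h_ext + h_i) for each sentence
    # feature h_i of the post (sent_feats has shape (N, d)).
    return np.tanh(h_ext[None, :] + sent_feats) @ c

def gumbel_softmax_select(scores, tau=0.1, rng=None):
    # Add Gumbel noise g_i = -log(-log u_i), u_i ~ Uniform(0, 1), to the
    # log-softmax of the scores, then apply a softmax with a low
    # temperature tau: the result is close to a one-hot vector, so one
    # sentence is 'extracted' while the operation stays differentiable.
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=scores.shape)
    g = -np.log(-np.log(u))
    m = scores.max()
    log_pi = scores - (m + np.log(np.exp(scores - m).sum()))
    z = (log_pi + g) / tau
    z = z - z.max()  # numerical stability before exponentiating
    return np.exp(z) / np.exp(z).sum()
```

With tau = 0.1, as in the paper, the returned weights are effectively one-hot, and the weighted sum of sentence features reduces to picking a single sentence vector.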
The discretized attention weights \u03b1 are computed as follows:", "cite_spans": [ { "start": 251, "end": 270, "text": "(Jang et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u_i \\\\sim Uniform(0, 1) \\\\quad (3) \\\\qquad g_i = -\\\\log(-\\\\log u_i)", "eq_num": "(4)" } ], "section": "Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_{ti} = c^T \\\\tanh(h^{ext}_t + h^p_i)", "eq_num": "(5)" } ], "section": "Model", "sec_num": "3" }, { "text": "\\\\pi_{ti} = \\\\frac{\\\\exp a_{ti}}{\\\\sum_{k=1}^{N} \\\\exp a_{tk}} \\\\quad (6) \\\\qquad \\\\alpha_{ti} = \\\\frac{\\\\exp((\\\\log \\\\pi_{ti} + g_i)/\\\\tau)}{\\\\sum_{k=1}^{N} \\\\exp((\\\\log \\\\pi_{tk} + g_k)/\\\\tau)} \\\\quad (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "c is a parameter vector, and the temperature \u03c4 is set to 0.1. We input the linear sum of the attention weights \u03b1 and the sentence vectors h^p_i to the LSTM and update the hidden state of the Extractor. We repeat this step L times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "x^{ext}_t = \\\\sum_{i=1}^{N} \\\\alpha_{ti} h^p_i \\\\quad (1 \\\\le t \\\\le L) \\\\quad (8) \\\\qquad h^{ext}_{t+1} = LSTM(x^{ext}_t) \\\\quad (0 \\\\le t \\\\le L-1) \\\\quad (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The initial input vector x^{ext}_0 of the Extractor is a parameter, and L is defined by the user depending on the number of sentences required for a summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Predictor Then, using only the extracted sentences and a reply candidate, the Predictor predicts whether the candidate is an actual reply or not. 
We labeled actual replies as positive, and replies randomly sampled from the whole dataset as negative. Suppose a reply candidate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "R = \\{s^r_1, s^r_2, ..., s^r_M\\} has M sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Sentence vectors h^r_j of each sentence s^r_j of the reply are computed similarly to Equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "To compute the relation between the post and the reply candidate, we employ Decomposable Attention (Parikh et al., 2016) .", "cite_spans": [ { "start": 99, "end": 120, "text": "(Parikh et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "From this architecture, we obtain the binary classification probability y through the sigmoid function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = sigmoid(DA(x^{ext}_1, ..., x^{ext}_{L-1}, h^r_1, ..., h^r_M))", "eq_num": "(10)" } ], "section": "Model", "sec_num": "3" }, { "text": "where DA denotes Decomposable Attention. The details of the computation are described in Appendix A.1 (Decomposable Attention). 
The loss of this classification, L_{rep}, is obtained by cross entropy as follows, where t_{rep} is 1 when a reply candidate is an actual reply, and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{rep} = -t_{rep} \\\\log y - (1 - t_{rep}) \\\\log(1 - y)", "eq_num": "(11)" } ], "section": "Model", "sec_num": "3" }, { "text": "Reranking As we mentioned in the Introduction, we seek a criterion that differs from those of conventional methods. To take advantage of both our method and conventional methods, we employ reranking; we simply reorder the summaries (3 sentences) extracted by our model based on the ranking of TextRank (Mihalcea and Tarau, 2004) .", "cite_spans": [ { "start": 300, "end": 326, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "We train and evaluate the model on two domains of datasets. One is a mail dataset, and the other is a dataset from the social media platform Reddit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "We use the Avocado collection 1 for training. The Avocado collection is a public dataset that comprises emails obtained from 279 custodians of a defunct information technology company. From this dataset, we use post-and-reply pairs to train our model. We exclude pairs where the number of words in a post or a reply is smaller than 50 or 25, respectively. After the preprocessing, we have 56,174 pairs. We labeled a pair with an actual reply as positive and a pair with a wrong reply that is randomly sampled from the whole dataset as negative. The numbers of positive and negative labels are equal. Therefore, we have 112,348 pairs in total. 
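The pair construction described above can be sketched as follows. This is a simplified illustration (function and variable names are ours, not the authors'), assuming the length filtering has already been applied:

```python
import random

def build_training_pairs(post_reply_pairs, seed=0):
    # Each true (post, reply) pair becomes a positive example (label 1);
    # each post is also paired with a reply sampled at random from the
    # whole dataset as a negative example (label 0), so the result is
    # twice the size of the input and perfectly balanced.
    rng = random.Random(seed)
    all_replies = [reply for _, reply in post_reply_pairs]
    examples = []
    for post, reply in post_reply_pairs:
        examples.append((post, reply, 1))
        wrong = rng.choice(all_replies)
        while wrong == reply:  # re-sample if we drew the true reply
            wrong = rng.choice(all_replies)
        examples.append((post, wrong, 0))
    return examples
```

One negative per positive keeps the binary prediction task balanced, matching the equal label counts reported above.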
For evaluation, we employ the Enron Summarization dataset (Loza et al., 2014) . This dataset has two types of evaluation datasets: ECS (Enron Corporate Single) and EPS (Enron Personal Single). An overview of these datasets is summarized in Table 1 . Because the evaluation datasets do not have validation datasets, we use the ECS dataset as a validation dataset for the EPS dataset, and vice versa. We use the validation datasets to decide which model to use for the evaluation.", "cite_spans": [ { "start": 693, "end": 712, "text": "(Loza et al., 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 875, "end": 882, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Mail Dataset", "sec_num": "4.1" }, { "text": "The Reddit TIFU dataset (Kim et al., 2019 ) is a dataset that leverages tldr (\"too long didn't read\") tags for the summarization task. On the discussion forum Reddit TIFU, users post a tldr along with the post. A tldr briefly explains what is written in the original post and thus can be regarded as a summary. We preprocess the TIFU dataset similarly to the mail datasets. Because the TIFU dataset does not include replies, we collected replies to the posts included in the TIFU dataset using praw 2 . As a consequence, we obtained 183,500 correct pairs of posts and replies and the same number of wrong pairs. We use these 367,000 pairs of posts and replies as the training dataset. We use 3,000 posts and tldrs that are not included in the training dataset as the validation dataset, and the same number of posts and tldrs as the evaluation dataset. 
An overview of the TIFU evaluation dataset is also summarized in Table 1 .", "cite_spans": [ { "start": 24, "end": 41, "text": "(Kim et al., 2019", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 943, "end": 950, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Reddit TIFU Dataset", "sec_num": "4.2" }, { "text": "The dimensions of the embedding layers and hidden layers of the LSTM are 100. The size of the vocabulary is set to 30,000. We tokenize each email or post into sentences and each sentence into words using the nltk tokenizer 3 . The upper limit of the number of sentences is set to 30, and that of words in each sentence is set to 200. The epoch size is 10, and we use Adam (Kingma and Ba, 2015) as an optimizer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.3" }, { "text": "In the first few epochs, we do not use the Extractor; all the post sentences are used for the prediction of post-reply relations. This is to train the Extractor and the Predictor efficiently. The Extractor learns to extract proper sentences and the Predictor learns to predict the relation between a post and a reply candidate. Models with several components generally achieve better results if each component is pretrained separately (Hashimoto et al., 2017) . Thus, we train the Predictor in the first few epochs before training the Extractor. We set this threshold as 4.", "cite_spans": [ { "start": 435, "end": 459, "text": "(Hashimoto et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.3" }, { "text": "During training, L, the number of sentences the Extractor extracts is randomly set from 1 to 4, so that the model can extract an arbitrary number of sentences. 
We replace the named entities in the text data with tags (person, location, and organization) using the Stanford Named Entity Recognizer (NER) 4 , to prevent the model from simply using named entities as a hint for the prediction. We pretrain the word embeddings of the model with Skip-gram, using the same data as for training. We conduct the same experiment five times and use the average of the results to mitigate the effect of randomness rooted in initialization and optimization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.3" }, { "text": "In the evaluation phase, we only use the Encoder and Extractor and do not use the Predictor. Each model extracts 3 sentences as a summary. Following previous work, we report the average F1 of ROUGE-1, ROUGE-2, and ROUGE-L for the evaluation (Lin, 2004) . We use the first 20, 40, and 60 words of the extracted sentences. For ROUGE computation, we use ROUGE 2.0 (Ganesan, 2015) . As a validation metric, we use the average of ROUGE-1-F, ROUGE-2-F, and ROUGE-L-F.", "cite_spans": [ { "start": 241, "end": 252, "text": "(Lin, 2004)", "ref_id": "BIBREF19" }, { "start": 361, "end": 376, "text": "(Ganesan, 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "As baseline models, we employ TextRank (Mihalcea and Tarau, 2004), LexRank (Erkan and Radev, 2004) , KLSum (Haghighi and Vanderwende, 2009) , PacSum (Zheng and Lapata, 2019) , and Lead. PacSum is an improved model of TextRank, which harnesses the position of sentences as a feature. KLSum employs the Kullback-Leibler divergence to constrain extracted sentences and the source text to have similar word distributions. Lead is a simple method that extracts the first few sentences from the source text but is considered a strong baseline for the summarization of news articles. PacSum and LexRank leverage idf. We compute idf using the validation data. 
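The idf values here can be computed from the tokenized validation documents in the standard way; this is a generic sketch, and the exact smoothing used by particular LexRank or PacSum implementations may differ:

```python
import math
from collections import Counter

def compute_idf(tokenized_docs):
    # idf(w) = log(N / df(w)), where df(w) is the number of documents
    # that contain the word w at least once.
    n_docs = len(tokenized_docs)
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))  # count each word once per document
    return {w: math.log(n_docs / df[w]) for w in df}
```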
As another baseline, we employ IQE-TextRank, a TextRank model that leverages the cosine similarities of sentence vectors from IQE's Encoder as similarities between sentences. This baseline is added to verify that the success of our model does not stem only from its use of neural networks.", "cite_spans": [ { "start": 75, "end": 98, "text": "(Erkan and Radev, 2004)", "ref_id": "BIBREF5" }, { "start": 107, "end": 139, "text": "(Haghighi and Vanderwende, 2009)", "ref_id": "BIBREF10" }, { "start": 149, "end": 173, "text": "(Zheng and Lapata, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.5" }, { "text": "Experimental results for each evaluation dataset are listed in Table 2 , 3, and 4. Our model outperforms baseline models on the mail datasets (ECS and EPS) in most metrics. On the Reddit TIFU dataset, IQE with reranking outperforms most baseline models except TextRank. Reranking improves the accuracy on ECS and TIFU but not on EPS. PacSum significantly outperformed TextRank on the news article dataset (Zheng and Lapata, 2019) but does not work well on our datasets, where the sentence position is not an important factor. IQE-TextRank performed worse than IQE on the mail datasets. This indicates that the performance of our model does not result merely from the use of neural networks.", "cite_spans": [ { "start": 400, "end": 424, "text": "(Zheng and Lapata, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "Our model outperforms the baseline models more on the EPS dataset than on the ECS dataset. The overview of the datasets in Table 1 explains the reason. The average number of words per sentence is smaller in EPS. Baseline models such as LexRank and TextRank compute the similarity of sentences using the co-occurrence of words. 
Thus, if sentences are short, these models fail to build decent co-occurrence networks and to capture the saliency of the sentences. IQE did not outperform TextRank on the TIFU dataset. It is conceivable that Reddit users are less likely to refer to the important topics of a post, given that anyone can reply.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "Our model performed well on the mail datasets, but two questions remain. First, because we did not use quotes as supervision, it is not clear how well our model extracts quotes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "Table 6 : ROUGE scores of extracted sentences that coincide with quotes (IQEquote) and those that do not (IQEnonquote). The ROUGE scores are higher when IQE succeeds in extracting quotes.", "cite_spans": [], "ref_spans": [ { "start": 203, "end": 210, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "Second, following Carenini's work (Carenini et al., 2007; Oya and Carenini, 2014) , we assumed quotes were useful for summarization, but it is not clear whether quote extraction leads to better summarization results. 
To answer these questions, we conduct two experiments.", "cite_spans": [ { "start": 16, "end": 39, "text": "(Carenini et al., 2007;", "ref_id": "BIBREF3" }, { "start": 40, "end": 63, "text": "Oya and Carenini, 2014)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "For the experiments, we use the Reddit TIFU dataset and the replies extracted via praw as described in Section 4.2. From the dataset, we extract replies that contain quotes, which start with the symbol \">\". In total, 1,969 posts have replies that include quotes. We label the sentences of the posts that are quoted by the replies and verify how accurately our model can extract the quoted sentences. How well does our model extract quotes? To assess the ability of quote extraction, we regard the extraction of quotes as an information retrieval task and evaluate it with the Mean Reciprocal Rank (MRR). We compute the MRR as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "MRR = \\begin{cases} \\frac{1}{R(q)} & (R(q) \\le 4) \\\\ 0 & (R(q) > 4) \\end{cases}", "eq_num": "(12)" } ], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "The function R denotes the rank induced by the saliency scores a model computes; our model does not compute such scores but extracts sentences sequentially, so the extraction order is regarded as the rank here. If a model extracts quotes as salient sentences, the quoted sentences are ranked nearer the top. Therefore, the MRR in our study indicates the capability of a model to extract quotes. As explained in Section 4.3, we trained our model to extract up to four sentences. Thus we set the threshold at four; if R(q) is larger than 4, we set the MRR to 0. 
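The truncated reciprocal rank of Equation (12) can be sketched as follows. The helper names and the representation of a sample as a pair (model's extraction order, index of the quoted sentence) are our own illustrative assumptions:

```python
def reciprocal_rank(extraction_order, quoted_idx, threshold=4):
    """Score 1/rank if the quoted sentence appears among the first
    `threshold` extracted sentences, else 0 (cf. Equation 12).
    `extraction_order` lists sentence indices in extraction order."""
    if quoted_idx not in extraction_order[:threshold]:
        return 0.0
    rank = extraction_order.index(quoted_idx) + 1  # 1-based rank
    return 1.0 / rank

def mean_reciprocal_rank(samples, threshold=4):
    """samples: iterable of (extraction_order, quoted_sentence_index)."""
    samples = list(samples)
    return sum(reciprocal_rank(o, q, threshold) for o, q in samples) / len(samples)
```

For example, if the quoted sentence is extracted second, the sample contributes 1/2; if it is not among the first four extracted sentences, it contributes 0.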
For each sample, we compute the reciprocal rank and report the mean value as the result. Table 5 shows the results. IQE is more likely to extract quotes than TextRank, LexRank, and Random.", "cite_spans": [], "ref_spans": [ { "start": 580, "end": 587, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "Does extracting quotes lead to good summarization? Next, we validate whether the ROUGE scores become better when our model succeeds in extracting quotes. We compute ROUGE scores for the cases where our model succeeds or fails in quote extraction (that is, where the MRR equals 1 or does not). IQEquote denotes the data where the extracted sentence coincides with a quote, and IQEnonquote the data where it does not. Table 6 shows that the ROUGE scores are higher when the extracted sentence coincides with a quote. The results of the two analyses support the claims that our model is more likely to extract quotes and that this ability leads to better summarization.", "cite_spans": [], "ref_spans": [ { "start": 405, "end": 413, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "The Performance of Summarization and Quote Extraction", "sec_num": "5.1" }, { "text": "Effect of replacing named entities As explained in Section 4.3, the models shown in Tables 2, 3 and 4 all use the Stanford NER. To validate the effect of NER, we also experiment without replacing named entities. On the Reddit TIFU dataset, NER did not affect the accuracy. Reddit is an anonymized social media platform, and posts are less likely to refer to people's names. 
Thus, named entities provide few hints for predicting the reply relation.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 99, "text": "Tables 2, 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ablation Tests", "sec_num": "5.2" }, { "text": "As explained in Section 4.3, we pretrained the Predictor in the first few epochs so that the model can learn the extraction and the prediction separately. Table 7 shows the effect of pretraining. Without pretraining, the accuracy decreased. This shows the importance of training each component separately.", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 166, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Effect of pretraining Predictor", "sec_num": null }, { "text": "As explained in the Introduction, most conventional unsupervised summarization methods are based on the assumption that important topics appear frequently in a document. TextRank is a typical example: it is a centrality-based method that extracts the sentences with the highest PageRank as the summary. A high PageRank indicates that a sentence has high similarity with many other sentences, meaning that many sentences refer to the same topic. We suspected that important topics are not always referred to frequently, and suggested another criterion: the frequency with which a sentence is referred to in replies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Difference from Conventional Methods", "sec_num": "5.3" }, { "text": "By comparing with TextRank, we verify that our method can capture salient sentences that the centrality-based method fails to capture. Figure 3 shows the correlation between the maximum PageRank in each post of ECS/EPS and the ROUGE-1-F scores of IQE and TextRank. As shown in the Figure, the ROUGE-1-F scores of our model are higher than those of TextRank when the maximum PageRank in the sentence-similarity graph is low. 
This supports our hypothesis that our model can capture salient sentences even when the important topic is referred to only a few times. Table 8 shows a representative example of the summaries extracted by IQE and TextRank. The sample is from the EPS dataset. The summary includes descriptions regarding a promotion and the fact that the sender is having a baby. Source Text: Just got your email address from Rachel. Congrats on your promotion. I'm sure it's going to be alot different for you but it sounds like a great deal. My hubby and I moved out to Katy a few months ago. I love it there -my parents live about 10 minutes away. New news from me -I'm having a baby -due in June. I can't even believe it myself. The thought of me being a mother is downright scary but I figure since I'm almost 30, I probably need to start growing up. I'm really excited though. Rachel is coming to visit me in a couple of weeks. You planning on coming in for any of the rodeo stuff? You'll never guess who I got in touch with about a month ago. It was the weirdest thing -heather evans. I hadn't talked to her in about 10 years. Seems like she's doing well but I can never really tell with her. Anyway, I'll let you go. Gotta get back to work. Looking forward to hearing back from ya.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 3", "ref_id": "FIGREF0" }, { "start": 266, "end": 273, "text": "Figure,", "ref_id": null }, { "start": 544, "end": 551, "text": "Table 8", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Difference from Conventional Methods", "sec_num": "5.3" }, { "text": "The sender wants to congratulate the recipient for his/her new promotion, as well as updating him/her about her life. The sender just moved out to Katy a few months ago. She is having a baby due in June. She is scared of being a mother but also pretty excited about it. 
Rachel is coming to visit her in a couple of weeks and she is asking if he/she will join for any of the rodeo stuff. She ran into heather evans, whom she hadn't talked to in 10 years. However, those words appear only once in the source text; thus TextRank fails to capture the salient sentences. Our model, by contrast, can capture them because they are topics that the replies often refer to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary (Gold)", "sec_num": null }, { "text": "This paper proposes Implicit Quote Extractor, a model that extracts implicit quotes as summaries. We evaluated our model on two mail datasets, ECS and EPS, and one social media dataset, TIFU, using ROUGE as an evaluation metric, and validated that our model is useful for summarization. We hypothesized that our model is more likely to extract quotes and that this ability improves its performance. We verified these hypotheses with the Reddit TIFU dataset, but not with the email datasets, because few emails included annotated summaries, and those emails did not have replies with quotes. For future work, we will examine whether our hypotheses are valid for emails and other datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As explained in Section 3, the Predictor uses Decomposable Attention for prediction. Decomposable Attention computes a two-dimensional attention matrix from two sets of vectors and thus captures detailed information useful for prediction. 
The computation uses the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Decomposable Attention", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_{tj} = (x^{ext}_t)^T h^r_j \\quad (13) \\qquad \\beta_t = \\sum_{j=1}^{M} \\frac{\\exp(E_{tj})}{\\sum_{k=1}^{M} \\exp(E_{tk})} h^r_j \\quad (14) \\qquad \\alpha_j = \\sum_{t=1}^{L} \\frac{\\exp(E_{tj})}{\\sum_{k=1}^{L} \\exp(E_{kj})} x^{ext}_t", "eq_num": "(15)" } ], "section": "A.1 Decomposable Attention", "sec_num": null }, { "text": "The computations of x^{ext}_t and h^r_j are explained in Section 3. First, we compute a co-attention matrix E as in Equation (13). The weights of the co-attention matrix are normalized row-wise and column-wise in Equations (14) and (15). \u03b2_t is a linear sum of the reply features h^r_j aligned to x^{ext}_t, and vice versa for \u03b1_j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Decomposable Attention", "sec_num": null }, { "text": "v_{1,t} = G([x^{ext}_t; \\beta_t]), \\quad v_{2,j} = G([h^r_j; \\alpha_j]) \\quad (16) \\qquad v_1 = \\sum_{t=1}^{L} v_{1,t}, \\quad v_2 = \\sum_{j=1}^{M} v_{2,j} \\quad (17) \\qquad y = \\mathrm{sigmoid}(H([v_1; v_2])) \\quad (18)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Decomposable Attention", "sec_num": null }, { "text": "Next, we separately compare the aligned phrases \u03b2_t and x^{ext}_t, and \u03b1_j and h^r_j, using a function G. G denotes a feed-forward neural network, and [;] denotes concatenation. 
Finally, we concatenate v 1 and v 2 and obtain binary-classification result y through a linear layer H and the sigmoid function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Decomposable Attention", "sec_num": null }, { "text": "https://catalog.ldc.upenn.edu/LDC2015T03", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://praw.readthedocs.io/ 3 https://www.nltk.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/software/CRF-NER.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised opinion summarization with noising and denoising", "authors": [ { "first": "Reinald", "middle": [], "last": "Kim Amplayo", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1934--1945", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.175" ] }, "num": null, "urls": [], "raw_text": "Reinald Kim Amplayo and Mirella Lapata. 2020. Un- supervised opinion summarization with noising and denoising. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 1934-1945, Online. 
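The full computation in Equations (13)-(18) can be sketched at the shape level in NumPy. This is an illustrative sketch, not the trained model: G and H are passed in as stand-ins for the trained feed-forward comparison and output layers.

```python
import numpy as np

def softmax(z, axis):
    # numerically stable softmax along the given axis
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(x_ext, h_r, G, H):
    """x_ext: (L, d) extracted-sentence features; h_r: (M, d) reply features.
    G and H stand in for the trained feed-forward layers of the Predictor."""
    E = x_ext @ h_r.T                          # (L, M) co-attention matrix, Eq. (13)
    beta = softmax(E, axis=1) @ h_r            # (L, d) reply phrases aligned to x_ext, Eq. (14)
    alpha = softmax(E, axis=0).T @ x_ext       # (M, d) post phrases aligned to h_r, Eq. (15)
    v1 = G(np.concatenate([x_ext, beta], axis=1)).sum(axis=0)  # compare and sum, Eqs. (16)-(17)
    v2 = G(np.concatenate([h_r, alpha], axis=1)).sum(axis=0)
    return 1.0 / (1.0 + np.exp(-H(np.concatenate([v1, v2]))))  # sigmoid output, Eq. (18)
```

Row-wise normalization of E aligns each extracted sentence with the reply phrases, and column-wise normalization does the reverse, so the two summary vectors v1 and v2 compare the post and the reply in both directions.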
Association for Computa- tional Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SEQ\u02c63: Differentiable sequence-to-sequence-to-sequence autoencoder for unsupervised abstractive sentence compression", "authors": [ { "first": "Christos", "middle": [], "last": "Baziotis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "673--681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, and Alexandros Potamianos. 2019. SEQ\u02c63: Differentiable sequence-to-sequence-to-sequence autoencoder for unsupervised abstractive sentence compression. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 673-681. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Summarizing online forum discussions -can dialog acts of individual messages help?", "authors": [ { "first": "Sumit", "middle": [], "last": "Bhatia", "suffix": "" }, { "first": "Prakhar", "middle": [], "last": "Biyani", "suffix": "" }, { "first": "Prasenjit", "middle": [], "last": "Mitra", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2127--2131", "other_ids": { "DOI": [ "10.3115/v1/D14-1226" ] }, "num": null, "urls": [], "raw_text": "Sumit Bhatia, Prakhar Biyani, and Prasenjit Mitra. 2014. Summarizing online forum discussions -can dialog acts of individual messages help? In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2127-2131. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Summarizing email conversations with clue words", "authors": [ { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Raymond", "middle": [ "T" ], "last": "Ng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 16th International Conference on World Wide Web, WWW '07", "volume": "", "issue": "", "pages": "91--100", "other_ids": { "DOI": [ "10.1145/1242572.1242586" ] }, "num": null, "urls": [], "raw_text": "Giuseppe Carenini, Raymond T. Ng, and Xiaodong Zhou. 2007. Summarizing email conversations with clue words. In Proceedings of the 16th International Conference on World Wide Web, WWW '07, pages 91-100. ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Meansum: A neural model for unsupervised multi-document abstractive summarization", "authors": [ { "first": "Eric", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019", "volume": "", "issue": "", "pages": "1223--1232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Chu and Peter J. Liu. 2019. Meansum: A neu- ral model for unsupervised multi-document abstrac- tive summarization. In Proceedings of the 36th In- ternational Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 1223-1232.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "G\u00fcnes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "J. Artif. Int. 
Res", "volume": "22", "issue": "1", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457-479.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised sentence compression using denoising autoencoders", "authors": [ { "first": "Thibault", "middle": [], "last": "Fevry", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "413--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thibault Fevry and Jason Phang. 2018. Unsuper- vised sentence compression using denoising auto- encoders. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 413-422. Association for Computational Linguis- tics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multi-sentence compression: Finding shortest paths in word graphs", "authors": [ { "first": "Katja", "middle": [], "last": "Filippova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "322--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceed- ings of the 23rd International Conference on Compu- tational Linguistics (Coling 2010), pages 322-330, Beijing, China. 
Coling 2010 Organizing Committee.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Rouge 2.0: Updated and improved measures for evaluation of summarization tasks", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kavita Ganesan. 2015. Rouge 2.0: Updated and im- proved measures for evaluation of summarization tasks.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Revisiting the centroid-based method: A strong baseline for multi-document summarization", "authors": [ { "first": "Ghalandari", "middle": [], "last": "Demian Gholipour", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Workshop on New Frontiers in Summarization", "volume": "", "issue": "", "pages": "85--90", "other_ids": { "DOI": [ "10.18653/v1/W17-4511" ] }, "num": null, "urls": [], "raw_text": "Demian Gholipour Ghalandari. 2017. Revisiting the centroid-based method: A strong baseline for multi-document summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 85-90. Association for Computational Lin- guistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Exploring content models for multi-document summarization", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "362--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Explor- ing content models for multi-document summariza- tion. 
In Proceedings of Human Language Technolo- gies: The 2009 Annual Conference of the North American Chapter of the Association for Compu- tational Linguistics, pages 362-370, Boulder, Col- orado. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A joint many-task model: Growing a neural network for multiple NLP tasks", "authors": [ { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1923--1933", "other_ids": { "DOI": [ "10.18653/v1/D17-1206" ] }, "num": null, "urls": [], "raw_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsu- ruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923-1933, Copenhagen, Denmark. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Document summarization based on data reconstruction", "authors": [ { "first": "Zhanying", "middle": [], "last": "He", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Can", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lijun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "He", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12", "volume": "", "issue": "", "pages": "620--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhanying He, Chun Chen, Jiajun Bu, Can Wang, Li- jun Zhang, Deng Cai, and Xiaofei He. 2012. Doc- ument summarization based on data reconstruction. In Proceedings of the Twenty-Sixth AAAI Confer- ence on Artificial Intelligence, AAAI'12, pages 620- 626. AAAI Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised neural single-document summarization of reviews via learning latent discourse structure and its ranking", "authors": [ { "first": "Masaru", "middle": [], "last": "Isonuma", "suffix": "" }, { "first": "Junichiro", "middle": [], "last": "Mori", "suffix": "" }, { "first": "Ichiro", "middle": [], "last": "Sakata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2142--2152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masaru Isonuma, Junichiro Mori, and Ichiro Sakata. 2019. Unsupervised neural single-document sum- marization of reviews via learning latent discourse structure and its ranking. 
In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 2142-2152. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Categorical reparameterization with gumbel-softmax", "authors": [ { "first": "Eric", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Shixiang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categor- ical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representa- tions, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Extractive summarization using continuous vector space models", "authors": [ { "first": "Mikael", "middle": [], "last": "K\u00e5geb\u00e4ck", "suffix": "" }, { "first": "Olof", "middle": [], "last": "Mogren", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Tahmasebi", "suffix": "" }, { "first": "Devdatt", "middle": [], "last": "Dubhashi", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)", "volume": "", "issue": "", "pages": "31--39", "other_ids": { "DOI": [ "10.3115/v1/W14-1504" ] }, "num": null, "urls": [], "raw_text": "Mikael K\u00e5geb\u00e4ck, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive summarization using continuous vector space models. In Proceed- ings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 31-39. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Abstractive summarization of Reddit posts with multi-level memory networks", "authors": [ { "first": "Byeongchang", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Hyunwoo", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2519--2531", "other_ids": { "DOI": [ "10.18653/v1/N19-1260" ] }, "num": null, "urls": [], "raw_text": "Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of Reddit posts with multi-level memory networks. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2519-2531. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The summary loop: Learning to write abstractive summaries without examples", "authors": [ { "first": "Philippe", "middle": [], "last": "Laban", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Hsi", "suffix": "" }, { "first": "John", "middle": [], "last": "Canny", "suffix": "" }, { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5135--5150", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.460" ] }, "num": null, "urls": [], "raw_text": "Philippe Laban, Andrew Hsi, John Canny, and Marti A. Hearst. 2020. The summary loop: Learning to write abstractive summaries without examples. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5135- 5150, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multi-document summarization based on two-level sparse representation model", "authors": [ { "first": "He", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hongliang", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zhi-Hong", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15", "volume": "", "issue": "", "pages": "196--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "He Liu, Hongliang Yu, and Zhi-Hong Deng. 2015. Multi-document summarization based on two-level sparse representation model. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelli- gence, AAAI'15, pages 196-202. AAAI Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Building a dataset for summarization and keyword extraction from emails", "authors": [ { "first": "Vanessa", "middle": [], "last": "Loza", "suffix": "" }, { "first": "Shibamouli", "middle": [], "last": "Lahiri", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Po-Hsiang", "middle": [], "last": "Lai", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)", "volume": "", "issue": "", "pages": "2441--2446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanessa Loza, Shibamouli Lahiri, Rada Mihalcea, and Po-Hsiang Lai. 2014. Building a dataset for summa- rization and keyword extraction from emails. In Pro- ceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 2441-2446. 
European Languages Resources Association (ELRA).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An unsupervised multi-document summarization framework based on neural document model", "authors": [ { "first": "Shulei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhi-Hong", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Yunlun", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1514--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An unsupervised multi-document summarization framework based on neural document model. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 1514-1523. The COLING 2016 Organizing Committee.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Abstractive summarization of spoken and written conversations based on phrasal queries", "authors": [ { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Raymond", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1220--1230", "other_ids": { "DOI": [ "10.3115/v1/P14-1115" ] }, "num": null, "urls": [], "raw_text": "Yashar Mehdad, Giuseppe Carenini, and Raymond T. Ng. 2014. Abstractive summarization of spoken and written conversations based on phrasal queries. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1220-1230. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "TextRank: Bringing order into text", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Extractive summarization and dialogue act modeling on email threads: An integrated probabilistic approach", "authors": [ { "first": "Tatsuro", "middle": [], "last": "Oya", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)", "volume": "", "issue": "", "pages": "133--140", "other_ids": { "DOI": [ "10.3115/v1/W14-4318" ] }, "num": null, "urls": [], "raw_text": "Tatsuro Oya and Giuseppe Carenini. 2014. Extractive summarization and dialogue act modeling on email threads: An integrated probabilistic approach. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 133-140.
Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A decomposable attention model for natural language inference", "authors": [ { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2249--2255", "other_ids": { "DOI": [ "10.18653/v1/D16-1244" ] }, "num": null, "urls": [], "raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization", "authors": [ { "first": "Guokan", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Wensi", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Zekun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Tixier", "suffix": "" },
{ "first": "Polykarpos", "middle": [], "last": "Meladianos", "suffix": "" }, { "first": "Michalis", "middle": [], "last": "Vazirgiannis", "suffix": "" }, { "first": "Jean-Pierre", "middle": [], "last": "Lorr\u00e9", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "664--674", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorr\u00e9. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 664-674. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Optimizing sentence modeling and selection for document summarization", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Yulong", "middle": [], "last": "Pei", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15", "volume": "", "issue": "", "pages": "1383--1389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin and Yulong Pei. 2015. Optimizing sentence modeling and selection for document summarization. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 1383-1389. AAAI Press.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Sentence centrality revisited for unsupervised summarization", "authors": [ { "first": "Hao", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6236--6247", "other_ids": { "DOI": [ "10.18653/v1/P19-1628" ] }, "num": null, "urls": [], "raw_text": "Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6236-6247, Florence, Italy.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Correlation between ROUGE-1-F score and maximum PageRank of each post on ECS and EPS datasets. X-axis shows rounded maximum PageRank, and Y-axis shows ROUGE-1-F; error bars represent the standard error.", "type_str": "figure", "uris": null }, "TABREF2": { "num": null, "text": "Overview of the evaluation datasets.", "html": null, "type_str": "table", "content": "" }, "TABREF4": { "num": null, "text": "Results on ECS data. The best results are bolded and the second best results are underlined.", "html": null, "type_str": "table", "content": "
ROUGE-1-FROUGE-2-FROUGE-L-F
Model# of words# of words# of words
204060204060204060
Lead0.128 0.204 0.230 0.045 0.084 0.099 0.150 0.208 0.221
TextRank0.172 0.272 0.317 0.080 0.129 0.151 0.185 0.260 0.290
LexRank0.161 0.254 0.299 0.068 0.113 0.136 0.173 0.245 0.275
Random0.144 0.213 0.238 0.058 0.086 0.099 0.158 0.213 0.232
KLSum0.191 0.287 0.321 0.093 0.141 0.153 0.184 0.254 0.277
PacSum0.179 0.275 0.330 0.082 0.127 0.151 0.171 0.250 0.287
IQETextRank0.158 0.252 0.291 0.069 0.115 0.136 0.169 0.242 0.268
IQE0.189 0.292 0.342 0.091 0.143 0.168 0.189 0.268 0.302
IQE + reranking0.185 0.290 0.340 0.087 0.138 0.164 0.189 0.264 0.299
" }, "TABREF5": { "num": null, "text": "Results on EPS data. The best results are bolded and the second best results are underlined.", "html": null, "type_str": "table", "content": "" }, "TABREF7": { "num": null, "text": "Results on TIFU tldr data. The best results are bolded and the second best results are underlined.", "html": null, "type_str": "table", "content": "
ModelMRR
LexRank0.094
TextRank0.109
Random0.081
IQE0.135
" }, "TABREF8": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
ModelROUGE-1-F ROUGE-2-F ROUGE-L-F
IQEquote0.1840.0300.126
IQEnonquote0.1680.0200.118
: Ability of extracting quotes.
" }, "TABREF9": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
lists the results.
" }, "TABREF10": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
: Results of ablation tests
" }, "TABREF11": { "num": null, "text": "Example of sentences extracted by Implicit Quote Extractor (IQE) (bold) and TextRank (italic).", "html": null, "type_str": "table", "content": "" } } } }