{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:36.961971Z"
},
"title": "Semantic Similarity Based Evaluation for Abstractive News Summarization",
"authors": [
{
"first": "Figen",
"middle": [
"Beken"
],
"last": "Fikri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanc\u0131 University",
"location": {
"settlement": "Istanbul",
"country": "Turkey"
}
},
"email": "fbekenfikri@sabanciuniv.edu"
},
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University -Qatar",
"location": {
"settlement": "Doha",
"country": "Qatar"
}
},
"email": ""
},
{
"first": "Berrin",
"middle": [],
"last": "Yan\u0131koglu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanc\u0131 University",
"location": {
"settlement": "Istanbul",
"country": "Turkey"
}
},
"email": "berrin@sabanciuniv.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "ROUGE is a widely used evaluation metric in text summarization. However, it is not suitable for the evaluation of abstractive summarization systems as it relies on lexical overlap between the gold standard and the generated summaries. This limitation becomes more apparent for agglutinative languages with very large vocabularies and high type/token ratios. In this paper, we present semantic similarity models for Turkish and apply them as evaluation metrics for an abstractive summarization task. To achieve this, we translated the English STSb dataset into Turkish and presented the first semantic textual similarity dataset for Turkish. We showed that our best similarity models have better alignment with average human judgments compared to ROUGE in both Pearson and Spearman correlations.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "ROUGE is a widely used evaluation metric in text summarization. However, it is not suitable for the evaluation of abstractive summarization systems as it relies on lexical overlap between the gold standard and the generated summaries. This limitation becomes more apparent for agglutinative languages with very large vocabularies and high type/token ratios. In this paper, we present semantic similarity models for Turkish and apply them as evaluation metrics for an abstractive summarization task. To achieve this, we translated the English STSb dataset into Turkish and presented the first semantic textual similarity dataset for Turkish. We showed that our best similarity models have better alignment with average human judgments compared to ROUGE in both Pearson and Spearman correlations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic document summarization aims to produce a summary that conveys the salient information in the given text(s). Automatic summarizers provide reduction in the size of the text, as well as, combine and cluster different sources of information, while preserving the informational content. There are two approaches to summarization: extractive and abstractive. Extractive summarization yields a summary by extracting important phrases or sentences from the document. In contrast, abstractive summarization provides a much more human-like summary by capturing the internal semantic meaning and generating new sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ROUGE is a widely used evaluation metric in text summarization. It compares the system summary with the human generated summary or summaries, by considering the overlapping units such as n-gram, word sequences and word pairs (Lin, 2004) . However, in abstractive summarization systems, the generated summary does not necessarily contain the same words in the gold standard summary. On the contrary, an abstractive summarization model is expected to generate new words that may not even appear in the source. For agglutinative languages, the ineffectiveness of ROUGE metric becomes more apparent. For instance, both of the following sentences has the meaning \"I want to call the embassy\":",
"cite_spans": [
{
"start": 225,
"end": 236,
"text": "(Lin, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "B\u00fcy\u00fckel\u00e7iligi aramak istiyorum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "B\u00fcy\u00fckel\u00e7ilige telefon etmek istiyorum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While, \"aramak\" is a verb that takes an object in accusative case, \"telefon etmek\" is a compound verb in Turkish and the equivalent of the accusative object in the first sentence is realized with a noun in dative case (as highlighted with underlines). Although, these sentences are semantically equivalent, ROUGE-1, ROUGE-2 and ROUGE-3 scores of these sentences are 0.25, 0, and 0.25 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
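{
"text": "The following minimal sketch (not from the paper) shows how a ROUGE-1-style unigram F1 can be computed for the two example sentences above, assuming simple whitespace tokenization; it illustrates how a purely lexical overlap measure penalizes a semantically equivalent paraphrase.\n\nfrom collections import Counter\n\ndef rouge_n_f1(candidate, reference, n=1):\n    # Illustrative n-gram overlap F1 in the spirit of ROUGE-N (not the official toolkit).\n    def ngrams(tokens):\n        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))\n    cand, ref = ngrams(candidate.lower().split()), ngrams(reference.lower().split())\n    overlap = sum((cand & ref).values())\n    if overlap == 0:\n        return 0.0\n    precision = overlap / sum(cand.values())\n    recall = overlap / sum(ref.values())\n    return 2 * precision * recall / (precision + recall)\n\nreference = 'B\u00fcy\u00fckel\u00e7ili\u011fi aramak istiyorum.'\ncandidate = 'B\u00fcy\u00fckel\u00e7ili\u011fe telefon etmek istiyorum.'\nprint(rouge_n_f1(candidate, reference, n=1))  # low score despite semantic equivalence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},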
{
"text": "In this paper, we present a semantic similarity model which can be applied to abstractive summarization as a semantic evaluation metric. To this end, we translated the English Semantic Textual Similarity benchmark (STSb) dataset (Cer et al., 2017) into Turkish and presented the first semantic textual similarity dataset for Turkish as well. STSb dataset is a selection of data from English STS shared tasks between 2012 and 2017. These datasets have been widely used for sentence level similarity and semantic representations research (Cer et al., 2017) .",
"cite_spans": [
{
"start": 229,
"end": 247,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 536,
"end": 554,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also leveraged the NLI-TR dataset that has been presented recently for Turkish natural language inference task (Budur et al., 2020) . The NLI-TR dataset combines the translated Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiGenre Natural Language Inference (MultiNLI) (Williams et al., 2018) datasets.",
"cite_spans": [
{
"start": 114,
"end": 134,
"text": "(Budur et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 223,
"end": 244,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 298,
"end": 321,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our paper is structured in the following way: In section 2, we explain recent studies and evaluation metrics. In section 3, we explain natural language inference and semantic textual similarity. We present our STSb Turkish dataset and translation quality. In section 4, we present our experiments for semantic textual similarity. In section 5, we present the experiments for summarization. We applied our best performing four semantic similarity models as evaluation metrics to the summarization results. In section 6, we present our results both qualitatively and quantitatively by comparing the semantic similarity and ROUGE scores with human judgments in Pearson and Spearman correlations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most widely used evaluation metric for summarization is ROUGE which compares the system summary with the human generated summary or summaries by considering the overlapping units such as n-gram, word sequences and word pairs (Lin, 2004) . Recently, there has been a range of studies focusing on the evaluation of factual correctness in the generated summaries. Falke et al. (2019) has studied whether textual entailment can be used to detect factual errors in generated summaries based on the idea that the source document should entail the information in a summary. The authors investigated whether factual errors can be reduced by reranking the alternative summaries using models trained on NLI datasets. They found that out-of-the-box NLI models do not perform well on the task of factual correctness. Kryscinski et al. (2020) proposed a model-based approach on the document-sentence level for verifying factual consistency in generated summaries. Zhao et al. (2020) addressed the problem of unsupported information in the generated summaries known as factual hallucination. Durmus et al. (2020) and suggested question answering based methods to evaluate the faithfullness of the generated summaries.",
"cite_spans": [
{
"start": 229,
"end": 240,
"text": "(Lin, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 365,
"end": 384,
"text": "Falke et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 809,
"end": 833,
"text": "Kryscinski et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 955,
"end": 973,
"text": "Zhao et al. (2020)",
"ref_id": "BIBREF38"
},
{
"start": 1082,
"end": 1102,
"text": "Durmus et al. (2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In addition to the studies focusing on summarization evaluation, there are some recently proposed metrics to evaluate generated text with the gold standard. Zhang et al. (2019) proposed BERTScore that uses BERT (Devlin et al., 2019) to compute a similarity score between the generated and reference text. Several recent works proposed new evaluaiton metrics for machine translation (BLEURT (Sellam et al., 2020) , COMET (Rei et al., 2020) , YiSi (Lo, 2019) , Prism (Thompson and Post, 2020) ).",
"cite_spans": [
{
"start": 157,
"end": 176,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF37"
},
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 411,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 420,
"end": 438,
"text": "(Rei et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 446,
"end": 456,
"text": "(Lo, 2019)",
"ref_id": "BIBREF20"
},
{
"start": 465,
"end": 490,
"text": "(Thompson and Post, 2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Natural language inference is the study of determining whether there is an entailment, a contradiction or a neutral relationship between a hypothesis and a given premise. There are two major corpora in literature for natural language inference in English. These are Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiGenre Natural Language Inference (MultiNLI) (Williams et al., 2018) datasets. The SNLI corpus is about 570k sentence pairs while the MultiNLI corpus is about 433k sentence pairs. The MultiNLI corpus is in the same format as SNLI, but with more varied text genres. Recently, these corpora have been translated into Turkish (Budur et al., 2020) . In this study, we used the NLI-TR dataset. 1",
"cite_spans": [
{
"start": 309,
"end": 330,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 384,
"end": 407,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 662,
"end": 682,
"text": "(Budur et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "3.1"
},
{
"text": "Semantic textual similarity aims to determine how similar two pieces of texts are. There are many application areas such as machine translation, summarization, text generation, question answering, dialogue and speech systems. It has become a remarkable area with the competitions organized by SemEval since 2012.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},
{
"text": "Semantic textual similarity studies are very common in English, and are based on datasets that are annotated and given similarity scores by human annotators. However, annotation is costly and time consuming. Recently, with the increase of success in machine translation and the development of multi-language models, it has become possible to use datasets by translating them from one language to another, e.g., Isbister and Sahlgren (2020) , Budur et al. (2020) .",
"cite_spans": [
{
"start": 411,
"end": 439,
"text": "Isbister and Sahlgren (2020)",
"ref_id": "BIBREF17"
},
{
"start": 442,
"end": 461,
"text": "Budur et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},
{
"text": "In this study, we use the English STS Benchmark (STSb) dataset (Cer et al., 2017 ) that we translated into Turkish using the Google Cloud Translation API. 2, 3 The STSb dataset consists of all the English datasets used in SemEval STS studies between 2012 and 2017. It consists of 8628 sentence pairs (5749 train, 1500 dev, 1379 test), (see Table 3 Sentence 1 Sentence 2 Similarity Score Adam ata biniyor.",
"cite_spans": [
{
"start": 63,
"end": 80,
"text": "(Cer et al., 2017",
"ref_id": "BIBREF9"
},
{
"start": 155,
"end": 157,
"text": "2,",
"ref_id": null
},
{
"start": 158,
"end": 159,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},
{
"text": "Table 1 examples (Sentence 1 / Sentence 2 / Similarity Score, with English glosses in parentheses): Adam ata biniyor. (The man is riding a horse.) / Bir adam ata biniyor. (A man is riding on a horse.) / 5.0; Bir k\u0131z u\u00e7urtma u\u00e7uruyor. (A girl is flying a kite.) / Ko\u015fan bir k\u0131z u\u00e7urtma u\u00e7uruyor. (A girl running is flying a kite.) / 4.0; Bir adam gitar \u00e7al\u0131yor. (A man is playing a guitar.) / Bir adam \u015fark\u0131 s\u00f6yl\u00fcyor ve gitar \u00e7al\u0131yor. (A man is singing and playing a guitar.) / 3.6; Bir adam gitar \u00e7al\u0131yor. (A man is playing a guitar.) / Bir k\u0131z gitar \u00e7al\u0131yor. (A girl is playing a guitar.) / 2.8; Bir bebek kaplan bir topla oynuyor. (A baby tiger is playing with a ball.) / Bir bebek bir oyuncak bebekle oynuyor. (A baby is playing with a doll.) / 1.6; Bir kad\u0131n dans ediyor. (A woman is dancing.) / Bir adam konu\u015fuyor. (A man is talking.) / 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},
{
"text": "Bir adam konu\u015fuyor. 0.0 (A woman is dancing.) (A man is talking.) for details). In this dataset, each sentence pair was annotated by crowdsourcing and assigned a semantic similarity score. Five scores were collected for each pair and gold scores were generated by taking the median value of these scores (Agirre et al., 2016). Scores range from 0 (no semantic similarity) to 5 (semantically equivalent) on a continuous scale. Some examples from the STS dataset and their translations are given in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},
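{
"text": "As a small illustration (not from the paper), the gold label for a pair is simply the median of the five crowd-sourced annotations; the annotation values below are invented.\n\nfrom statistics import median\n\n# Hypothetical crowd annotations for a single sentence pair (values invented for illustration).\nannotations = [4, 5, 4, 3, 4]\ngold_score = median(annotations)\nprint(gold_score)  # 4, on the scale from 0 (no similarity) to 5 (semantically equivalent)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},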
{
"text": "Here, we apply various state-of-the-art models on the translated dataset, and the best performing four models are used for semantic similarity based evaluation metric for the task of abstractive summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity",
"sec_num": "3.2"
},
{
"text": "It is possible to encounter some translation errors in the translated texts. The most striking mistakes are related to expressions that are not used in Turkish. For instance, the sentence in S1 is translated as T1; however, a more appropriate translation would be C1, as \"sitting\" is translated differently for inanimate subjects. In this paper, we assumed that such translation errors will not cause a major problem in our similarity models. In order to verify our assumption, we tested the quality of translations by selecting 50 sentence pairs (100 sentences) randomly, considering the percentage of the categories in the dataset. So, 6, 19 and 25 pairs chosen from forum, caption and news categories respectively. These sentences were translated by three native Turkish speakers who are fluent in English. We evaluated quality of the system translations with the three references using BLEU (Papineni et al., 2002) score. We used the SacreBLEU 4 tool (Post, 2018) version 1.5.1 and found BLEU score as 60.21 which shows that our system translations can be considered as very high quality translations (Google) . Therefore, no changes have been made to the translations. ",
"cite_spans": [
{
"start": 895,
"end": 918,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF22"
},
{
"start": 1105,
"end": 1113,
"text": "(Google)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality",
"sec_num": "3.3"
},
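{
"text": "A sketch (ours, under stated assumptions) of how the BLEU-based quality check can be reproduced with SacreBLEU: the system translations are scored against the three human reference translations. The sentences below are placeholders adapted from Table 1; the actual check uses the 100 sampled sentences.\n\nimport sacrebleu  # the paper uses SacreBLEU version 1.5.1\n\n# Placeholder data: one list of system translations plus one list per human reference set.\nsystem = ['Bir adam ata biniyor.', 'Bir k\u0131z u\u00e7urtma u\u00e7uruyor.']\nreferences = [\n    ['Bir adam ata biniyor.', 'Bir k\u0131z u\u00e7urtma u\u00e7uruyor.'],  # reference translator 1\n    ['Adam ata biniyor.', 'Bir k\u0131z u\u00e7urtma u\u00e7uruyor.'],      # reference translator 2\n    ['Bir adam ata biniyor.', 'K\u0131z u\u00e7urtma u\u00e7uruyor.'],      # reference translator 3\n]\nbleu = sacrebleu.corpus_bleu(system, references)\nprint(bleu.score)  # the paper reports 60.21 on its 100-sentence sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality",
"sec_num": "3.3"
},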
{
"text": "In order to assess the semantic similarity between a pair of texts, there are two main model structures: 1) Sentence representation models that try to map a sentence to a fixed-sized real-value vectors called sentence embeddings. 2) Cross-encoders that directly compute the semantic similarity score of a sentence pair. In this paper, we experimented with state-of-theart sentence representation models that are applicable to Turkish (language-specific and multilingual models) and BERT cross-encoders. In sentence representation models, we obtained the semantic similarity scores using cosine similarity. All models were tested on the STSb-TR test dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments for Semantic Textual Similarity",
"sec_num": "4"
},
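{
"text": "A minimal sketch of the bi-encoder scoring described above, assuming the sentence-transformers package is available; the LaBSE checkpoint name is only an example, and any multilingual sentence-embedding model covering Turkish could be substituted.\n\nfrom sentence_transformers import SentenceTransformer, util\n\nmodel = SentenceTransformer('sentence-transformers/LaBSE')  # example multilingual encoder\n\ns1 = 'Bir adam gitar \u00e7al\u0131yor.'\ns2 = 'Bir adam \u015fark\u0131 s\u00f6yl\u00fcyor ve gitar \u00e7al\u0131yor.'\nemb1, emb2 = model.encode([s1, s2], convert_to_tensor=True)\nscore = util.cos_sim(emb1, emb2).item()  # cosine similarity of the two sentence embeddings\nprint(score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments for Semantic Textual Similarity",
"sec_num": "4"
},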
{
"text": "We experimented with LASER, LaBSE, MUSE, BERT, XLM-R and Sentence-BERT models as explained below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "LASER Language-Agnostic SEntence Representations (LASER) is a language model based on the BiLSTM encoder trained on parallel data targeting translation. The model has been trained in 93 languages, including Turkish. 6 In this study, Turkish sentence embeddings were computed using a pre-trained LASER model. LaBSE Language-agnostic BERT Sentence Embedding (LaBSE) is a BERT variant masked and 6 https://github.com/facebookresearch/LASER trained on multilingual data for translation language modeling. The model produces languageindependent sentence embeddings for 109 languages, including Turkish (Feng et al., 2020) . Similar to the LASER model, Turkish sentence embeddings were computed using a pre-trained LaBSE model. MUSE Multilingual Universal Sentence Encoder (MUSE) model is a sentence embedding model trained on multiple languages at the same time. The model creates a common semantic embedding area for a total of 16 languages, including Turkish . In this study, CNN 7 and Transformer 8 models that are shared publicly in TensorFlow Hub are used.",
"cite_spans": [
{
"start": 216,
"end": 217,
"text": "6",
"ref_id": null
},
{
"start": 597,
"end": 616,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "BERT Bidirectional Encoder Representations from Transformers (BERT) is designed to pretrain deep bi-directional representations from unlabeled text by conditioning together in both left and right context on all layers (Devlin et al., 2019) . In this study, BERTurk 9 and M-BERT 10 (Pires et al., 2019) models were used. Sentence embeddings were obtained by averaging the BERT embeddings. 11 In addition, the models were integrated into the Siamese network that we explained in section 4.1.",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 388,
"end": 390,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "XLM-R RoBERTa Transformer model 12 has been trained on a large multilingual data using a multilingual masked language modeling goal (Conneau et al., 2020) . In this study, we used the model to compute sentence embeddings similar to BERT models. We also integrated it into the Siamese network used in Sentence-BERT.",
"cite_spans": [
{
"start": 132,
"end": 154,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "Sentence-BERT Sentence-BERT (SBERT) (also called Bi-Encoder BERT) is a modification of pre-trained BERT network (or other transformer models) using Siamese and ternary network structures (Reimers and Gurevych, 2019) . The model derives close fixed-size sentence embedding in vector space for semantically similar sentences. The training loss function differs depending on the dataset the model was trained on. During the training on the NLI dataset, the classification objective function was used; whereas during the training on the STSb dataset, the regression objective function was used (Reimers and Gurevych, 2019) .",
"cite_spans": [
{
"start": 187,
"end": 215,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF28"
},
{
"start": 590,
"end": 618,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "The classification objective function concatenates the sentence embeddings by element-wise difference and multiplies by a trainable weight. The model optimizes the cross entropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "o = softmax(W t (u, v, |u \u2212 v|)), W t R 3n\u00d7k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "where n is the size of the sentence embedding, and k is the number of labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
{
"text": "In the regression objective function, the cosine similarity between two sentence embeddings, optimize the models for mean square error loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},
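{
"text": "A compact PyTorch sketch (ours, not the authors' code) of the two SBERT training objectives described above: the classification objective applies a softmax classifier to the concatenation (u, v, |u \u2212 v|), while the regression objective fits the cosine similarity of u and v to the gold score with a mean squared error loss.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nn, k = 768, 3                      # embedding size and number of NLI labels\nW_t = nn.Linear(3 * n, k)          # trainable weight W_t of shape (3n, k)\n\ndef classification_loss(u, v, labels):\n    # Classification objective: softmax over W_t applied to (u, v, |u - v|).\n    features = torch.cat([u, v, torch.abs(u - v)], dim=-1)\n    return F.cross_entropy(W_t(features), labels)\n\ndef regression_loss(u, v, gold_scores):\n    # Regression objective: MSE between cosine(u, v) and the gold similarity score.\n    return F.mse_loss(F.cosine_similarity(u, v, dim=-1), gold_scores)\n\nu, v = torch.randn(8, n), torch.randn(8, n)   # a batch of sentence-embedding pairs\nlabels = torch.randint(0, k, (8,))            # entailment / contradiction / neutral\ngold_scores = torch.rand(8)                   # gold STS scores rescaled to [0, 1]\nprint(classification_loss(u, v, labels).item(), regression_loss(u, v, gold_scores).item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Models",
"sec_num": "4.1"
},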
{
"text": "We adopted cross-encoder architecture as explained in Reimers and Gurevych (2019) . In the crossencoder, both sentences are passed to the network and a similarity score between 0 and 1 obtained; no sentence embeddings are produced. 13 We experimented with BERTurk, M-BERT, and XLM-R with training on NLI-TR and STSb-TR datasets.",
"cite_spans": [
{
"start": 54,
"end": 81,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF28"
},
{
"start": 232,
"end": 234,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Encoders",
"sec_num": "4.2"
},
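{
"text": "A sketch of cross-encoder scoring with the CrossEncoder class of sentence-transformers, assuming a Turkish BERT checkpoint (here BERTurk, as an example) wrapped with a single regression output; the wrapped model only produces meaningful similarity scores after fine-tuning on STSb-TR.\n\nfrom sentence_transformers.cross_encoder import CrossEncoder\n\n# Example base checkpoint; the paper fine-tunes BERTurk, M-BERT and XLM-R cross-encoders\n# on the NLI-TR and STSb-TR datasets.\nmodel = CrossEncoder('dbmdz/bert-base-turkish-cased', num_labels=1)\n\npairs = [('Bir adam gitar \u00e7al\u0131yor.', 'Bir k\u0131z gitar \u00e7al\u0131yor.')]\nscores = model.predict(pairs)  # one similarity score per sentence pair, in [0, 1] after fine-tuning\nprint(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Encoders",
"sec_num": "4.2"
},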
{
"text": "All models were individually trained on NLI-TR and STSb-TR training datasets. Also, the models trained on the NLI-TR dataset were fine-tuned on the STSb-TR dataset. All models were then tested on the STSb-TR test dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Semantic Textual Similarity",
"sec_num": "4.3"
},
{
"text": "We trained/fine-tuned the models on STSb-TR dataset with 4 epochs and 10 random seeds 14 as suggested by Reimers and Gurevych (2018; 2019) . Then, we reported the average test results of 5 successful models that perform best on the validation set. The models were evaluated by calculating the Spearman and Pearson correlations between the estimated similarity scores and the gold labels. Table 4 shows the results as \u03c1 x 100. According to the results, training the models first on the NLI-TR dataset increases the model performance. This is particularly noticeable for the XLM-R models. The BERTurk model also gives very good results when trained directly on the STSb-TR dataset. Here, we observe that the existing multilingual LASER, LaBSE, MUSE models without any training for semantic textual similarity, give very good results. Compared to these models, the performance of BERT models without training are quite low. The best results were obtained by training the BERTurk model on the NLI-TR dataset first, and then on the STSb-TR dataset.",
"cite_spans": [
{
"start": 105,
"end": 132,
"text": "Reimers and Gurevych (2018;",
"ref_id": "BIBREF27"
},
{
"start": 133,
"end": 138,
"text": "2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results for Semantic Textual Similarity",
"sec_num": "4.3"
},
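{
"text": "The evaluation itself reduces to correlating the predicted similarities with the gold labels; a minimal SciPy sketch with placeholder values:\n\nfrom scipy.stats import pearsonr, spearmanr\n\ngold = [5.0, 4.0, 3.6, 2.8, 1.6, 0.0]             # gold STSb-TR scores (example values from Table 1)\npredicted = [0.93, 0.81, 0.74, 0.55, 0.32, 0.05]  # model similarity scores (placeholders)\n\npearson, _ = pearsonr(gold, predicted)\nspearman, _ = spearmanr(gold, predicted)\nprint(round(pearson * 100, 2), round(spearman * 100, 2))  # reported as rho x 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Semantic Textual Similarity",
"sec_num": "4.3"
},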
{
"text": "To investigate the effectiveness of our semantic similarity models for summarization evaluation, we computed the correlations of ROUGE scores and our best performing four similarity models with human judgments for a state-of-the-art abstractive model. We reported semantic similarity scores for extractive baselines as well in order to observe their alignmnet with the ROUGE scores. Table 6 : Pearson and Spearman correlations of ROUGE, BERTScore and proposed evaluation metrics with human judgments.",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments for Summarization",
"sec_num": "5"
},
{
"text": "MLSUM is the first large-scale MultiLingual SUMmarization dataset which contains 1.5M+ article/summary pairs including Turkish (Scialom et al., 2020) . The authors compiled the dataset following the same methodology of CNN/DailyMail dataset. They considered news articles as the text input and their paired highlights/description as the summary. Turkish dataset was created from Internet Haber 15 by crawling archived articles between 2010 and 2019. All the articles shorter than 50 words or summaries shorter than 10 words were discarded. The data was split into train, validation and test sets, with respect to the publication dates. The data from 2010 to 2018 was used for training; data between January-April 2019 was used for validation; and data up to December 2019 was used for test (Scialom et al., 2020) . In this study, we obtained the Turkish dataset from HuggingFace collection. 16 The dataset consists of 249,277 train, 11,565 validation, and 12,775 test samples.",
"cite_spans": [
{
"start": 127,
"end": 149,
"text": "(Scialom et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 790,
"end": 812,
"text": "(Scialom et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 891,
"end": 893,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
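{
"text": "A sketch of loading the Turkish portion of MLSUM from the HuggingFace datasets hub; the configuration name 'tu' and the 'text'/'summary' field names follow the public dataset card.\n\nfrom datasets import load_dataset\n\nmlsum_tr = load_dataset('mlsum', 'tu')  # Turkish configuration of MLSUM\nprint({split: len(ds) for split, ds in mlsum_tr.items()})  # train / validation / test sizes\n\nsample = mlsum_tr['test'][0]\nprint(sample['text'][:200])  # news article used as the model input\nprint(sample['summary'])     # paired highlight/description used as the reference summary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},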
{
"text": "We experimented on MLSUM Turkish dataset with extractive baselines Lead-1 and Lead-3 and a stateof-the-art abstractive model mT5 described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.2"
},
{
"text": "Lead-1 We selected the first sentence of the source text as a summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.2"
},
{
"text": "We selected the first three sentences of the source text as a summary, based on the observation that the leading three sentences are a strong baseline for summarization (Nallapati et al., 2017; Sharma et al., 2019) .",
"cite_spans": [
{
"start": 169,
"end": 193,
"text": "(Nallapati et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 194,
"end": 214,
"text": "Sharma et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lead-3",
"sec_num": null
},
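{
"text": "The lead baselines can be expressed in a few lines; the sketch below uses a naive regex sentence splitter, and any proper Turkish sentence tokenizer could be substituted.\n\nimport re\n\ndef lead_n(article: str, n: int) -> str:\n    # Naive split on ., ! and ? followed by whitespace; a real Turkish sentence\n    # tokenizer would be more robust.\n    sentences = re.split(r'(?<=[.!?])\\\\s+', article.strip())\n    return ' '.join(sentences[:n])\n\nlead_1 = lambda article: lead_n(article, 1)\nlead_3 = lambda article: lead_n(article, 3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lead-3",
"sec_num": null
},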
{
"text": "mT5 Multilingual T5 (mT5) (Xue et al., 2020) is a variant of T5 model (Raffel et al., 2020) that was pre-trained for 101 languages including Turkish on a new Common Crawl-based dataset. For Turkish summarization, we used mT5 model fine-tuned on MLSUM dataset available on HuggingFace. 17 The model was trained with 10 epochs, 8 batch size and 10e-4 learning rate. The max news length was 784 and max summary length was determined as 64. 18",
"cite_spans": [
{
"start": 26,
"end": 44,
"text": "(Xue et al., 2020)",
"ref_id": null
},
{
"start": 70,
"end": 91,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 285,
"end": 287,
"text": "17",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lead-3",
"sec_num": null
},
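{
"text": "A sketch of generating a summary with the fine-tuned checkpoint referenced in the footnotes (ozcangundes/mt5-small-turkish-summarization). The input truncation length of 784 and the maximum summary length of 120 follow the paper; the beam size is an illustrative choice.\n\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n\ncheckpoint = 'ozcangundes/mt5-small-turkish-summarization'\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\nmodel = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)\n\narticle = '...'  # a Turkish news article from the MLSUM test split\ninputs = tokenizer(article, max_length=784, truncation=True, return_tensors='pt')\nsummary_ids = model.generate(**inputs, max_length=120, num_beams=4)\nprint(tokenizer.decode(summary_ids[0], skip_special_tokens=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lead-3",
"sec_num": null
},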
{
"text": "We evaluated the summarization models using semantic similarity-based evaluation, ROUGE scores, and human judgments. All the values were scaled to 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "5.3"
},
{
"text": "We used the best performing four semantic similarity models to evaluate the summarization models. The values under Cross-Encoder are the average similarity scores predicted by the models; whereas, the values under Bi-Encoder are the average cosine similarities of sentence embeddings computed by these models. ROUGE We reported F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L. ROUGE scores were computed using rouge package version 0.3.1. 19",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Evaluations",
"sec_num": null
},
{
"text": "BERTScore We reported F1 score for BERTScore (Zhang et al., 2019) .",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Evaluations",
"sec_num": null
},
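{
"text": "A sketch of computing both automatic scores for a single reference/generated summary pair, assuming the rouge and bert-score packages are installed; lang='tr' makes BERTScore pick a multilingual backbone for Turkish.\n\nfrom rouge import Rouge           # the paper reports rouge package version 0.3.1\nfrom bert_score import score as bert_score\n\nreference = 'Reference summary text goes here.'\ngenerated = 'Generated summary text goes here.'\n\nrouge_scores = Rouge().get_scores(generated, reference)[0]\nprint(rouge_scores['rouge-1']['f'], rouge_scores['rouge-2']['f'], rouge_scores['rouge-l']['f'])\n\nprecision, recall, f1 = bert_score([generated], [reference], lang='tr')\nprint(f1.mean().item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Evaluations",
"sec_num": null
},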
{
"text": "Human Evaluations Human evaluations were conducted to show the effectiveness of our semantic similarity based evaluation metric. We randomly selected 50 articles from the test set with their predicted summaries via mT5 model. Following the work of Fabbri et al. (2021), we asked native Turkish annotators to rate each predicted summary in terms of relevance (selection of important content from the source), consistency (the factual alignment between the summary and the summarized source) and fluency (the quality of individual sentences) in the range of 1 (very bad) to 5 (very good).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Evaluations",
"sec_num": null
},
{
"text": "Overall, 5 annotators (3 university students, 1 Ph.D. student, and 1 professor) evaluated the summaries. Average relevance was 3.50 \u00b1 0.78, average consistency was 4.45 \u00b1 0.83, and average fluency was 4.34 \u00b1 0.77.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Evaluations",
"sec_num": null
},
{
"text": "Quantitative Analysis We computed Pearson and Spearman correlations of human judgments with semantic similarity and ROUGE scores. Correlation values can be seen in Table 6 and are visualized in Figure 1 and Figure 2 . 20 The results show that, our cross-encoder models have significantly better correlations with relevance, consistency, fluency, and human average. The correlations are higher compared to the bi-encoder models. This Table 7 : Example articles from MLSUM Turkish test dataset with their reference and generated summaries. The words that appear in both reference and generated summary are in blue, while the semantically similar words are in red. The italic text pieces in the article appear in the generated summary.",
"cite_spans": [
{
"start": 218,
"end": 220,
"text": "20",
"ref_id": null
}
],
"ref_spans": [
{
"start": 164,
"end": 171,
"text": "Table 6",
"ref_id": null
},
{
"start": 194,
"end": 202,
"text": "Figure 1",
"ref_id": null
},
{
"start": 207,
"end": 215,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 433,
"end": 440,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "also shows that predicted similarity scores are more reliable than computed cosine similarities. While the main idea of this paper is to evaluate abstractive summarization, we also showed that an extractive Lead-3 baseline yields better semantic similarity scores compared to the abstractive mT5 although it outperforms the extractive baselines in terms of BERTScore and ROUGE scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We analyzed the effectiveness of our proposed metrics qualitatively as well. In Table 7 , we show two example articles. In the first one, there are some overlapping words between two sentences and they share semantically similar information in the following parts: \"ABD'de bir adam, elindeki sunroof cam\u0131yla otomobillerin\u00f6n camlar\u0131n\u0131 par\u00e7alad\u0131\" and \"ABD'de bir otomobilden s\u00f6kt\u00fcg\u00fc sunroof cam\u0131yla b\u00f6lgede bulunan ara\u00e7lar\u0131n\u00f6n camlar\u0131n\u0131 par\u00e7alayan adam\". So, we can say that both ROUGE and semantic similarity scores can be acceptable for this example. On the other hand, the second example is more critical as it has only one overlapping word between the reference and generated summary; however, there is a high semantic similarity between them and the predicted summary has high human evalua-tion scores. Our proposed metrics can capture this but apparently ROUGE cannot.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": null
},
{
"text": "In this study, we presented the first Turkish semantic textual similarity corpus, called STSb-TR, by translating the original English STSb dataset via machine translation. We showed that the dataset has high quality translations and does not require costly human annotation. We applied state-of-theart models to the STSb-TR dataset, and used the best performing four models as evaluation metrics for the text summarization task. We used natural language inference (NLI) models and observed that we can improve our semantic similarity models. We found high correlations between human judgments and our models, compared to BERTScore and ROUGE scores. Our qualitative analyses showed that the proposed models can capture the semantic similarity of reference and predicted summaries which cannot be caught by ROUGE scores. We conclude that our models can be applied as evaluation metric to abstractive summarization in Turkish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "NLI-TR dataset consists of the translations of SNLI and MultiNLI data sets available on GitHub: https://github.com/ boun-tabi/NLI-TR 2 https://cloud.google.com/translate/docs/basic/ translating-text 3 https://github.com/verimsu/STSb-TR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mjpost/sacrebleu 5 Only the punctuation marks around the word and at the end of sentences were deleted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tfhub.dev/google/ universal-sentence-encoder-multilingual/3 8 https://tfhub.dev/google/ universal-sentence-encoder-multilingual-large/3 9 https://huggingface.co/dbmdz/bert-base-turkish-cased 10 https://huggingface.co/bert-base-multilingual-cased 11 The output of the CLS vectors yields significantly lower results compared to the results obtained.12 https://huggingface.co/xlm-roberta-base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.sbert.net/examples/applications/ cross-encoder/README.html 14 Only S-XLM-R + STS was trained with 20 random seeds to have at least 5 successful models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.internethaber.com 16 https://github.com/huggingface/datasets/tree/master/ datasets/mlsum",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/ozcangundes/ mt5-small-turkish-summarization18 During inference, we set max summary length to 120.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is the package and version that the authors of ML-SUM reported: https://github.com/recitalAI/MLSUM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All the correlations were significant (p<.05) except for the correlations between Fluency and S-BERTurk+STS, BERTScore, ROUGE-L as well as correlations between BERTScore and Consistency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Ara\u00e7lar\u0131n kaputlar\u0131na da \u00e7\u0131kan adam, \u00e7evredeki bir\u00e7ok araca maddi hasar verdi. Sonras\u0131nda, \u00e7evrede bulunan otopark g\u00f6revlisi adama m\u00fcdahale etmek istedi. Elindeki cam tavanla bu sefer g\u00f6revliye sald\u0131ran adam, \u00e7evredeki diger insanlar\u0131n m\u00fcdahalesiyle etkisiz hale getirildi. Olay yerine gelen polis, adam\u0131 g\u00f6zalt\u0131na al\u0131rken; adam\u0131n uyu\u015fturucu etkisi alt\u0131nda oldugu bildirildi. Reference Summary ABD'de bir adam, elindeki sunroof cam\u0131yla otomobillerin\u00f6n camlar\u0131n\u0131 par\u00e7alad\u0131. Adama m\u00fcdahale etmek isteyen park g\u00f6revlisi de adam\u0131n sald\u0131r\u0131s\u0131na ugrad\u0131. Generated Summary ABD'de bir otomobilden s\u00f6kt\u00fcg\u00fc sunroof cam\u0131yla b\u00f6lgede bulunan ara\u00e7lar\u0131n\u00f6n camlar\u0131n\u0131 par\u00e7alayan adam, \u00e7evredeki diger insanlar\u0131n m\u00fcdahalesiyle etkisiz hale getirildi",
"authors": [],
"year": null,
"venue": "Seattle \u015fehrinin merkezinde meydana gelen olayda, Kanadal\u0131 oldugu belirtilen adam, bir otomobilden s\u00f6kt\u00fcg\u00fc sunroof cam\u0131yla b\u00f6lgede bulunan ara\u00e7lar\u0131n\u00f6n camlar\u0131n\u0131 par\u00e7alad\u0131",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seattle \u015fehrinin merkezinde meydana gelen olayda, Kanadal\u0131 oldugu belirtilen adam, bir otomobilden s\u00f6kt\u00fcg\u00fc sunroof cam\u0131yla b\u00f6lgede bulunan ara\u00e7lar\u0131n\u00f6n camlar\u0131n\u0131 par\u00e7alad\u0131. Ara\u00e7lar\u0131n kaputlar\u0131na da \u00e7\u0131kan adam, \u00e7evredeki bir\u00e7ok araca maddi hasar verdi. Sonras\u0131nda, \u00e7evrede bulunan otopark g\u00f6revlisi adama m\u00fcdahale etmek istedi. Elindeki cam tavanla bu sefer g\u00f6revliye sald\u0131ran adam, \u00e7evredeki diger insanlar\u0131n m\u00fcdahalesiyle etkisiz hale getirildi. Olay yerine gelen polis, adam\u0131 g\u00f6zalt\u0131na al\u0131rken; adam\u0131n uyu\u015fturucu etkisi alt\u0131nda oldugu bildirildi. Reference Summary ABD'de bir adam, elindeki sunroof cam\u0131yla otomobillerin\u00f6n camlar\u0131n\u0131 par\u00e7alad\u0131. Adama m\u00fcdahale etmek isteyen park g\u00f6revlisi de adam\u0131n sald\u0131r\u0131s\u0131na ugrad\u0131. Generated Summary ABD'de bir otomobilden s\u00f6kt\u00fcg\u00fc sunroof cam\u0131yla b\u00f6lgede bulunan ara\u00e7lar\u0131n\u00f6n camlar\u0131n\u0131 par\u00e7alayan adam, \u00e7evredeki diger insanlar\u0131n m\u00fcdahalesiyle etkisiz hale getirildi. ROUGE-(1/2/L): 30.00, 10.53, 25.00",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic Similarity Scores BERTurk+NLI+STS",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semantic Similarity Scores BERTurk+NLI+STS (Cross Encoder / Bi-Encoder): 73.67 / 74.35",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Article-2",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Article-2",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Alevlerin b\u00fcy\u00fcmesiyle birlikte otomobil ate\u015f topuna d\u00f6nd\u00fc. S\u00fcr\u00fcc\u00fc Durmu\u015f hemen itfaiye ekiplerine haber verirken olay yerine gelen Manisa B\u00fcy\u00fck\u015fehir Belediyesi Salihli\u0130tfaiye Amirligi ekipleri yang\u0131na m\u00fcdahale etti. S\u00f6nd\u00fcr\u00fclen otomobil kullan\u0131lamaz hale geldi. Yang\u0131nla ilgili soru\u015fturma ba\u015flat\u0131ld\u0131. Reference Summary Manisa'n\u0131n Salihli il\u00e7esinde seyir halinde ilerleyen otomobil alevlere teslim oldu. Generated Summary Manisa'da seyir halindeki otomobilin motor b\u00f6l\u00fcm\u00fcnde yang\u0131n \u00e7\u0131kt\u0131",
"authors": [
{
"first": "Salihli-K\u00f6pr\u00fcba\u015f\u0131",
"middle": [],
"last": "Yang\u0131n",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yolu Taytan Mahallesi \u00c7 Ald\u0131rl\u0131k Mevkisinde Meydana",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Geldi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang\u0131n, Salihli-K\u00f6pr\u00fcba\u015f\u0131 yolu Taytan Mahallesi \u00c7 ald\u0131rl\u0131k mevkisinde meydana geldi. Edinilen bilgiye g\u00f6re, seyir halinde ilerleyen Servet Durmu\u015f idaresindeki 43 HE 737 plakal\u0131 otomobilin motor b\u00f6l\u00fcm\u00fcnde yang\u0131n \u00e7\u0131kt\u0131. Alevlerin b\u00fcy\u00fcmesiyle birlikte otomobil ate\u015f topuna d\u00f6nd\u00fc. S\u00fcr\u00fcc\u00fc Durmu\u015f hemen itfaiye ekiplerine haber verirken olay yerine gelen Manisa B\u00fcy\u00fck\u015fehir Belediyesi Salihli\u0130tfaiye Amirligi ekipleri yang\u0131na m\u00fcdahale etti. S\u00f6nd\u00fcr\u00fclen otomobil kullan\u0131lamaz hale geldi. Yang\u0131nla ilgili soru\u015fturma ba\u015flat\u0131ld\u0131. Reference Summary Manisa'n\u0131n Salihli il\u00e7esinde seyir halinde ilerleyen otomobil alevlere teslim oldu. Generated Summary Manisa'da seyir halindeki otomobilin motor b\u00f6l\u00fcm\u00fcnde yang\u0131n \u00e7\u0131kt\u0131. ROUGE-(1/2/L): 11.11 / 0 / 11.11",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic Similarity Scores BERTurk+NLI+STS",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semantic Similarity Scores BERTurk+NLI+STS (Cross Encoder / Bi-Encoder): 76.16 / 81.75",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Carmen",
"middle": [],
"last": "References Eneko Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Gonzalez Agirre",
"suffix": ""
},
{
"first": "German",
"middle": [
"Rigau"
],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Claramunt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "SemEval-2016. 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "16--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez Agirre, Rada Mihalcea, Ger- man Rigau Claramunt, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similar- ity, monolingual and cross-lingual evaluation. In SemEval-2016. 10th International Workshop on Se- mantic Evaluation; 2016 Jun 16-17;",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "ACL (Association for Computational Linguistics)",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "San Diego",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "San Diego, CA. Stroudsburg (PA): ACL; 2016. p. 497-511. ACL (As- sociation for Computational Linguistics).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Data and representation for Turkish natural language inference",
"authors": [
{
"first": "Emrah",
"middle": [],
"last": "Budur",
"suffix": ""
},
{
"first": "Tunga",
"middle": [],
"last": "R\u0131za\u00f6z\u00e7elik",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "G\u00fcng\u00f6r",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8253--8267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emrah Budur, R\u0131za\u00d6z\u00e7elik, Tunga G\u00fcng\u00f6r, and Christopher Potts. 2020. Data and representation for Turkish natural language inference. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8253-8267.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), pages 1-14.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n,\u00c9douard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization",
"authors": [
{
"first": "Esin",
"middle": [],
"last": "Durmus",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5055--5070",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faith- fulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5055- 5070.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SummEval: Re-evaluating summarization evaluation",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Alexander R Fabbri",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Kry\u015bci\u0144ski",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "9",
"issue": "",
"pages": "391--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander R Fabbri, Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Asso- ciation for Computational Linguistics, 9:391-409.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Ranking generated summaries by correctness: An interesting but challenging application for natural language inference",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Falke",
"suffix": ""
},
{
"first": "F",
"middle": [
"R"
],
"last": "Leonardo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Prasetya Ajie Utama",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2214--2220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Falke, Leonardo FR Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An in- teresting but challenging application for natural lan- guage inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2214-2220.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Languageagnostic BERT sentence embedding",
"authors": [
{
"first": "Fangxiaoyu",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.01852"
]
},
"num": null,
"urls": [],
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language- agnostic BERT sentence embedding. arXiv preprint arXiv:2007.01852.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluating models -automl translation documentation -google cloud",
"authors": [
{
"first": "",
"middle": [],
"last": "Google",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Google. Evaluating models -automl translation doc- umentation -google cloud.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Why not simply translate? a first Swedish evaluation benchmark for semantic similarity",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Isbister",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.03116"
]
},
"num": null,
"urls": [],
"raw_text": "Tim Isbister and Magnus Sahlgren. 2020. Why not simply translate? a first Swedish evaluation benchmark for semantic similarity. arXiv preprint arXiv:2009.03116.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Evaluating the factual consistency of abstractive text summarization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Kryscinski",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9332--9346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text summariza- tion branches out, pages 74-81.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Yisi-a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "507--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo. 2019. Yisi-a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 2: Shared Task Papers, Day 1), pages 507-513.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "31",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A call for clarity in reporting bleu scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting bleu scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "COMET: A neural framework for MT evaluation",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"C"
],
"last": "Farinha",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2685--2702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 2685-2702.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Why comparing single performance scores does not allow to draw conclusions about machine learning approaches",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.09578"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2018. Why com- paring single performance scores does not allow to draw conclusions about machine learning ap- proaches. arXiv preprint arXiv:1803.09578.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3973--3983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3973-3983.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "MLSUM: The multilingual summarization corpus",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Scialom",
"suffix": ""
},
{
"first": "Paul-Alexis",
"middle": [],
"last": "Dray",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Piwowarski",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Staiano",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8051--8067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. MLSUM: The multilingual summarization corpus. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8051-8067.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "BLEURT: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7881--7892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "BIG-PATENT: A large-scale dataset for abstractive and coherent summarization",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2204--2213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Sharma, Chen Li, and Lu Wang. 2019. BIG- PATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2204-2213.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Automatic machine translation evaluation in many languages via zero-shot paraphrasing",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "90--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Thompson and Matt Post. 2020. Automatic ma- chine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 90-121.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Asking and answering questions to evaluate the factual consistency of summaries",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5008--5020",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- tual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 5008-5020.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Barua",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11934"
]
},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mi- hir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A mas- sively multilingual pre-trained text-to-text trans- former. arXiv preprint arXiv:2010.11934.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Multilingual universal sentence encoder for semantic retrieval",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Amin",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jax",
"middle": [],
"last": "Law",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Hernandez Abrego",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Yun-Hsuan",
"middle": [],
"last": "Sung",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "87--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-hsuan Sung, et al. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87-94.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "BERTScore: Evaluating text generation with BERT",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Reducing quantity hallucinations in abstractive summarization",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.13312"
]
},
"num": null,
"urls": [],
"raw_text": "Zheng Zhao, Shay B Cohen, and Bonnie Webber. 2020. Reducing quantity hallucinations in abstractive sum- marization. arXiv preprint arXiv:2009.13312.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Spearman correlations between different evaluation metrics and human evaluations.",
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table/>",
"text": "Sample translations from STSb-TR dataset and the corresponding labels taken from the English dataset.",
"num": null,
"type_str": "table"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>S2: Group of people sitting at table of</td></tr><tr><td>restaurant.</td></tr><tr><td>T2: Bir grup insan restoran masada otu-</td></tr><tr><td>ruyor.</td></tr><tr><td>C2: Bir grup insan restoran masas\u0131nda</td></tr><tr><td>oturuyor.</td></tr></table>",
"text": "S1: Old green bottle sitting on a table. T1: Bir masada oturan eski ye\u015fil \u015fi\u015fe. C1: Bir masada duran eski ye\u015fil \u015fi\u015fe.Another typical error is possessive agreement mismatch. For example, the sentence S2 is translated as T2 but the correct translation would be C2.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>shows vocabulary size (cased and un-</td></tr><tr><td>cased), type/token ratio, average word length and</td></tr><tr><td>average sentence length values for English and</td></tr><tr><td>Turkish datasets. 5</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td/><td>Train</td><td>Dev</td><td>Test Total</td></tr><tr><td>News</td><td>3,299</td><td>500</td><td>500 4,299</td></tr><tr><td colspan=\"2\">Caption 2,000</td><td>625</td><td>625 3,250</td></tr><tr><td>Forum</td><td>450</td><td>375</td><td>254 1,079</td></tr><tr><td>Total</td><td colspan=\"3\">5,749 1,500 1,379 8,628</td></tr></table>",
"text": "English and Turkish STSb dataset statistics. Vocab size is the word count and type/token ratio is the number of different words divided by the total number of words. Word length is the amount of characters in the word and sentence length is the number of words in a sentence.",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "STSb dataset statistics in terms of number of sentence pairs.",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>: Experiment results for semantic textual sim-</td></tr><tr><td>ilarity. BERTurk, M-BERT and XLM-R are cross-</td></tr><tr><td>encoder models. S-BERTurk, S-M-BERT and S-XLM-</td></tr><tr><td>R are bi-encoder models. Pearson and Spearman corre-</td></tr><tr><td>lations were reported as \u03c1 x 100.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF8": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Relevance</td><td colspan=\"2\">Consistency</td><td colspan=\"2\">Fluency</td><td colspan=\"2\">Human Average</td></tr><tr><td>Metric</td><td colspan=\"8\">Pearson Spearman Pearson Spearman Pearson Spearman Pearson Spearman</td></tr><tr><td>Rouge-1</td><td>42.79</td><td>43.87</td><td>28.18</td><td>32.36</td><td>21.40</td><td>20.30</td><td>36.79</td><td>37.51</td></tr><tr><td>Rouge-2</td><td>38.26</td><td>41.63</td><td>27.39</td><td>35.78</td><td>16.43</td><td>20.83</td><td>32.76</td><td>38.02</td></tr><tr><td>Rouge-L</td><td>41.83</td><td>41.95</td><td>26.29</td><td>28.85</td><td>20.17</td><td>18.63</td><td>35.15</td><td>35.11</td></tr><tr><td>BERTScore</td><td>45.49</td><td>45.75</td><td>25.14</td><td>22.47</td><td>24.74</td><td>19.85</td><td>37.88</td><td>38.07</td></tr><tr><td>S-BERTurk+STS</td><td>55.44</td><td>52.82</td><td>30.25</td><td>30.04</td><td>25.63</td><td>26.70</td><td>44.26</td><td>45.86</td></tr><tr><td>S-BERTurk+NLI+STS</td><td>58.77</td><td>58.72</td><td>32.80</td><td>32.67</td><td>31.24</td><td>30.17</td><td>48.80</td><td>51.85</td></tr><tr><td>BERTurk+STS</td><td>56.87</td><td>53.54</td><td>38.02</td><td>32.46</td><td>34.10</td><td>27.88</td><td>51.32</td><td>48.59</td></tr><tr><td>BERTurk+NLI+STS</td><td>59.98</td><td>59.17</td><td>39.95</td><td>34.24</td><td>34.62</td><td>29.31</td><td>53.54</td><td>52.10</td></tr></table>",
"text": "Results of the summarization models on MLSUM dataset. The values under Cross-Encoder are the average similarity scores predicted by the models; whereas, the values under Bi-Encoder are the average cosine similarities of sentence embeddings computed by these models. All the values were scaled to 100.",
"num": null,
"type_str": "table"
}
}
}
}