{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:16:34.255985Z" }, "title": "Data-Driven Sentence Simplification: Survey and Benchmark", "authors": [ { "first": "Fernando", "middle": [], "last": "Alva-Manchego", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sheffield", "location": {} }, "email": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sheffield", "location": {} }, "email": "c.scarton@sheffield.ac.uk" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": {} }, "email": "l.specia@imperial.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sentence Simplification (SS) aims to modify a sentence in order to make it easier to read and understand. In order to do so, several rewriting transformations can be performed such as replacement, reordering, and splitting. Executing these transformations while keeping sentences grammatical, preserving their main idea, and generating simpler output, is a challenging and still far from solved problem. In this article, we survey research on SS, focusing on approaches that attempt to learn how to simplify using corpora of aligned original-simplified sentence pairs in English, which is the dominant paradigm nowadays. We also include a benchmark of different approaches on common data sets so as to compare them and highlight their strengths and limitations. We expect that this survey will serve as a starting point for researchers interested in the task and help spark new ideas for future developments.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Sentence Simplification (SS) aims to modify a sentence in order to make it easier to read and understand. 
In order to do so, several rewriting transformations can be performed such as replacement, reordering, and splitting. Executing these transformations while keeping sentences grammatical, preserving their main idea, and generating simpler output, is a challenging and still far from solved problem. In this article, we survey research on SS, focusing on approaches that attempt to learn how to simplify using corpora of aligned original-simplified sentence pairs in English, which is the dominant paradigm nowadays. We also include a benchmark of different approaches on common data sets so as to compare them and highlight their strengths and limitations. We expect that this survey will serve as a starting point for researchers interested in the task and help spark new ideas for future developments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text Simplification (TS) is the task of modifying the content and structure of a text in order to make it easier to read and understand, while retaining its main idea and approximating its original meaning. A simplified version of a text could benefit users with several reading difficulties, such as non-native speakers (Paetzold 2016) , people with aphasia (Carroll et al. 1998) , dyslexia (Rello et al. 2013b) , or autism (Evans, Orasan, and Dornescu 2014) . Simplifying a text automatically could also help improve performance on other language processing tasks, such as parsing (Chandrasekar, Doran, and Srinivas 1996) , summarization (Vanderwende et al. 2007; Silveira and Branco 2012) , information extraction (Evans 2011) , semantic role labeling (Vickrey and Koller 2008) , and Machine Translation (MT) (Hasler et al. 2017) .", "cite_spans": [ { "start": 321, "end": 336, "text": "(Paetzold 2016)", "ref_id": "BIBREF85" }, { "start": 359, "end": 380, "text": "(Carroll et al. 1998)", "ref_id": "BIBREF22" }, { "start": 392, "end": 412, "text": "(Rello et al. 
2013b)", "ref_id": "BIBREF95" }, { "start": 425, "end": 459, "text": "(Evans, Orasan, and Dornescu 2014)", "ref_id": "BIBREF36" }, { "start": 583, "end": 623, "text": "(Chandrasekar, Doran, and Srinivas 1996)", "ref_id": "BIBREF23" }, { "start": 640, "end": 665, "text": "(Vanderwende et al. 2007;", "ref_id": "BIBREF122" }, { "start": 666, "end": 691, "text": "Silveira and Branco 2012)", "ref_id": null }, { "start": 717, "end": 729, "text": "(Evans 2011)", "ref_id": "BIBREF37" }, { "start": 755, "end": 780, "text": "(Vickrey and Koller 2008)", "ref_id": "BIBREF124" }, { "start": 812, "end": 832, "text": "(Hasler et al. 2017)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Most research on TS has focused on studying simplification of individual sentences. Reducing the scope of the problem has allowed the easier collection and curation of corpora, as well as adapting methods from other text generation tasks, mainly MT. It can be argued that \"true\" TS (i.e., document-level) cannot be achieved by simplifying sentences one at a time, and we make a call in Section 6 for the field to move in that direction. However, because the goal of this article is to review what has been done in TS so far, our survey is limited to Sentence Simplification (SS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "When simplifying sentences, different rewriting transformations are performed, which range from replacing complex words or phrases for simpler synonyms, to changing the syntactic structure of the sentence (e.g., splitting or reordering components). Modern SS approaches are data-driven; that is, they attempt to learn these transformations using parallel corpora of aligned original-simplified sentences. This results in general simplification models that could be used for any specific type of audience, depending on the data used during training. 
Although significant progress has been made in this direction, current models are not yet able to execute the task fully automatically with the performance levels required to be directly useful for end users. As such, we believe it is important to review current research in the field, and to analyze it critically and empirically to better identify areas that could be improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this article, we present a survey of research on data-driven SS for English, the dominant paradigm nowadays, and complement it with a benchmark of models whose outputs on standard data sets are publicly available. Our survey differs from other SS surveys in several aspects:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Shardlow (2014) overviews automatic SS with short notes on different approaches for the task, whereas we provide a more in-depth explanation of the mechanics of how the simplification models are learned, and review the resources used to train and test them.", "cite_spans": [ { "start": 2, "end": 17, "text": "Shardlow (2014)", "ref_id": "BIBREF102" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Siddharthan (2014) focuses on the motivations for TS and mostly provides details for the earliest automatic SS approaches, which are not necessarily data-driven. We review state-of-the-art models, focusing on those that learn to rewrite a text from examples available in corpora (i.e., data-driven), leaving aside approaches based on manually constructed rules.", "cite_spans": [ { "start": 2, "end": 20, "text": "Siddharthan (2014)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Saggion (2017) introduces data-driven SS in a Learning to Simplify book chapter. 
We provide a more extensive literature review of a larger number of approaches and resources. For instance, we include models based on neural sequence-to-sequence architectures (Section 4.4).", "cite_spans": [ { "start": 2, "end": 16, "text": "Saggion (2017)", "ref_id": "BIBREF97" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Finally, our survey introduces a benchmark with common data sets and metrics, so as to provide an empirical comparison between different approaches. This benchmark consists of commonly used evaluation metrics and novel measures of per-transformation performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Different types of readers could benefit from a simplified version of a sentence. Mason and Kendall (1978) report that separating a complex sentence into shorter structures can improve comprehension in low literacy readers. Siddharthan (2014) refers to studies on deaf children that show their difficulty dealing with complex structures, like coordination, subordination, and pronominalization (Quigley, Power, and Steinkamp 1977) , or passive voice and relative clauses (Robbins and Hatcher 1981) . Shewan (1985) states that aphasic adults' comprehension of a sentence decreases as its grammatical complexity increases. An eye-tracking study by Rello et al. (2013a) determined that people with dyslexia read faster if more frequent words are used in a sentence, and also that their understanding of the text improves with shorter words. Crossley et al. 
(2007) point out that simplified texts are the most commonly used for teaching beginners and intermediate English learners.", "cite_spans": [ { "start": 82, "end": 106, "text": "Mason and Kendall (1978)", "ref_id": "BIBREF70" }, { "start": 224, "end": 242, "text": "Siddharthan (2014)", "ref_id": "BIBREF103" }, { "start": 394, "end": 430, "text": "(Quigley, Power, and Steinkamp 1977)", "ref_id": "BIBREF91" }, { "start": 471, "end": 497, "text": "(Robbins and Hatcher 1981)", "ref_id": "BIBREF96" }, { "start": 500, "end": 513, "text": "Shewan (1985)", "ref_id": null }, { "start": 648, "end": 668, "text": "Rello et al. (2013a)", "ref_id": "BIBREF94" }, { "start": 840, "end": 862, "text": "Crossley et al. (2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation for Sentence Simplification", "sec_num": "1.1" }, { "text": "Motivated by the potential benefits of simplified texts, research has been dedicated to developing simplification methods for specific target audiences: writers (Candido Jr. et al. 2009) , low literacy readers (Watanabe et al. 2009) , English learners (Petersen 2007) , non-native English speakers (Paetzold 2016) , children (De Belder and Moens 2010) , and people suffering from aphasia (Devlin and Tait 1998; Carroll et al. 1998) , dyslexia (Rello et al. 2013b) , or autism (Evans, Orasan, and Dornescu 2014) . Furthermore, simplifying sentences automatically could improve performance on other Natural Language Processing tasks, as has been shown for parsing (Chandrasekar, Doran, and Srinivas 1996) , summarization (Siddharthan, Nenkova, and McKeown 2004; Vanderwende et al. 2007 ; Silveira and Branco 2012), information extraction (Klebanov, Knight, and Marcu 2004; Evans 2011) , relation extraction (Niklaus et al. 2016) , semantic role labeling (Vickrey and Koller 2008), and MT (Mirkin, Venkatapathy, and Dymetman 2013; Mishra et al. 2014; \u0160tajner and Popovi\u0107 2016; Hasler et al. 2017) . 
We refer the interested reader to Siddharthan (2014) for a more in-depth review of studies on the benefits of simplification for different target audiences and Natural Language Processing applications.", "cite_spans": [ { "start": 161, "end": 186, "text": "(Candido Jr. et al. 2009)", "ref_id": "BIBREF21" }, { "start": 210, "end": 232, "text": "(Watanabe et al. 2009)", "ref_id": "BIBREF126" }, { "start": 252, "end": 267, "text": "(Petersen 2007)", "ref_id": "BIBREF88" }, { "start": 298, "end": 313, "text": "(Paetzold 2016)", "ref_id": "BIBREF85" }, { "start": 325, "end": 351, "text": "(De Belder and Moens 2010)", "ref_id": "BIBREF33" }, { "start": 388, "end": 410, "text": "(Devlin and Tait 1998;", "ref_id": "BIBREF34" }, { "start": 411, "end": 431, "text": "Carroll et al. 1998)", "ref_id": "BIBREF22" }, { "start": 443, "end": 463, "text": "(Rello et al. 2013b)", "ref_id": "BIBREF95" }, { "start": 476, "end": 510, "text": "(Evans, Orasan, and Dornescu 2014)", "ref_id": "BIBREF36" }, { "start": 668, "end": 708, "text": "(Chandrasekar, Doran, and Srinivas 1996)", "ref_id": "BIBREF23" }, { "start": 725, "end": 765, "text": "(Siddharthan, Nenkova, and McKeown 2004;", "ref_id": "BIBREF104" }, { "start": 766, "end": 789, "text": "Vanderwende et al. 2007", "ref_id": "BIBREF122" }, { "start": 842, "end": 876, "text": "(Klebanov, Knight, and Marcu 2004;", "ref_id": "BIBREF62" }, { "start": 877, "end": 888, "text": "Evans 2011)", "ref_id": "BIBREF37" }, { "start": 911, "end": 932, "text": "(Niklaus et al. 2016)", "ref_id": "BIBREF79" }, { "start": 958, "end": 969, "text": "Vickrey and", "ref_id": "BIBREF124" }, { "start": 970, "end": 986, "text": "Koller 2008, and", "ref_id": "BIBREF124" }, { "start": 987, "end": 1016, "text": "MT (Mirkin, Venkatapathy, and", "ref_id": null }, { "start": 1017, "end": 1031, "text": "Dymetman 2013;", "ref_id": "BIBREF75" }, { "start": 1032, "end": 1051, "text": "Mishra et al. 
2014;", "ref_id": "BIBREF76" }, { "start": 1052, "end": 1077, "text": "\u0160tajner and Popovi\u0107 2016;", "ref_id": null }, { "start": 1078, "end": 1097, "text": "Hasler et al. 2017)", "ref_id": "BIBREF49" }, { "start": 1134, "end": 1152, "text": "Siddharthan (2014)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation for Sentence Simplification", "sec_num": "1.1" }, { "text": "A few corpus studies have been carried out to determine how humans simplify sentences. These studies shed some light on the simplification transformations that an automatic SS model should be expected to perform. Petersen and Ostendorf (2007) analyzed a corpus of 104 original and manually simplified news articles in English to understand how professional editors performed the simplifications, so that they could later propose ways to automate the process. For their study, every sentence in the simplified version of an article was manually aligned to a corresponding sentence (or sentences) in the original version. Each original-simplified alignment was then categorized as dropped (1 to 0), split (1 to \u2265 2), total (1 to 1), or merged (2 to 1). Their analysis then focused on the split and dropped alignments. The authors determined that the decision to split an original sentence depends on some syntactic features (number of nouns, pronouns, verbs, etc.) and, most importantly, its length. On the other hand, the decision to drop a sentence may be influenced by its position in the text and by the redundancy of the information it contains. Alu\u00edsio et al. (2008) studied six corpora of simple texts (different genres) and a corpus of non-simple news text in Brazilian Portuguese. Their analysis included counting simple words and discourse markers; calculating average sentence lengths; and counting prepositional phrases, adjectives, adverbs, clauses, and other features. 
As a result, a manual for Brazilian Portuguese SS was created that contains a set of rules for performing the task (Specia, Alu\u00edsio, and Pardo 2008) . In addition, as part of the same project, Caseli et al. (2009) implemented a tool to aid manual SS considering the following transformations: non-simplification, simple rewriting, strong rewriting (similar content but very different writing), subject-verb-object reordering, passive to active voice transformation, clause reordering, sentence splitting, sentence joining, and full or partial sentence dropping. Bott and Saggion (2011a) worked with a data set of 200 news articles in Spanish with their corresponding manual simplifications. After automatically aligning the sentences, the authors determined the simplification transformations performed: change (e.g., difficult words, pronouns, voice of verb), delete (words, phrases or clauses), insert (words or phrases), split (relative clauses, coordination, etc.), proximization (add locative phrases, change from third to second person), reorder, select, and join (sentences). The first four transformations are the most common in their corpus.", "cite_spans": [ { "start": 213, "end": 242, "text": "Petersen and Ostendorf (2007)", "ref_id": "BIBREF88" }, { "start": 915, "end": 955, "text": "(number of nouns, pronouns, verbs, etc.)", "ref_id": null }, { "start": 1138, "end": 1159, "text": "Alu\u00edsio et al. (2008)", "ref_id": "BIBREF2" }, { "start": 1584, "end": 1617, "text": "(Specia, Alu\u00edsio, and Pardo 2008)", "ref_id": "BIBREF108" }, { "start": 1662, "end": 1682, "text": "Caseli et al. (2009)", "ref_id": "BIBREF23" }, { "start": 2031, "end": 2055, "text": "Bott and Saggion (2011a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Transformations for Simplification", "sec_num": "1.2" }, { "text": "From the definition of simplification, the task could easily be confused with summarization. 
As Shardlow (2014) points out, summarization focuses on reducing length and content by removing unimportant or redundant information. In simplification, some deletion of content can also be performed. However, we could additionally replace words with more explanatory phrases, make co-references explicit, add connectors to improve fluency, and so forth. As a consequence, a simplified text could end up being longer than its original version while still improving the readability of the text. Therefore, although summarization and simplification are related, they have different objectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Text Rewriting Tasks", "sec_num": "1.3" }, { "text": "Another related task is sentence compression, which consists of reducing the length of a sentence without losing its main idea and keeping it grammatical (Jing 2000) . Most approaches focus on deleting unnecessary words. As such, this could be considered a subtask of the simplification process, which also encompasses more complex transformations. Abstractive sentence compression (Cohn and Lapata 2013) , on the other hand, does include transformations like substitution, reordering, and insertion. However, the goal is still to reduce content without necessarily improving readability.", "cite_spans": [ { "start": 154, "end": 165, "text": "(Jing 2000)", "ref_id": "BIBREF54" }, { "start": 385, "end": 407, "text": "(Cohn and Lapata 2013)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Text Rewriting Tasks", "sec_num": "1.3" }, { "text": "Split-and-rephrase (Narayan et al. 2017 ) focuses on splitting a sentence into several shorter ones, and making the necessary rephrasings to preserve meaning and grammaticality. Because SS could involve deletion, it would not always be able to preserve meaning. Rather, its editing decisions may remove details that could distract the reader from understanding the text's central message. 
As such, split-and-rephrase could be considered another possible text transformation within simplification.", "cite_spans": [ { "start": 19, "end": 39, "text": "(Narayan et al. 2017", "ref_id": "BIBREF78" } ], "ref_spans": [], "eq_spans": [], "section": "Related Text Rewriting Tasks", "sec_num": "1.3" }, { "text": "In the remainder of this article, Section 2 details the most commonly used resources for SS, with emphasis on corpora used to train SS models. Section 3 explains how the output of a simplification model is generally evaluated. The two main contributions of this article are given in Section 4, which presents a critical summary of the different approaches that have been used to train data-driven SS models, and Section 5, which benchmarks most of these models using common metrics and data sets to compare them and establish the advantages and disadvantages of each approach. Finally, based on the literature review and analysis presented, in Section 6 we provide directions for future research in the area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure of this Article", "sec_num": "1.4" }, { "text": "A data-driven SS model is one that learns to simplify from examples in corpora. In particular, for learning sentence-level transformations, a model requires instances of original sentences and their corresponding simplified versions. In this section, we present the most commonly used resources for SS that provide these examples, including parallel corpora and dictionary-like databases. For each parallel corpus in particular, we outline the motivations behind it, explain how the much-needed sentence alignments were extracted, and report on studies about the suitability of the resource for SS research. We describe resources for English in detail and give an overview of resources available for other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora for Simplification", "sec_num": "2." 
}, { "text": "As presented in Section 1.2, an original sentence could be aligned to one (1-to-1) or more (1-to-N) simplified sentences. At the same time, several original sentences could be aligned to a single simplified one (N-to-1). The corpora we describe in this section contain many of these types of alignments. In the remainder of this article, we use the term simplification instance to refer to any type of sentence alignment in a general way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora for Simplification", "sec_num": "2." }, { "text": "The Simple English Wikipedia (SEW) 1 is a version of the online English Wikipedia (EW) 2 primarily aimed at English learners, but which can also be beneficial for students, children, and adults with learning difficulties (Simple Wikipedia 2017b). With this purpose, articles in SEW use fewer words and simpler grammatical structures. For example, writers are encouraged to use the list of words of Basic English (Ogden 1930) , which contains 850 words presumed to be sufficient for everyday life communication. Authors also have guidelines on how to create syntactically simple sentences by, for example, giving preference to the subject-verb-object order for their sentences, and avoiding compound sentences (Simple Wikipedia 2017a).", "cite_spans": [ { "start": 412, "end": 424, "text": "(Ogden 1930)", "ref_id": "BIBREF81" } ], "ref_spans": [], "eq_spans": [], "section": "Main -Simple English Wikipedia", "sec_num": "2.1" }, { "text": "2.1.1 Simplification Instances. Much of the popularity of using Wikipedia for research in SS comes from publicly available automatically collected alignments between sentences of equivalent articles in EW and SEW. 
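The alignment typology introduced above (1-to-1, 1-to-N, N-to-1) can be made concrete with a tiny helper; this is an illustrative sketch of ours, not code from any of the surveyed works:

```python
def alignment_type(num_original: int, num_simplified: int) -> str:
    """Classify a simplification instance by how many original and
    simplified sentences it links (typology from Section 1.2)."""
    if num_original == 1 and num_simplified == 1:
        return "1-to-1"  # one sentence rewritten as one
    if num_original == 1 and num_simplified > 1:
        return "1-to-N"  # a split: one sentence becomes several
    if num_original > 1 and num_simplified == 1:
        return "N-to-1"  # a merge: several sentences joined into one
    return "other"
```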
Several techniques have been explored to produce such alignments with reasonable quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main -Simple English Wikipedia", "sec_num": "2.1" }, { "text": "A first approach consists of aligning texts according to their term frequency-inverse document frequency (tf-idf) cosine similarity. For the PWKP corpus, Zhu, Bernhard, and Gurevych (2010) measured this directly at sentence-level between all sentences of each article pair, and sentences whose similarity was above a certain threshold were aligned. For the C&K-1 (Coster and Kauchak 2011b) and C&K-2 (Kauchak 2013) corpora, the authors first aligned paragraphs with tf-idf cosine similarity, and then found the best overall sentence alignment with the dynamic programming algorithm proposed by Barzilay and Elhadad (2003) . This algorithm takes context into consideration: The similarity between two sentences is affected by their proximity to pairs of sentences with high similarity.", "cite_spans": [ { "start": 594, "end": 621, "text": "Barzilay and Elhadad (2003)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 768, "end": 775, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Main -Simple English Wikipedia", "sec_num": "2.1" }, { "text": "Table 1: Summary of parallel corpora extracted from EW and SEW. An original sentence can be aligned to one (1-to-1) or more (1-to-N) unique simplified sentences. A (*) indicates that some aligned simplified sentences may not be unique.
Corpus | Instances | Alignment Types
PWKP (Zhu, Bernhard, and Gurevych 2010) | 108K | 1-to-1, 1-to-N
C&K-1 (Coster and Kauchak 2011b) | 137K | 1-to-1, 1-to-N
RevisionWL (Woodsend and Lapata 2011a) | 15K | 1-to-1*, 1-to-N*, N-to-1*
AlignedWL (Woodsend and Lapata 2011a) | 142K | 1-to-1, 1-to-N
C&K-2 (Kauchak 2013) | 167K | 1-to-1, 1-to-N
EW-SEW (Hwang et al. 2015) | 392K | 1-to-1
sscorpus (Kajiwara and Komachi 2016) | 493K | 1-to-1
WikiLarge (Zhang and Lapata 2017) | 286K | 1-to-1*, 1-to-N*, N-to-1*
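As a rough illustration of the threshold-based tf-idf strategy described above, a bare-bones sentence aligner might look as follows (whitespace tokenization and the 0.5 threshold are our simplifying assumptions, not details of the original implementations):

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """Build sparse tf-idf vectors over whitespace-tokenized sentences."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(weight * v.get(word, 0.0) for word, weight in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def align(original, simplified, threshold=0.5):
    """Keep every (original, simplified) index pair above the threshold."""
    vectors = tfidf_vectors(original + simplified)
    orig_vecs, simp_vecs = vectors[:len(original)], vectors[len(original):]
    return [(i, j)
            for i, u in enumerate(orig_vecs)
            for j, v in enumerate(simp_vecs)
            if cosine(u, v) >= threshold]
```

The context-sensitive dynamic programming of Barzilay and Elhadad (2003) would replace the independent pairwise decision in `align` with a globally optimal alignment path.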
Finally, Woodsend and Lapata (2011a) also adopt the two-step process of Coster and Kauchak (2011b) , using tf-idf when compiling the AlignedWL corpus.", "cite_spans": [ { "start": 92, "end": 118, "text": "(Coster and Kauchak 2011b)", "ref_id": "BIBREF28" }, { "start": 150, "end": 177, "text": "(Woodsend and Lapata 2011a)", "ref_id": "BIBREF130" }, { "start": 218, "end": 245, "text": "(Woodsend and Lapata 2011a)", "ref_id": "BIBREF130" }, { "start": 314, "end": 333, "text": "(Hwang et al. 2015)", "ref_id": "BIBREF52" }, { "start": 355, "end": 382, "text": "(Kajiwara and Komachi 2016)", "ref_id": "BIBREF57" }, { "start": 405, "end": 428, "text": "(Zhang and Lapata 2017)", "ref_id": "BIBREF136" }, { "start": 486, "end": 513, "text": "Woodsend and Lapata (2011a)", "ref_id": "BIBREF130" }, { "start": 549, "end": 575, "text": "Coster and Kauchak (2011b)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 272, "end": 286, "text": "(Kauchak 2013)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "Another approach is to take advantage of the revision histories in Wikipedia articles. When editors change the content of an article, they need to comment on what the change was and the reason for it. For the RevisionWL corpus, Woodsend and Lapata (2011a) looked for keywords simple, clarification, or grammar in the revision comments of articles in SEW. Then, they used Unix commands diff and dwdiff to identify modified sections and sentences, respectively, to produce the alignments. This approach is inspired by Yatskar et al. (2010) , who used a similar method to automatically extract high-quality lexical simplifications (e.g., collaborate \u2192 work together).", "cite_spans": [ { "start": 516, "end": 537, "text": "Yatskar et al. (2010)", "ref_id": "BIBREF135" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "More sophisticated techniques for measuring sentence similarity have also been explored. 
For their EW-SEW corpus, Hwang et al. (2015) implemented an alignment method using word-level semantic similarity based on Wiktionary. 3 They first created a graph using synonym information and word-definition co-occurrence in Wiktionary. Then, similarity is measured based on the number of shared neighbors between words. This word-level similarity metric is then combined with a similarity score between dependency structures. This final score is used by a greedy algorithm that forces 1-to-1 matches between original and simplified sentences. Kajiwara and Komachi (2016) propose several similarity measures based on word embedding alignments. Given two sentences, their best metric (1) finds, for each word in one sentence, the word that is most similar to it in the other sentence, and (2) averages the similarities for all words in the sentence. For symmetry, this measure is calculated twice (simplified \u2192 original, original \u2192 simplified) and their average is the final similarity measure between the two sentences. This metric was used to align original and simplified sentences from articles in a 2016 Wikipedia dump and produce the sscorpus. It contains 1-to-1 alignments from sentences whose similarity was above a certain threshold.", "cite_spans": [ { "start": 114, "end": 133, "text": "Hwang et al. (2015)", "ref_id": "BIBREF52" }, { "start": 645, "end": 672, "text": "Kajiwara and Komachi (2016)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "The alignment methods described have produced different versions of parallel corpora from EW and SEW, which are currently used for research in SS. 
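A minimal sketch of this symmetric average-max embedding similarity, assuming pre-trained embeddings are available as a plain word-to-vector dict (the dict and toy vectors in the test below are illustrative, not the embeddings used by the authors):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def avg_max_similarity(source, target, embeddings):
    """One direction of the measure: for each source word, take its best
    cosine match among the target words, then average these maxima."""
    source = [w for w in source if w in embeddings]
    target = [w for w in target if w in embeddings]
    if not source or not target:
        return 0.0
    return sum(max(cosine(embeddings[w], embeddings[v]) for v in target)
               for w in source) / len(source)

def sentence_similarity(original, simplified, embeddings):
    """Symmetric score: average of the two directional scores."""
    return 0.5 * (avg_max_similarity(original, simplified, embeddings)
                  + avg_max_similarity(simplified, original, embeddings))
```

In corpus construction, sentence pairs whose `sentence_similarity` exceeds a chosen threshold would be kept as 1-to-1 instances.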
Table 1 summarizes some of their characteristics.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 154, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "RevisionWL is the smallest parallel corpus listed and its instances may not be as clean as those of the others. A 1-to-1* alignment means that an original sentence can be aligned to a simplified one that appears more than once in the corpus. A 1-to-N* alignment means that an original sentence can be aligned to several simplified sentences, but some (or all of them) repeat more than once in the corpus. Lastly, an N-to-1* alignment means that several original sentences can be aligned to one simplified sentence that repeats more than once in the corpus. This sentence repetition is indicative of misalignments, which makes this corpus noisy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "EW-SEW and sscorpus provide the largest number of instances. These corpora also specify a similarity score per aligned sentence pair, which can help filter out low-confidence instances to reduce noise. Unfortunately, they only contain 1-to-1 alignments. Despite being smaller in size, PWKP, C&K-1, C&K-2, and AlignedWL also offer 1-to-N alignments, which is desirable if we want an SS model to learn how to split sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "Finally, WikiLarge (Zhang and Lapata 2017) joins instances from four Wikipedia-based data sets: PWKP, C&K-2, AlignedWL, and RevisionWL. It is the most common corpus used for training neural sequence-to-sequence models for SS (see Sec. 4.4). However, it is not the largest corpus currently available, and can contain noisy alignments.", "cite_spans": [ { "start": 19, "end": 42, "text": "(Zhang and Lapata 2017)", "ref_id": "BIBREF136" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": null }, { "text": "
Several studies have been carried out to determine the characteristics that make Wikipedia-based corpora suitable (or unsuitable) for the simplification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitability for Simplification", "sec_num": "2.1.2" }, { "text": "Some research has focused on determining if SEW is actually simple. Yasseri, Kornai, and Kert\u00e9sz (2012) conducted a statistical analysis on a dump of the whole corpus from 2010 and concluded that even though SEW articles use fewer complex words and shorter sentences, their syntactic complexity is basically the same as EW (as compared by part-of-speech n-gram distribution).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitability for Simplification", "sec_num": "2.1.2" }, { "text": "Other studies target the automatically produced alignments used to train SS models. Coster and Kauchak (2011b) found that in their corpus (C&K-1), the majority (65%) of simple paragraphs do not align with an original one, and even between aligned paragraphs not every sentence is aligned. Also, around 27% of instances are identical, which could induce SS models to learn to not modify an original sentence, or to perform very conservative rewriting transformations. Xu, Callison-Burch, and Napoles (2015) analyzed 200 randomly selected instances of the PWKP corpus and found that around 50% of the alignments are not real simplifications. Some of them (17%) correspond to misalignments and, on the others (33%), the simple sentence presents the same level of complexity as its counterpart. 
Although instances formed by identical sentence pairs are important for learning when not to simplify, misalignments add noise to the data and prevent models from learning how to perform the task accurately.", "cite_spans": [ { "start": 84, "end": 110, "text": "Coster and Kauchak (2011b)", "ref_id": "BIBREF28" }, { "start": 467, "end": 505, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" } ], "ref_spans": [], "eq_spans": [], "section": "Suitability for Simplification", "sec_num": "2.1.2" }, { "text": "Another line of research tries to determine the simplification transformations realized in available parallel data. Coster and Kauchak (2011b) used word alignments on C&K-1 and found rewordings (65%), deletions (47%), reorders (34%), merges (31%), and splits (27%). Amancio and Specia (2014) extracted 143 instances also from C&K-1, and manually annotated the simplification transformations performed: sentence splitting, paraphrasing (either single word or whole sentence), drop of information, sentence reordering, information insertion, and misalignment. They found that the most common operations were paraphrasing (39.8%) and drop of information (26.76%). Xu, Callison-Burch, and Napoles (2015) categorized the real simplifications they encountered in PWKP according to the simplification performed, and found: deletion only (21%), paraphrase only (17%), and deletion+paraphrase (12%). These results show a tendency toward lexical simplification and compression operations. 
Also, Xu, Callison-Burch, and Napoles (2015) state that the simplifications found are not ideal, because many of them are minimal: Just a few words are simplified (replaced or dropped) and the rest is left unchanged.", "cite_spans": [ { "start": 116, "end": 142, "text": "Coster and Kauchak (2011b)", "ref_id": "BIBREF28" }, { "start": 266, "end": 291, "text": "Amancio and Specia (2014)", "ref_id": "BIBREF5" }, { "start": 661, "end": 699, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" }, { "start": 985, "end": 1023, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" } ], "ref_spans": [], "eq_spans": [], "section": "Suitability for Simplification", "sec_num": "2.1.2" }, { "text": "These studies evidence problems with instances in corpora extracted from EW and SEW alignments. Noisy data in the form of misalignments as well as lack of variety of simplification transformations can lead to suboptimal SS models that learn to simplify from these corpora. However, their scale and public availability are strong assets and simplification models have been shown to learn to perform some simplifications (albeit still with mistakes) from this data. Therefore, this is still an important resource for research in SS. One promising direction is to devise ways to mitigate the effects of the noise in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitability for Simplification", "sec_num": "2.1.2" }, { "text": "In order to tackle some of the problems identified in EW and SEW alignments, Xu, Callison-Burch, and Napoles (2015) introduced the Newsela corpus. It contains 1,130 news articles with up to five simplified versions each: The original text is version 0 and the most simplified version is 5. The target audience considered was children with different education grade levels. These simplifications were produced manually by professional editors, which is an improvement over SEW where volunteers performed the task. 
A manual analysis of 50 random automatically aligned sentence pairs (reproduced in Figure 1 ) shows a better presence and distribution of simplification transformations in the Newsela corpus.", "cite_spans": [ { "start": 77, "end": 115, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" } ], "ref_spans": [ { "start": 596, "end": 604, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Newsela Corpus", "sec_num": "2.2" }, { "text": "The statistics of Figure 1 show that there is still a preference toward compression and lexical substitution transformations, rather than more complex syntactic alterations. However, splitting starts to appear in early simplification versions. In addition, just like with EW and SEW, there are sentences that are not simpler than their counterparts in the previous version. This is likely to be because they did not need any further", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Newsela Corpus", "sec_num": "2.2" }, { "text": "Manual categorization of simplification transformations in sample sentences from two simplified versions in the Newsela corpus. Simp-N means sentences from the original article (version 0) automatically aligned with sentences in version-N of the same article. Extracted from Xu, Callison-Burch, and Napoles (2015) . simplifications to comply with the readability requirements of the grade level of the current version. Xu, Callison-Burch, and Napoles (2015) also presented an analysis of the most frequent syntax patterns in original and simplified texts for PWKP and Newsela. These patterns correspond to parent node (head node) \u2192 children node(s) structures. Overall, the Wikipedia corpus has a higher tendency to retain complex patterns in its simple counterpart than Newsela. Finally, the authors present a study on discourse connectives that are important for readability according to Siddharthan (2003) . 
They report that simple cue words are more likely to appear in Newsela's simplifications, and that complex connectives are more likely to be retained in Wikipedia's. This could enable research on how discourse features influence simplification.", "cite_spans": [ { "start": 275, "end": 313, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" }, { "start": 419, "end": 457, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" }, { "start": 890, "end": 908, "text": "Siddharthan (2003)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "2.2.1 Simplification Instances. Newsela is a corpus that can be obtained for free for research purposes, 4 but it cannot be redistributed. As such, it is not possible to produce and release sentence alignments for the research community in SS. This is certainly a disadvantage, because it is difficult to compare SS models developed using this corpus without a common split of the data and the same document, paragraph, and sentence alignments. Xu, Callison-Burch, and Napoles (2015) align sentences between consecutive versions of articles in the corpus using Jaccard similarity (Jaccard 1912) based on overlapping word lemmas. Alignments with the highest similarity become simplification instances.", "cite_spans": [ { "start": 445, "end": 483, "text": "Xu, Callison-Burch, and Napoles (2015)", "ref_id": "BIBREF132" }, { "start": 580, "end": 593, "text": "(Jaccard 1912", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "\u0160tajner et al. (2017) explore three similarity metrics and two alignment methods to produce paragraph and sentence alignments in Newsela. The first similarity metric uses a character 3-gram model (Mcnamee and Mayfield 2004) with cosine similarity.
The second metric averages the word embeddings (trained on EW) of the text snippet and then uses cosine similarity. The third metric computes the cosine similarity between all pairs of word embeddings in the two text snippets (instead of their averages). Regarding the alignment methods, the first one uses any of the previous metrics to compute the similarity between all possible sentence pairs in a text and chooses the pair with the highest similarity as the alignment. The second method uses the previous strategy first, but instead of choosing the pair with the highest similarity, assumes that the order of sentences of the original text is preserved in its simplified version, and thus chooses the sequence of sentence alignments that best supports this assumption. The produced instances were evaluated based on human judgments for 10 original texts with three of their corresponding simplified versions. Their best method measures similarity between text snippets with the character 3-gram model and aligns using the first strategy. Even though the alignments are not publicly available, the algorithms and metrics to produce them can be found in the CATS software (\u0160tajner et al. 2018). 5 The vicinity-driven algorithms of Paetzold and Specia (2016) are used in Alva-Manchego et al. (2017) to generate paragraph and sentence alignments between consecutive versions of articles in the Newsela corpus. Given two documents/paragraphs, their method first creates a similarity matrix between all paragraphs/sentences using tf-idf cosine similarity. Then, it selects a coordinate in the matrix that is closest to the beginning [0,0] and that corresponds to a pair of text snippets with a similarity score above a certain threshold. From this point on, it iteratively searches for good alignments in a hierarchy of vicinities: V1 (1-1, 1-N, N-1 alignments), V2 (skipping one snippet), and V3 (long-distance skips). They first align paragraphs and then sentences within each paragraph.
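The first alignment strategy, paired with a character 3-gram similarity metric, can be sketched as follows. This is a minimal illustration of similarity-driven sentence alignment, not the CATS implementation; the function names, the example sentences, and the 0.5 threshold are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Character n-gram counts of a text snippet (3-grams by default)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm = sqrt(sum(c * c for c in a.values())) * sqrt(sum(c * c for c in b.values()))
    return dot / norm if norm else 0.0

def align(original_sents, simple_sents, threshold=0.5):
    """First alignment strategy: pair each simple sentence with its most
    similar original sentence, keeping pairs above a similarity threshold."""
    alignments = []
    for simp in simple_sents:
        scored = [(cosine(char_ngrams(orig), char_ngrams(simp)), orig)
                  for orig in original_sents]
        score, best = max(scored)
        if score >= threshold:
            alignments.append((best, simp, score))
    return alignments
```

The second strategy described above would additionally constrain the chosen pairs to follow the sentence order of the original text, rather than taking each best pair independently.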
The extracted sentence alignments correspond to 1-to-1, 1-to-N, and N-to-1 instances. The alignment algorithms are publicly available as part of the MASSAlign toolkit (Paetzold, Alva-Manchego, and Specia 2017) . 6 Because articles in the Newsela corpus have different simplified versions that correspond to different grade levels, models using paragraph or sentence alignments between consecutive versions (e.g., 0-1, 1-2, 2-3) may learn different text transformations than those using non-consecutive versions (e.g., 0-2, 1-3, 2-4). This is important to keep in mind when learning from automatic alignments of this corpus.", "cite_spans": [ { "start": 196, "end": 223, "text": "(Mcnamee and Mayfield 2004)", "ref_id": "BIBREF72" }, { "start": 1419, "end": 1420, "text": "5", "ref_id": null }, { "start": 1455, "end": 1481, "text": "Paetzold and Specia (2016)", "ref_id": "BIBREF85" }, { "start": 1494, "end": 1521, "text": "Alva-Manchego et al. (2017)", "ref_id": "BIBREF3" }, { "start": 2377, "end": 2419, "text": "(Paetzold, Alva-Manchego, and Specia 2017)", "ref_id": "BIBREF84" }, { "start": 2422, "end": 2423, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Research. Scarton, Paetzold, and Specia (2018b) studied automatically aligned sentences from the Newsela corpus in order to determine its suitability for SS. They first analyzed the corpus in terms of readability and psycholinguistic metrics, determining that each version of an article is indeed simpler than the previous one. They then used the sentences to train models for four tasks: complex vs. simple classification, complexity prediction, lexical simplification, and sentence simplification. The data set proved useful for the first three tasks, and helped achieve the highest reported performance for a state-of-the-art lexical simplifier. 
Results for the last task were inconclusive, indicating that more in-depth studies need to be performed, and that research intending to use Newsela for SS needs to be mindful about the types of sentence alignments to use for training models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suitability for Simplification", "sec_num": "2.2.2" }, { "text": "In this section, we describe some additional resources that are used for SS in English with very specific reasons: tuning and testing of models in general purpose (Turk-Corpus) and domain-specific (SimPA) data, evaluation of sentence splitting (HSplit), readability assessment (OneStopEnglish), training and testing of split-and-rephrase (WEBSPLIT and WikiSplit), and learning paraphrases (PPDB and SPPDB).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.1 TurkCorpus. Just like with other text rewriting tasks, there is no single correct simplification possible for a given original sentence. As such, Xu et al. (2016) asked workers on Amazon Mechanical Turk to simplify 2,350 sentences extracted from the PWKP corpus to collect eight references for each one. This corpus was then randomly split into two sets: one with 2,000 instances intended to be used for system tuning, and one with 350 instances for measuring the performance of SS models using metrics that rely on multiple references (see SARI in Sec. 3.2.3). However, the instances chosen from PWKP are those that focus on paraphrasing (1-to-1 alignments with almost similar lengths), thus limiting the range of simplification operations that SS models can be evaluated on using this multi-reference corpus. This corpus is the most commonly used to evaluate and compare SS systems trained on English Wikipedia data.", "cite_spans": [ { "start": 152, "end": 168, "text": "Xu et al. 
(2016)", "ref_id": "BIBREF133" } ], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.2 HSplit. Sulem, Abend, and Rappoport (2018a) created a multi-reference corpus specifically for assessing sentence splitting. They took the sentences from the test set of TurkCorpus, and manually simplified them in two settings: (1) split the original sentence as much as possible, and (2) split only when it simplifies the original sentence. Two annotators carried out the task in both settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.3 SimPA. Scarton, Paetzold, and Specia (2018a) introduce a corpus that differs from those previously described in two aspects: (1) it contains sentences from the Public Administration domain instead of the more general (Wikipedia) and news (Newsela) \"domains\", and (2) lexical and syntactic simplifications were performed independently. The former could be useful for validation and/or evaluation of SS models in a different domain, whereas the latter allows analyzing the performance of SS models on the two subtasks in isolation. The current version of the corpus contains 1,100 original sentences, each with three references of lexical simplifications only, and one reference of syntactic simplification. This syntactic simplification was performed starting from a randomly selected lexical simplification reference for each original sentence.", "cite_spans": [ { "start": 13, "end": 50, "text": "Scarton, Paetzold, and Specia (2018a)", "ref_id": "BIBREF98" } ], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.4 OneStopEnglish. Vajjala and Lu\u010di\u0107 (2018) compiled a parallel corpus of 189 news articles that were rewritten by teachers for three levels of adult English as a Second Language learners: elementary, intermediate, and advanced.
In addition, they used cosine similarity to automatically align sentences between articles across all the levels, resulting in 1,674 instances for ELE-INT, 2,166 for ELE-ADV, and 3,154 for INT-ADV. The initial motivation for creating this corpus was to aid in automatic readability assessment at document and sentence levels. However, OneStopEnglish could also be used for testing the generalization capabilities of models trained on bigger corpora with different target audiences.", "cite_spans": [ { "start": 22, "end": 46, "text": "Vajjala and Lu\u010di\u0107 (2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.5 WebSplit. Narayan et al. (2017) introduced split-and-rephrase, and created a data set for training and testing of models attempting this task. Extracting information from the WEBNLG data set (Gardent et al. 2017), they collected WEBSPLIT. Each entry in the data set contains: (1) a meaning representation (MR) of an original sentence, which is a set of Resource Description Framework (RDF) triplets (subject-property-object); (2) the original sentence to which the meaning representation corresponds; and (3) several MR-sentence pairs that represent valid splits (\"simple\" sentences) of the original sentence. After its first release, Aharoni and Goldberg (2018) found that around 90% of unique \"simple\" sentences in the development and test sets also appeared in the training set. This resulted in trained models performing well because of memorization rather than learning to split properly. Therefore, Aharoni and Goldberg proposed a new split of the data ensuring that (1) every RDF relation is represented in the training set, and that (2) every RDF triplet appears in only one of the data splits. Later, Narayan et al. released an updated version of their original data set, with more data and following constraint (2).", "cite_spans": [ { "start": 194, "end": 215, "text": "(Gardent et al.
2017)", "ref_id": "BIBREF41" }, { "start": 638, "end": 665, "text": "Aharoni and Goldberg (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.6 WikiSplit. Botha et al. (2018) created a corpus for the split-and-rephrase task based on English Wikipedia edit histories. In the data set, each original sentence is only aligned with two simpler ones. A simple heuristic was used for the alignment: the trigram prefix and trigram suffix of the original sentence should match, respectively, the trigram prefix of the first simple sentence and the trigram suffix of the second simple sentence. The two simple sentences should not have the same trigram suffix either. The BLEU score between the aligned pairs was also used to filter out misalignments according to an empirical threshold. The final corpus contains one million instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.7 Paraphrase Database. Ganitkevitch, Van Durme, and Callison-Burch (2013) released the Paraphrase Database (PPDB), which contains 220 million paraphrases in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "These paraphrases are lexical (one token), phrasal (multiple tokens), and syntactic (tokens and non-terminals). To extract the paraphrases, they used bilingual corpora with the following intuition: \"two strings that translate to the same foreign string can be assumed to have the same meaning.\" The authors utilized the synchronous context-free grammar formalism to collect paraphrases. Using MT technology, they extracted grammar rules from foreign-to-English corpora. Then, the paraphrase is created from rule pairs where the left-hand side and foreign string match.
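The bilingual pivoting intuition behind PPDB can be illustrated with a toy example. The phrase table below is hypothetical, and the sketch omits the synchronous-grammar machinery and paraphrase scoring that PPDB actually uses; it only shows the core idea that English phrases sharing a foreign translation become paraphrase candidates.

```python
from collections import defaultdict

def pivot_paraphrases(phrase_table):
    """Toy bilingual pivoting: group English phrases by their shared foreign
    translation, and emit every pair within a group as a paraphrase candidate."""
    by_foreign = defaultdict(set)
    for foreign, english in phrase_table:
        by_foreign[foreign].add(english)
    paraphrases = set()
    for english_set in by_foreign.values():
        for e1 in english_set:
            for e2 in english_set:
                if e1 != e2:
                    paraphrases.add((e1, e2))
    return paraphrases

# Hypothetical foreign-to-English phrase pairs:
table = [("es:logro", "achievement"),
         ("es:logro", "accomplishment"),
         ("es:gato", "cat")]
```

In the real resource, each such candidate is additionally scored (e.g., by the monolingual distributional similarity mentioned below), so that unreliable pairs can be filtered out.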
Each paraphrase in PPDB has a similarity score, which was calculated using monolingual distributional similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "2.3.8 Simple Paraphrase Database. Pavlick and Callison-Burch (2016) created the Simple PPDB, a subset of the PPDB tailored for SS. They used machine learning models to select paraphrases that generate a simplification and preserve its meaning. First, they selected 1,000 words from PPDB which also appear in the Newsela corpus. They then selected up to 10 paraphrases for each word. After that, they crowd-sourced the manual evaluation of these paraphrases in two stages: (1) rate their meaning preservation on a scale of 1 to 5, and (2) label the ones with ratings higher than 2 as simpler or not. Next, these data were used to train a multi-class logistic regression model to predict whether a paraphrase would produce simpler, more complex, or nonsensical output. Finally, they applied this model to PPDB and extracted 4.5 million simplifying paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Resources for English", "sec_num": "2.3" }, { "text": "The most popular (and generally larger) resources available for simplification are in English. However, some resources have been built for other languages:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "Basque. Gonzalez-Dios et al. (2014) collected 200 science and technology articles from a science and technology magazine (complex corpus) and a Web site for children (simple corpus).
They used these corpora to analyze complexity, but the articles in the data set are not parallel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022 Brazilian Portuguese. Caseli et al. (2009) compiled 104 newspaper articles (complex corpus), and a linguist simplified each of them following a simplification manual (Specia, Alu\u00edsio, and Pardo 2008) and an annotation editor that registers the simplification transformations performed. The corpus contains 2,116 instances.", "cite_spans": [ { "start": 24, "end": 44, "text": "Caseli et al. (2009)", "ref_id": "BIBREF23" }, { "start": 168, "end": 201, "text": "(Specia, Alu\u00edsio, and Pardo 2008)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022 Danish. Klerke and S\u00f8gaard (2012) introduced DSim, a parallel corpus of news telegrams and their simplifications produced by trained journalists. The corpus contains 3,701 articles, out of which a total of 48,186 automatically aligned sentence pairs were selected.", "cite_spans": [ { "start": 10, "end": 35, "text": "Klerke and S\u00f8gaard (2012)", "ref_id": "BIBREF64" } ], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022 German. Klaper, Ebling, and Volk (2013) crawled articles from different Web sites to collect a corpus of around 7K sentences, of which close to 78% have automatic alignments.", "cite_spans": [ { "start": 10, "end": 41, "text": "Klaper, Ebling, and Volk (2013)", "ref_id": "BIBREF61" } ], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022 Italian. Brunato et al. (2015) collected and manually aligned two corpora. 
One contains 32 short novels for children and their manually simplified versions, and the other is composed of 24 texts produced and simplified by teachers. They also manually annotated the simplification transformations performed. Tonelli, Aprosio, and Saltori (2016) introduced SIMPITIKI, 7 extracting aligned original-simplified sentences from revision histories of the Italian Wikipedia, and annotating them using the same scheme as Brunato et al. (2015). The corpus described in their paper contains 345 instances with 575 annotations of simplification transformations. As part of SIMPITIKI, the authors also created a corpus in the Public Administration domain by simplifying documents from the Trento Municipality, with 591 annotations.", "cite_spans": [ { "start": 11, "end": 32, "text": "Brunato et al. (2015)", "ref_id": "BIBREF19" }, { "start": 309, "end": 345, "text": "Tonelli, Aprosio, and Saltori (2016)", "ref_id": "BIBREF118" }, { "start": 514, "end": 535, "text": "Brunato et al. (2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022 Japanese. Goto, Tanaka, and Kumano (2015) released a corpus of news articles and their simplifications, produced by teachers of Japanese as a foreign language. Their data set consists of 10,651 instances for training (automatic alignments), 723 instances for development (manual alignments), and 2,012 instances for testing (manual alignments).", "cite_spans": [ { "start": 12, "end": 43, "text": "Goto, Tanaka, and Kumano (2015)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "\u2022 Spanish. Bott and Saggion (2011a) describe a corpus of 200 news articles and their simplifications, produced by trained experts and targeted at people with learning disabilities.
They produced automatic sentence alignments (Bott and Saggion 2011b) and manually annotated the simplification transformations performed in only a subset of the data set. Newsela also provides a simplification corpus in Spanish, which has been used in \u0160tajner et al. (2017, 2018).", "cite_spans": [ { "start": 225, "end": 249, "text": "(Bott and Saggion 2011b)", "ref_id": "BIBREF16" }, { "start": 446, "end": 452, "text": "(2017,", "ref_id": null }, { "start": 453, "end": 458, "text": "2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Resources for Other Languages", "sec_num": "2.4" }, { "text": "The main goal in SS is to improve the readability and understandability of the original sentence. Independently of the technique used to simplify a sentence, the evaluation methods we use should allow us to determine how good the simplification output is for that end goal. In this section we explain how the outputs of automatic SS models are typically evaluated, based on human ratings and/or using automatic metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Simplification Models", "sec_num": "3." }, { "text": "Arguably, the most reliable method to determine the quality of a simplification consists of asking human judges to rate it. It is common practice to evaluate a model's output on three criteria: grammaticality, meaning preservation, and simplicity (\u0160tajner et al. 2016). For grammaticality (sometimes referred to as fluency), evaluators are presented with a sentence and are asked to rate it using a Likert scale of 1-3 or 1-5 (most common). The lowest score indicates that the sentence is completely ungrammatical, whereas the highest score means that it is completely grammatical.
Native or highly proficient speakers of the language are ideal judges for this criterion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Assessment", "sec_num": "3.1" }, { "text": "For meaning preservation (sometimes referred to as adequacy), evaluators are presented with a pair of sentences (the original and the simplification), and are asked to rate (also using a Likert scale) the similarity of the meaning of the sentences. A low score denotes that the meaning is not preserved, while a high score suggests that the sentence pair shares the same meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Assessment", "sec_num": "3.1" }, { "text": "For simplicity, evaluators are presented with an original-simplified sentence pair and are asked to rate how much simpler (or easier to understand) the simplified version is when compared with the original version, also using a Likert scale. Xu et al. (2016) differs from this standard, asking judges to evaluate simplicity gain, which means counting the correct lexical and syntactic paraphrases performed. Sulem, Abend, and Rappoport (2018b) introduce the notion of structural simplicity, which ignores lexical simplifications and focuses on structural transformations with the question: Is the output simpler than the input, ignoring the complexity of the words?", "cite_spans": [ { "start": 242, "end": 258, "text": "Xu et al. (2016)", "ref_id": "BIBREF133" }, { "start": 408, "end": 443, "text": "Sulem, Abend, and Rappoport (2018b)", "ref_id": "BIBREF109" } ], "ref_spans": [], "eq_spans": [], "section": "Human Assessment", "sec_num": "3.1" }, { "text": "Even though human evaluation is the preferred method for assessing the quality of simplifications, it is costly and may require expert annotators (linguists) or end-users from a specific target audience (e.g., children with dyslexia).
Therefore, researchers turn to automatic measures as a means of obtaining faster and cheaper results. Some of these metrics are based on comparing the automatic simplifications to manually produced references; others compute the readability of the text based on psycholinguistic metrics; and others are trained on specially annotated data so as to learn to predict the quality or usefulness of the simplification being evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Metrics", "sec_num": "3.2" }, { "text": "Metrics. These metrics are mostly borrowed from the MT literature, since SS can be seen as translating a text from complex to simple. The most commonly used are BLEU and TER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "BLEU (BiLingual Evaluation Understudy), proposed by Papineni et al. (2002), is a precision-oriented metric, which means that it depends on the number of n-grams in the candidate translation that match with n-grams of the reference, independent of position. BLEU values range from 0 to 1 (or to 100); the higher the better.", "cite_spans": [ { "start": 52, "end": 74, "text": "Papineni et al. (2002)", "ref_id": "BIBREF86" } ], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "BLEU calculates a modified n-gram precision: (i) count the maximum number of times that an n-gram occurs in any of the references, (ii) clip the total count of each candidate n-gram by its maximum reference count (i.e., Count_clip = min(Count, MaxRefCount)), and (iii) add these clipped counts up, and divide by the total (unclipped) number of candidate words. Short sentences (compared with the lengths of the references) could inflate this modified precision.
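Steps (i)-(iii) can be sketched as follows for a single n-gram order. This is a simplified illustration of clipped n-gram precision only; it omits BLEU's brevity penalty and the geometric averaging over n-gram orders, and whitespace tokenization is an assumption.

```python
from collections import Counter

def modified_precision(candidate, references, n=2):
    """Clipped n-gram precision: each candidate n-gram count is clipped by the
    maximum count of that n-gram in any single reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.split())
    # (i) maximum count of each n-gram over all references
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref.split()).items():
            max_ref[gram] = max(max_ref[gram], count)
    # (ii) clip candidate counts, (iii) sum and normalize
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0
```

The clipping is what prevents a degenerate candidate such as "the the the" from obtaining perfect unigram precision against a reference containing "the" only once.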
As such, BLEU uses a Brevity Penalty (BP) factor, calculated as in Equation 1, where c is the length of the candidate translation, r is the reference corpus length, and r/c is used in a decaying exponential (in this case, c is the total length of the candidate translation corpus).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "BP = 1 if c > r; BP = e^(1\u2212r/c) if c \u2264 r (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "The final BLEU score is computed as in Equation 2. Traditionally, N = 4 and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_n = 1/N. BLEU = BP \u2022 exp(\u2211_{n=1}^{N} w_n log(p_n))", "eq_num": "(2)" } ], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "In simplification research, several studies (Wubben, van den Bosch, and Krahmer 2012; \u0160tajner, Mitkov, and Saggion 2014; Xu et al. 2016) show that BLEU has high correlation with human assessments of grammaticality and meaning preservation, but not simplicity. Also, Sulem, Abend, and Rappoport (2018a) show that this correlation is low or non-existent when sentence splitting has been performed. As such, BLEU should not be used as the only metric for evaluation and comparison of SS models. In addition, because of its definition, this metric is more useful with simplification corpora that provide multiple references for each original sentence.", "cite_spans": [ { "start": 44, "end": 85, "text": "(Wubben, van den Bosch, and Krahmer 2012;", "ref_id": "BIBREF131" }, { "start": 86, "end": 120, "text": "\u0160tajner, Mitkov, and Saggion 2014;", "ref_id": null }, { "start": 121, "end": 136, "text": "Xu et al.
2016)", "ref_id": "BIBREF133" }, { "start": 266, "end": 301, "text": "Sulem, Abend, and Rappoport (2018a)", "ref_id": "BIBREF109" } ], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "TER (Translation Edit Rate), designed by Snover et al. (2006), measures the minimum number of edits necessary to change a candidate translation so that it matches one of the references perfectly, normalized by the average length of the references. Only the reference that is closest (according to TER) is considered for the final score. The edits to be considered are insertions, deletions, substitutions of single words, and shifts (positional changes) of word sequences. TER is an edit-distance metric (Equation 3), with values ranging from 0 to 100; lower values are better.", "cite_spans": [ { "start": 41, "end": 61, "text": "Snover et al. (2006)", "ref_id": "BIBREF105" } ], "ref_spans": [], "eq_spans": [], "section": "String Similarity", "sec_num": "3.2.1" }, { "text": "# of edits / average # of reference words", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TER =", "sec_num": null }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TER =", "sec_num": null }, { "text": "In order to calculate the number of shifts, TER follows a two-step process: (i) use dynamic programming to count insertions, deletions, and substitutions, and a greedy search to find the set of shifts that minimizes the number of insertions, deletions, and substitutions; then (ii) calculate the optimal remaining edit distance using minimum edit distance and dynamic programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TER =", "sec_num": null }, { "text": "For simplification research, TER's intermediate calculations (i.e., the edit counts) have been used to show the simplification operations that an SS model is able to perform (Zhang and Lapata 2017).
However, this is not a general practice and no studies have been conducted to verify that the edits correlate with simplification transformations. Scarton, Paetzold, and Specia (2018b) use TER to study the differences across simplification versions in articles of the Newsela corpus.", "cite_spans": [ { "start": 175, "end": 198, "text": "(Zhang and Lapata 2017)", "ref_id": "BIBREF136" } ], "ref_spans": [], "eq_spans": [], "section": "TER =", "sec_num": null }, { "text": "iBLEU is a variant of BLEU introduced by Sun and Zhou (2012) as a way to measure the quality of a candidate paraphrase. The metric balances the semantic similarity between the candidate and the reference against the dissimilarity between the candidate and the source. Given a candidate paraphrase c, human references r_s, and input text s, iBLEU is computed as in Equation 4, with values ranging from 0 to 1 (or to 100); higher values are better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TER =", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "iBLEU(s, r_s, c) = \u03b1 \u00d7 BLEU(c, r_s) \u2212 (1 \u2212 \u03b1) \u00d7 BLEU(c, s)", "eq_num": "(4)" } ], "section": "TER =", "sec_num": null }, { "text": "After empirical evaluations, the authors recommend using a value of \u03b1 between 0.7 and 0.9. For example, Mallinson, Sennrich, and Lapata (2017) experiment with a value of 0.8, while Xu et al. (2016) set it to 0.9. (FRE, Flesch 1948) is a metric that attempts to measure how easy a text is to understand. It is based on average sentence length and average word length. Longer sentences could imply the use of more complex syntactic structures (e.g., subordinated clauses), which makes reading harder. The same analogy applies to words: Longer words contain prefixes and suffixes that present more difficulty to the reader.
This metric (Equation 5) gives a score between 0 and 100, with lower values indicating a higher level of difficulty. FKBLEU (Xu et al. 2016) combines iBLEU and FKGL to ensure grammaticality and simplicity in the generated text. Given an output simplification O, a reference R, and an input original sentence I, FKBLEU is calculated according to Equation 7; higher values mean better simplifications.", "cite_spans": [ { "start": 213, "end": 230, "text": "(FRE, Flesch 1948", "ref_id": null }, { "start": 747, "end": 762, "text": "(Xu et al. 2016", "ref_id": "BIBREF133" } ], "ref_spans": [], "eq_spans": [], "section": "TER =", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "FKBLEU = iBLEU(I, R, O) \u00d7 FKGLdiff(I, O) FKGLdiff = sigmoid(FKGL(O) \u2212 FKGL(I))", "eq_num": "(7)" } ], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "Because of the way these Flesch-based metrics are computed, short sentences could obtain good scores, even if they are ungrammatical or non-meaning preserving. As such, their values could be used to measure superficial simplicity, but not as an overall evaluation or for comparison of SS models (Wubben, van den Bosch, and Krahmer 2012). Many other metrics could be used for more advanced readability assessment (McNamara et al. 2014); however, these are not commonly used in simplification research.", "cite_spans": [ { "start": 295, "end": 336, "text": "(Wubben, van den Bosch, and Krahmer 2012)", "ref_id": "BIBREF131" }, { "start": 413, "end": 435, "text": "(McNamara et al. 2014)", "ref_id": "BIBREF71" } ], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics.
Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "p_keep(n) = \u2211_{g \u2208 I} min(#_g(I \u2229 O), #_g(I \u2229 R')) / \u2211_{g \u2208 I} #_g(I \u2229 O), where #_g(I \u2229 O) = min(#_g(I), #_g(O)); r_keep(n) = \u2211_{g \u2208 I} min(#_g(I \u2229 O), #_g(I \u2229 R')) / \u2211_{g \u2208 I} #_g(I \u2229 R'), where #_g(I \u2229 R') = min(#_g(I), #_g(R)/r); p_del(n) = \u2211_{g \u2208 I} min(#_g(I \u2229 \u014c), #_g(I \u2229 R')) / \u2211_{g \u2208 I} #_g(I \u2229 \u014c), where #_g(I \u2229 \u014c) = max(#_g(I) \u2212 #_g(O), 0) and #_g(I \u2229 R') = max(#_g(I) \u2212 #_g(R)/r, 0)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "For keep and delete, R' marks n-gram counts over R weighted by fractions. For example, if a unigram occurs in 2 of the total r references, then its count is weighted by 2/r when computing precision and recall. Recall is not calculated for deletions to avoid rewarding over-deleting. Finally, SARI is calculated as shown in Equation 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "SARI = d_1 F_add + d_2 F_keep + d_3 F_del", "eq_num": "(8)" } ], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "d_1 = d_2 = d_3 = 1/3 and P_operation = (1/k) \u2211_{n=[1,..,k]} p_operation(n), R_operation = (1/k) \u2211_{n=[1,..,k]}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "r_operation(n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics.
Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "F_operation = (2 \u00d7 P_operation \u00d7 R_operation) / (P_operation + R_operation), operation \u2208 [del, keep, add]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "An advantage of SARI is considering both the input original sentence and the references in its calculation. This is different from BLEU, which only considers the similarity of the output to the references. Although iBLEU also uses both input and references, it compares the output against them independently, combining these scores in a way that rewards outputs that are similar to the references, but not so similar to the input. In contrast, SARI compares the output against the input sentence and references simultaneously, and rewards outputs that modify the input in ways that are expressed by the references. In addition, not all n-gram matches are considered equal: The more references \"agree\" with keeping/deleting a certain n-gram, the higher the importance of the match in the score computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "One disadvantage of SARI is the limited number of simplification transformations taken into account, restricting the evaluation to only 1-to-1 paraphrased sentences. As such, it needs to be used in conjunction with other metrics or evaluation procedures when measuring the performance of an SS model. Also, if only one reference exists that is identical to the original sentence, and the model's output does not change the original sentence, SARI would over-penalize it and give a low score. Therefore, SARI requires multiple references that are different from the original sentence to be reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics.
Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "SAMSA (Simplification Automatic evaluation Measure through Semantic Annotation) was introduced by Sulem, Abend, and Rappoport (2018b) to tackle some of the shortcomings of reference-based simplicity metrics (i.e., SARI). The authors show that SARI has low correlation with human judgments when the simplification of a sentence involves structural changes, specifically sentence splitting. The new metric, on the other hand, correlates with meaning preservation and structural simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "Consequently, SARI and SAMSA should be used in conjunction to have a more complete evaluation of different simplification transformations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "To calculate SAMSA, the original (source) sentence is semantically parsed using the UCCA scheme (Abend and Rappoport 2013), either manually or by the automatic parser TUPA (Hershcovich, Abend, and Rappoport 2017). The resulting graph contains the Scenes in the sentence (e.g., actions), as well as their corresponding Participants. SAMSA's premise is that a correct splitting of an original sentence should create a separate simple sentence for each UCCA Scene and its Participants. To verify this, SAMSA uses the word alignment between the original and the simplified output to count how many Scenes and Participants hold the premise. This process does not require simplification references (unlike SARI), and because the semantic parsing is only performed in the original sentence, it prevents adding parser errors of (possibly) grammatically incorrect simplified sentences produced by the SS model being evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flesch-Based Metrics. 
Flesch Reading Ease", "sec_num": "3.2.2" }, { "text": "Metrics. If reference simplifications are not available, a possible approach is to evaluate the simplicity of the simplified output sentence by itself, or compare it to the one from the original sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction-Based", "sec_num": "3.2.4" }, { "text": "Most approaches in this line of research attempt to classify a given sentence into categories that define its simplicity by extracting several features from the sentence and training a classification model. For instance, Napoles and Dredze (2010) used lexical and morpho-syntactic features to predict if a sentence was more likely to be from Main or Simple English Wikipedia. Later on, inspired by work on Quality Estimation for MT, 8 Stajner, Mitkov, and Saggion (2014) proposed training classifiers to predict the quality of simplified sentences, with respect to grammaticality and meaning preservation. In this case, the features extracted correspond to values from metrics such as BLEU or (components of) TER. The authors proposed two tasks: (1) to classify, independently, the grammaticality and meaning preservation of sentences into three classes: bad, medium, and good; and (2) to classify the overall quality of the simplification using either a set of three classes (OK, needs post-editing, discard) or two classes (retain, discard). The same approach was the main task in the 1st Quality Assessment for Text Simplification Workshop (QATS;\u0160tajner et al. 2016), but automatic judgments of simplicity were also considered. There were promising results with respect to predicting grammaticality and meaning preservation, but not for simplicity or an overall quality evaluation metric. Afterwards, Martin et al. 
(2018) extended \u0160tajner, Mitkov, and Saggion (2014)'s work with features from \u0160tajner, Popovi\u0107, and B\u00e9chera (2016) to analyze how different feature groups correlate with human judgments on grammaticality, meaning preservation, and simplicity using data from QATS. Using Quality Estimation research for reference-less evaluation in simplification is still an area not sufficiently explored, mainly because it requires human annotations on example instances that can be used as training data, which can be expensive to collect.", "cite_spans": [ { "start": 433, "end": 470, "text": "8 Stajner, Mitkov, and Saggion (2014)", "ref_id": null }, { "start": 1404, "end": 1424, "text": "Martin et al. (2018)", "ref_id": "BIBREF69" }, { "start": 1503, "end": 1530, "text": "Popovi\u0107, and B\u00e9chera (2016)", "ref_id": "BIBREF116" } ], "ref_spans": [], "eq_spans": [], "section": "Prediction-Based", "sec_num": "3.2.4" }, { "text": "Another group of approaches is interested in ranking sentences according to their predicted reading levels. Vajjala and Meurers (2014a,b) showed that, in the PWKP (Zhu, Bernhard, and Gurevych 2010) data set and an earlier version of the OneStopEnglish (Vajjala and Lu\u010di\u0107 2018) corpus, even if all simplified sentences were simpler than their aligned original counterpart, some sentences in the \"simple\" section had a higher reading level than some in the \"original\" section. As such, attempting to use binary classification approaches to determine if a sentence is simple or not may not be the appropriate way to model the task. Consequently, Vajjala and Meurers (2015) proposed using pairwise ranking to assess the readability of simplified sentences. They used the same features as the document-level model of Vajjala and Meurers (2014a), but now attempt to learn to predict which of two given sentences is simpler than the other.
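The pairwise formulation can be illustrated with a toy example: represent each sentence by a small feature vector (here, hypothetical length-based features standing in for the richer lexical and syntactic features used in the literature), and train a classifier on feature differences so that it predicts which sentence of a pair is simpler. A minimal perceptron sketch, not the authors' actual feature set or learner:

```python
def features(sentence):
    """Toy readability features: number of words and mean word length."""
    words = sentence.split()
    return [len(words), sum(len(w) for w in words) / len(words)]

def train_pairwise(pairs, epochs=20, lr=0.1):
    """Perceptron on feature differences: each (simple, complex) pair yields
    one training vector diff = f(simple) - f(complex), which should score > 0."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for simple, complex_ in pairs:
            diff = [a - b for a, b in zip(features(simple), features(complex_))]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # misranked: nudge weights toward the difference
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def simpler(w, s1, s2):
    """Return the sentence the learned weights rank as simpler."""
    diff = [a - b for a, b in zip(features(s1), features(s2))]
    return s1 if sum(wi * di for wi, di in zip(w, diff)) > 0 else s2

pairs = [("The cat sat .", "The feline remained in a seated position upon the mat ."),
         ("He ate food .", "He consumed a considerable quantity of nourishment .")]
w = train_pairwise(pairs)
```

The key design choice is that the model never scores a sentence in isolation: it only learns a direction in feature space that separates the simpler member of each pair.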
Ambati, Reddy, and Steedman (2016) tested the usefulness of syntactic features extracted from an incremental parser for the task, and Howcroft and Demberg (2017) explored using more psycholinguistic features, such as idea density, surprisal, integration cost, and embedding depth.", "cite_spans": [ { "start": 108, "end": 138, "text": "Vajjala and Meurers, (2014a,b)", "ref_id": null }, { "start": 645, "end": 671, "text": "Vajjala and Meurers (2015)", "ref_id": "BIBREF121" }, { "start": 815, "end": 842, "text": "Vajjala and Meurers (2014a)", "ref_id": "BIBREF119" }, { "start": 942, "end": 976, "text": "Ambati, Reddy, and Steedman (2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Prediction-Based", "sec_num": "3.2.4" }, { "text": "Although not detailed in this section, some research has used METEOR (Denkowski and Lavie 2014) from the MT literature, and ROUGE (Lin 2004), borrowed from summarization research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction-Based", "sec_num": "3.2.4" }, { "text": "In this section we have described how the outputs of SS models are evaluated using both human judgments and automatic metrics. We have attempted to not only explain these methods, but also to point out their advantages and disadvantages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "In the case of human evaluation, one important but often overlooked aspect is that it should be carried out by individuals from the same target audience as the data on which the SS model was trained. This is especially relevant when collecting simplicity judgments because of its subjective nature: What a non-native proficient adult speaker considers \"simple\" may not hold for a native-speaking primary school student, for example. Even within the same target group, differences in simplicity needs and judgments may arise.
This is why some researchers have started to focus on developing and evaluating models for personalized simplification (Bingel, Paetzold, and S\u00f8gaard 2018; Bingel, Barrett, and Klerke 2018). In addition, we should think carefully about whether the quality of a simplified text is better judged as an intrinsic feature, or whether we should assess it depending on its usefulness for carrying out another task. Nowadays, quality judgments focus on assessing the automatic output for what it is: Is it grammatical? Does it still express the same idea? Is it easier to read? However, the goal of simplification is to modify a text so that a reader can understand it better. With that in mind, a more functional evaluation of the generated text could be more informative about the understandability of the output. An example of this type of assessment is presented in Mandya, Nomoto, and Siddharthan (2014), where human judges had to use the automatically simplified texts in a reading comprehension test with multiple-choice questions. Then, the accuracy of their responses is used to gauge the helpfulness of the simplified texts in the particular comprehension task. This type of human evaluation could be more goal-oriented, but it is costly to create and execute.", "cite_spans": [ { "start": 645, "end": 681, "text": "(Bingel, Paetzold, and S\u00f8gaard 2018;", "ref_id": "BIBREF13" }, { "start": 682, "end": 715, "text": "Bingel, Barrett, and Klerke 2018)", "ref_id": "BIBREF12" }, { "start": 1374, "end": 1412, "text": "Mandya, Nomoto, and Siddharthan (2014)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "Automatic metrics are useful for quickly assessing models and comparing different architectures. They could even be considered more objective than humans since personal biases do not play a role. However, the metrics used in SS research are flawed.
BLEU has been found to be reliable for assessment in MT but not in other Natural Language Generation tasks (Reiter 2018), and it is not adequate for most rewriting transformations in SS (Sulem, Abend, and Rappoport 2018a). SARI is only useful as a proxy for simplicity gain assessment, limited to lexical simplifications and short-distance reordering despite more text transformations being possible. Commonly used Flesch metrics were developed to assess complete documents and not sentences, which are the focus of most simplification research nowadays. Therefore, when evaluating models using these automatic scores, it is essential to keep all their particular limitations in mind, to consider all available metrics, and to interpret them accordingly. Overall, what is the most reliable way of automatically evaluating system outputs, at the right granularity and considering all characteristics of the task, is still an open question. We comment on some possible future directions in Section 6.", "cite_spans": [ { "start": 358, "end": 371, "text": "(Reiter 2018)", "ref_id": "BIBREF93" }, { "start": 438, "end": 473, "text": "(Sulem, Abend, and Rappoport 2018a)", "ref_id": "BIBREF109" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "In this section, we review research on SS aiming at learning simplifications from examples. More specifically, we focus on approaches that learn text transformations from parallel corpora of aligned original-simplified sentences in English. Compared with approaches based on hand-crafted rules, data-driven approaches can perform multiple simplification transformations simultaneously, as well as learn very specific and complex rewriting patterns. As a result, they make it possible to model interdependencies among different transformations more naturally.
Therefore, we do not include approaches to sentence simplification based on sets of hand-crafted rules, such as rules for splitting and reordering sentences (Candido Jr. et al. 2009; Siddharthan 2011; Bott, Saggion, and Mille 2012) , nor approaches that only learn lexical simplifications, that is, which target one-word replacements (see Paetzold and Specia [2017b] for a survey).", "cite_spans": [ { "start": 715, "end": 740, "text": "(Candido Jr. et al. 2009;", "ref_id": "BIBREF21" }, { "start": 741, "end": 758, "text": "Siddharthan 2011;", "ref_id": "BIBREF103" }, { "start": 759, "end": 789, "text": "Bott, Saggion, and Mille 2012)", "ref_id": "BIBREF17" }, { "start": 897, "end": 924, "text": "Paetzold and Specia [2017b]", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Data-Driven Approaches to Sentence Simplification", "sec_num": "4." }, { "text": "We classify data-driven approaches for SS as relying on statistical MT techniques (Section 4.1), induction of synchronous grammars (Section 4.2), semantics-assisted (Section 4.3), and neural sequence-to-sequence models (Section 4.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-Driven Approaches to Sentence Simplification", "sec_num": "4." }, { "text": "Several approaches treat SS as a monolingual MT task, with original and simplified as source and target languages, respectively. Whereas other translation methods exist, in this section we focus on Statistical Machine Translation (SMT). Given a sentence f in the source language, the goal of an SMT model is to produce a translation e in the target language. 
This is modeled using the noisy channel framework (Equation 9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e* = arg max_{e \u2208 E} p(e|f) = arg max_{e \u2208 E} p(f|e)p(e)", "eq_num": "(9)" } ], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "This framework relies on a translation model p(f|e) and a language model p(e). In addition, a decoder is in charge of producing the most probable e given an f. The language model is monolingual, and thus \"easier\" to estimate. There are different approaches for implementing the translation model and the decoder. In practice, they all rely on a linear combination of these and additional features, which are directly optimized to maximize translation quality, rather than on the generative noisy channel model. In what follows, we review the most popular approaches and explain their applications for SS. 4.1.1 Phrase-Based Approaches. The intuition behind Phrase-Based SMT (PBSMT) is to use phrases (sequences of words) as the fundamental unit of translation. Therefore, the translation model p(f|e) depends on the normalized count of the number of times each possible phrase-pair occurs. These counts are extracted from parallel corpora and automatic phrase alignments, which, in turn, are obtained from word alignments. Decoding is a search problem: Find the sentence that maximizes the translation and language model probabilities. It could be solved using a best-first search algorithm, like A*, but exploring the entire search space of possible translations is expensive.
Therefore, decoders use beam-search to only retain, at every step, the most promising states to continue the search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "Moses (Koehn et al. 2007 ) is a popular PBSMT system, freely available. 9 It provides tools for easy training, tuning, and testing of translation models based on this SMT approach. Specia (2010) was the first to use this toolkit, with no adaptations, for the simplification task. Experiments were carried out on a parallel corpus of original and manually simplified newspaper articles in Brazilian Portuguese (Caseli et al. 2009 ). The trained model mostly executes lexical simplifications and simple rewritings. However, as expected, it is overcautious and cannot perform long distance operations like subjectverb-object reordering or splitting.", "cite_spans": [ { "start": 6, "end": 24, "text": "(Koehn et al. 2007", "ref_id": "BIBREF65" }, { "start": 181, "end": 194, "text": "Specia (2010)", "ref_id": "BIBREF106" }, { "start": 409, "end": 428, "text": "(Caseli et al. 2009", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "Moses-Del: Coster and Kauchak (2011b) also used Moses as-is, and trained it on their C&K corpus, obtaining slightly better results when compared with not doing any simplification. In Coster and Kauchak (2011a) , the authors modified Moses to allow complex phrases to be aligned with NULL, thus implementing deletions during simplification. To accomplish this, they modify the word alignments before the phrase alignments are learned: (1) any complex unaligned word is now aligned with NULL, and (2) if several complex words in a set align with only one simple word, and one of the complex words is equal to the simple word, then the other complex words in the set are aligned with NULL. 
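These two alignment heuristics can be sketched as follows (a minimal illustration with assumed data structures, not the authors' implementation; an alignment is a mapping from complex-word positions to simple-word positions):

```python
from collections import defaultdict

def adjust_alignments(complex_words, simple_words, alignment):
    """alignment maps complex-word index -> simple-word index; None = NULL.
    Applies the two heuristics: (1) unaligned complex words go to NULL;
    (2) when several complex words share one simple word and one of them
    is identical to it, the others go to NULL."""
    adjusted = {ci: alignment.get(ci) for ci in range(len(complex_words))}
    by_simple = defaultdict(list)
    for ci, si in adjusted.items():
        if si is not None:
            by_simple[si].append(ci)
    for si, cis in by_simple.items():
        if len(cis) > 1 and any(complex_words[ci] == simple_words[si] for ci in cis):
            for ci in cis:
                if complex_words[ci] != simple_words[si]:
                    adjusted[ci] = None  # treated as a deletion during phrase extraction
    return adjusted

# "entirely" ends up aligned with NULL, modeling its deletion.
out = adjust_alignments(["entirely", "demolished"], ["demolished"], {0: 0, 1: 0})
```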
Their model achieves better results than a standard Moses implementation.", "cite_spans": [ { "start": 183, "end": 209, "text": "Coster and Kauchak (2011a)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "PBSMT-R: Wubben, van den Bosch, and Krahmer (2012) also used Moses but added a post-processing step. They ask the decoder to generate the 10 best simplifications (where possible), and then rank them according to their dissimilarity to the input sentence (measured by edit distance). The most dissimilar sentence is chosen as the final output, and ties are resolved using the decoder score. This trained model achieves a better BLEU score than more sophisticated approaches (Zhu, Bernhard, and Gurevych 2010; Woodsend and Lapata 2011a), explained in Section 4.2. When compared with such approaches using human evaluation, PBSMT-R is better in grammaticality and meaning preservation. However, the results are limited to paraphrasing transformations. Table 2 summarizes the performance of the models trained using the SS approaches described, as reported in the original papers. The BLEU values are not directly comparable, since each approach used a different corpus for testing. From a transformation capability point of view, PBSMT-based simplification models are able to perform substitutions, short distance reorderings, and deletions, but fail to learn more sophisticated operations (e.g., splitting) that may require more information on the structure of the sentences and relationships between their components. 4.1.2 Syntax-Based Approaches. In Syntax-Based SMT (SBSMT), the basic unit for translation is no longer a phrase, but syntactic components in parse trees. In PBSMT, the language model and the phrase alignments act as features that inform the model about how likely the generated translation is (simplification, in our case). 
In SBSMT we can extract more informed features, based on the structures of the parallel parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "TSM: Zhu, Bernhard, and Gurevych (2010) proposed a Tree-based Simplification Model that can perform four text transformations: splitting, dropping, reordering, and substitution (of words and phrases). Given an original sentence c, the model attempts to find a simplification s using Equation 10, with a language model P(s) and a direct translation model P(s|c).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s* = arg max_s P(s|c)P(s)", "eq_num": "(10)" } ], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "For estimating P(s|c), the method traverses the original sentence parse tree from top to bottom, extracting features from each node and for each of the four possible transformations. These features are transformation-specific and are stored in feature tables for each transformation. For each feature combination in each table, probabilities are calculated during training. We will use the splitting transformation to explain this process in more detail. A similar method is used for the other three transformations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "In TSM, sentence splitting is decomposed into two transformations: SEGMENTATION (if a sentence is to be split or not) and COMPLETION (to make the splits grammatical).
The probability of a SEGMENTATION transformation is calculated using Equation 11, where w is a word in the complex sentence c, and SFT(w|c) is the probability of w in the Segmentation Feature Table (SFT).", "cite_spans": [], "ref_spans": [ { "start": 360, "end": 371, "text": "Table (SFT)", "ref_id": null } ], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(seg|c) = \u220f_{w:c} SFT(w|c)", "eq_num": "(11)" } ], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "COMPLETION implies deciding whether the border word in the second split needs to be dropped, and which parts of the first split need to be copied into the second. The probability of this operation is calculated as in Equation 12, where s are the split sentences, bw is a border word in s, w is a word in s, dep is a dependency of w that is out of the scope of s, BDFT is the Border Drop Feature Table (BDFT), and CFT is the Copy Feature Table.", "cite_spans": [], "ref_spans": [ { "start": 438, "end": 444, "text": "Table.", "ref_id": null } ], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(com|seg) = \u220f_{bw:s} BDFT(bw|s) \u220f_{w:s} \u220f_{dep:w} CFT(dep)", "eq_num": "(12)" } ], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "Finally, once similar computations are done for the other three transformations, all probabilities are combined to calculate the translation model in Equation 13, where P(dp|node), P(ro|node), and P(sub|node) correspond to dropping, reordering, and substituting non-terminal nodes, and P(sub|w) is for substitutions of terminal nodes.
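The feature-table products that TSM relies on (e.g., Equation 11) reduce to multiplying per-item table lookups; a minimal sketch with a hypothetical Segmentation Feature Table, accumulated in log-space to avoid underflow on long sentences:

```python
import math

def table_product(items, table, smooth=1e-6):
    """Product of per-item probabilities from a feature table, computed in
    log-space; unseen items receive a small smoothing probability."""
    return math.exp(sum(math.log(table.get(it, smooth)) for it in items))

# Hypothetical Segmentation Feature Table: P(seg|c) = prod over words w of SFT(w|c).
sft = {"the": 0.1, "cat": 0.05, "ran": 0.05, "and": 0.7, "which": 0.8}
p_seg = table_product(["the", "cat", "ran"], sft)  # 0.1 * 0.05 * 0.05
```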
The model is trained using the Expectation-Maximization algorithm proposed by Yamada and Knight (2001). This algorithm requires constructing a training tree to calculate P(s|c), which corresponds to the probability of the root of the training tree. For decoding, the inside and outside probabilities are calculated for each node in the decoding tree, which is constructed in a similar fashion to the training tree. To simplify a new original sentence, the algorithm starts from the root and greedily selects the branch with the highest outside probability.", "cite_spans": [ { "start": 412, "end": 436, "text": "Yamada and Knight (2001)", "ref_id": "BIBREF134" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "The proposed approach used the PWKP corpus for training and testing, obtaining higher readability scores (Flesch) than other baselines considered (Moses and the sentence compression system of Filippova and Strube [2008] with variations). Overall, TSM showed good performance for word substitution and splitting.", "cite_spans": [ { "start": 192, "end": 219, "text": "Filippova and Strube [2008]", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "TriS: Bach et al. (2011) proposed a method focused on splitting an original sentence into several simpler ones. A sentence is considered simple if it is in the subject-verb-object order (SVO), with one subject, one verb, and one object. 
Given a sentence c that needs to be split into a set S of simple sentences, the objective is to select the set with the highest probability, as in Equation 14.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "\\hat{S}(c) = \\arg\\max_{\\forall S} P(S|c) (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "The language and translation models are combined using a log-linear model as in Equation 15, where f m (S, c) are feature functions on each sentence, and w m are model parameters to be learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "p(S|c) = \\frac{\\exp \\sum_{m=1}^{M} w_m f_m(S, c)}{\\sum_{S'} \\exp \\sum_{m=1}^{M} w_m f_m(S', c)} (15)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "For decoding, their method starts by listing all noun phrases and verbs of the original sentence and generating simple sentences combining the lists in SVO form. Then, it proceeds with a k-best stack decoding algorithm: It starts with a stack of 1-simple-sentence hypotheses (a hypothesis is a complete simplification of multiple simple sentences), and at each step it pops a hypothesis off the stack, expands it (to a 2-simple-sentence hypothesis in the first iteration, and so on), puts the new hypotheses in another stack, prunes them (according to some metric), and updates the original hypothesis stack. After all steps (whose number corresponds to the number of verbs in the sentence), it selects the k-best hypotheses in the stack. For training, they use the Margin Infused Relaxed Algorithm (Crammer and Singer 2003). 
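The log-linear scoring of Equation 15 can be sketched as follows. The candidate splits, feature functions, and weights below are invented for illustration; TriS learns the weights with MIRA and uses 177 feature functions.

```python
# Minimal sketch of TriS-style log-linear scoring (Eq. 15): each candidate
# split S of a sentence c is scored by exp(sum_m w_m * f_m(S, c)), normalized
# over all candidates. Features and weights here are invented toy examples.
import math

def log_linear_probs(candidates, features, weights):
    """Return p(S|c) for each candidate under a log-linear model."""
    scores = [math.exp(sum(w * f(S) for w, f in zip(weights, features)))
              for S in candidates]
    z = sum(scores)  # normalization over all candidate splits
    return [s / z for s in scores]

# Two hypothetical splits of the same sentence, as lists of simple sentences.
candidates = [
    ["Bricks were more resistant to cold.", "Bricks enabled permanent buildings."],
    ["Bricks were more resistant to cold and enabled permanent buildings."],
]
features = [
    lambda S: len(S),                                  # number of simple sentences
    lambda S: -sum(len(s.split()) for s in S) / 10.0,  # penalize total length
]
weights = [1.0, 1.0]

probs = log_linear_probs(candidates, features, weights)
best = candidates[max(range(len(probs)), key=probs.__getitem__)]
```

In the real system, this scoring is applied inside the k-best stack decoder to prune and rank hypotheses rather than enumerating all candidates up front.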
For modeling, 177 feature functions were designed to capture intra- and inter-sentential information (simple counts, distance in the parse tree, readability measures, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "To test their approach, a corpus of 854 sentences extracted from The New York Times and Wikipedia was created, with one manual simplification each. The authors evaluate on 100 unseen sentences and compare against the rule-based approach of Heilman and Smith (2010). Their model achieves better Flesch-Kincaid Grade Level and ROUGE scores.", "cite_spans": [ { "start": 240, "end": 264, "text": "Heilman and Smith (2010)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "SBSMT (PPDB + ): Xu et al. (2016) proposed optimizing a SBSMT framework with rule-based features and tuning metrics specifically tailored for lexical simplification. Two new \"light weight\" metrics are also introduced: FKBLEU and SARI, both described in Section 3.2. The proposed simplification model also relies on paraphrasing rules available in the PPDB, which are expressed as a Synchronous Context-Free Grammar (SCFG). The authors also added nine new features to each rule in the PPDB (each rule already contains 33). These new features are simplification-specific, for example: length in characters, length in words, number of syllables, among others.", "cite_spans": [ { "start": 25, "end": 41, "text": "Xu et al. (2016)", "ref_id": "BIBREF133" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "These modifications were implemented in the SBSMT toolkit Joshua (Post et al. 
2013), and experiments were performed using TurkCorpus (described in Section 2.3) on three versions of the SBSMT system, changing the tuning metric (BLEU, FKBLEU, and SARI). Evaluations using human judgments show that all three models achieved better grammaticality, meaning preservation, and simplicity gain than PBSMT-R (Wubben, van den Bosch, and Krahmer 2012). Table 3 summarizes the performance of the syntax-based models trained using the SS approaches described. These values are not directly comparable, because each approach used a different corpus for testing. In the case of the models based on Joshua and PPDB, not surprisingly, each achieves the highest score according to the metric for which it was optimized. However, SBSMT (PPDB + SARI) seems to be the overall best. From the point of view of transformation capabilities, TSM and TriS are capable of performing splitting, which is an advantage over SBSMT variations that only generate paraphrases.", "cite_spans": [ { "start": 65, "end": 83, "text": "(Post et al. 2013)", "ref_id": "BIBREF90" } ], "ref_spans": [ { "start": 436, "end": 443, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Monolingual Statistical Machine Translation", "sec_num": "4.1" }, { "text": "In this approach, SS is modeled as a tree-to-tree rewriting problem. Approaches typically follow a two-step process: (1) use parallel corpora of aligned original-simplified sentence pairs to extract a set of tree transformation rules, and then (2) learn how to select which rule(s) to apply to unseen sentences to generate the best simplified output. This is analogous to how a SBSMT approach works: The rules would be the features, and the decoder applies the learned model to decide how to use these rules. 
In what follows, we first provide some brief preliminary explanations on synchronous grammars, and then proceed to explain how grammar-induction-based approaches have implemented each of the aforementioned steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Induction", "sec_num": "4.2" }, { "text": "A Context-Free Grammar (CFG) is a set of productions or rewriting rules that describe how to generate strings in a formal language. Synchronous Context-Free Grammars (SCFGs) are a generalization of CFGs able to generate pairs of related strings and not just single strings (Chiang 2006) . In a SCFG, each production has two sides (a source and a target) that are related. For example, we show a SCFG for a sentence in English and its translation to Japanese (Chiang 2006) :", "cite_spans": [ { "start": 273, "end": 286, "text": "(Chiang 2006)", "ref_id": "BIBREF24" }, { "start": 459, "end": 472, "text": "(Chiang 2006)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries.", "sec_num": "4.2.1" }, { "text": "S \u2192 NP 1 VP 2 |NP 1 VP 2 VP \u2192 V 1 NP 2 |NP 2 V 1 NP \u2192 i|watashi wa NP \u2192 the box|hako wo V \u2192 open|akemasu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries.", "sec_num": "4.2.1" }, { "text": "The numbers in the non-terminals serve as links between nodes in the source and target. These links are 1-to-1 and every non-terminal is always linked to another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries.", "sec_num": "4.2.1" }, { "text": "SCFGs have the limitation of only being able to relabel and reorder sibling nodes. In contrast, Synchronous Tree Substitution Grammars (STSGs, Eisner 2003) are able to perform more long-distance swapping. In a STSG, productions are pairs of elementary trees, which are tree fragments whose leaves can be non-terminal or terminal symbols. SCFGs impose an isomorphism constraint between the aligned trees. 
This requirement is relaxed by the STSGs. However, to account for all the different movement patterns that could exist in a language would require powerful and, perhaps, slow grammars (Smith and Eisner 2006). Quasi-synchronous Grammars (QG, Smith and Eisner 2006) relax the isomorphism constraint further, following the intuition that one of the parallel trees is inspired by the other. This means that any node in one tree can be linked to any other node on the other tree. Observe that in STSGs, even though the linked nodes can be in any part of the frontier of the trees, they still need to have the same syntactic tag. This is not the case in QGs because anything can align to anything.", "cite_spans": [ { "start": 135, "end": 155, "text": "(STSGs, Eisner 2003)", "ref_id": null }, { "start": 640, "end": 667, "text": "(QG, Smith and Eisner 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries.", "sec_num": "4.2.1" }, { "text": "The formalisms explained in the previous subsection have been used to automatically extract rules that convey the rewriting operations required to simplify sentences. Using these transformation rules, grammar-induction-based models then decide, given an original sentence, which rule(s) to apply and how to generate the final simplification output (often referred to as decoding).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification Models.", "sec_num": "4.2.2" }, { "text": "QG+ILP: Woodsend and Lapata (2011a) use QGs to induce rewrite rules that can perform sentence splitting, word substitution, and deletion. From word alignments between source and target sentences, they first align non-terminal nodes where more than one child node aligns. From these constituent alignments, they extract syntactic simplification rules. 
However, if a pair of trees has the same syntactic structure but differs only because of a lexical substitution, a more general rule is extracted considering only the words and their part-of-speech tags. To create the rules for sentence splitting, a source sentence is aligned with two consecutive target sentences (named main and auxiliary). They then select a split node, which is a source node that \"contributes\" (i.e., has aligned children nodes) to both main and auxiliary targets. This results in a rule with three components: the source node, the node in the target main sentence, and the phrase structure in the entire auxiliary sentence. With these transformation rules, Woodsend and Lapata (2011a) then use Integer Linear Programming (ILP) to find the best candidate simplification. During decoding, the original sentence's parse tree is traversed from top to bottom, applying at each node all the simplification rules that match. If more than one rule matches, each candidate simplification is added to the target tree. As a result, a \"super tree\" is created, which contains all possible simplifications of each node of the source sentence. Then, an ILP program decides which nodes should be kept and which should be removed. The objective function of the ILP considers a penalty for substitutions and rewrites (it favors the more common transformations with a lower penalty), and tries to reduce the number of words and syllables. The ILP has constraints to ensure grammaticality (if a phrase node is chosen, the node it depends on is also chosen), coherence (if one partition of a split sentence is chosen, the other partition is also chosen), and that there is always one (and only one) simplification per sentence. The authors trained two models: one extracting rules from the AlignedLP corpus (AlignILP) and the other using the RevisionWL corpus (RevILP). For evaluation, they used the test split from the PWKP instances. 
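The ILP selection step can be illustrated with a brute-force sketch: from the "super tree" of candidate node simplifications, pick one alternative per node so that a penalty combining output length and rule cost is minimized. A real system would use an ILP solver with the grammaticality and coherence constraints described above; the candidates and penalties below are invented.

```python
# Brute-force sketch of the QG+ILP selection step: choose one alternative per
# source node to minimize a penalty (words + per-rule penalty). Illustrative
# only; the actual model solves an ILP with explicit constraints.
from itertools import product as cartesian

# For each source node, alternatives produced by matching rules, as
# (text, rule_penalty) pairs; keeping the original text has zero penalty.
node_options = [
    [("Being more resistant to cold,", 0.0), ("", 0.5)],  # optional clause: keep or drop
    [("bricks enabled the construction of permanent buildings.", 0.0),
     ("bricks allowed people to build permanent buildings.", 0.2)],
]

def cost(choice):
    text_pen = sum(len(t.split()) for t, _ in choice)  # favor fewer words
    rule_pen = sum(p for _, p in choice)               # favor common transformations
    return text_pen + rule_pen

best = min(cartesian(*node_options), key=cost)
simplified = " ".join(t for t, _ in best if t)
```

Here the objective prefers dropping the optional clause and keeping the zero-penalty main clause, mirroring how the ILP trades off length reduction against transformation penalties.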
RevILP was their best model, achieving the closest scores to the references using both Flesch-Kincaid and human judgments on simplicity, grammaticality, and meaning preservation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification Models.", "sec_num": "4.2.2" }, { "text": "T3+Rank: Paetzold and Specia (2013) extract candidate tree rewriting rules using T3 (Cohn and Lapata 2009), an abstractive sentence compression model that uses STSGs for deletion, reordering, and substitution. Using word-aligned parallel sentences, the model maps the word alignment into a constituent-level alignment between the source and target trees by adapting the alignment template method of Och and Ney (2004). These constituent alignments are then generalized (i.e., aligned nodes are replaced with links) to extract rules. This generalization is performed by a recursive algorithm that attempts to find the minimal, most general set of synchronous rules. The recursive depth allowed by the algorithm determines the specificity of the rules.", "cite_spans": [ { "start": 84, "end": 106, "text": "(Cohn and Lapata 2009)", "ref_id": "BIBREF25" }, { "start": 400, "end": 418, "text": "Och and Ney (2004)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Simplification Models.", "sec_num": "4.2.2" }, { "text": "Once the rules have been extracted, Paetzold and Specia (2013) divide them into two sets: one with purely syntactic transformations, and the other with purely lexical transformations. This process has two goals: (1) to filter out rules that are not simplification transformations according to some pre-established criteria on what type of information the rule contains, and (2) to be able to explore syntactic and lexical simplification individually. Given an original sentence, the proposed approach applies both sets of rewriting rules separately. 
The candidate simplifications are then ranked, in each set, in order to determine the best syntactic and the best lexical simplifications. For ranking, they measure the perplexity of the output using a language model built from SEW. Evaluation results using human judgments on simplicity, grammaticality, and meaning preservation showed that candidate simplifications were only encouraging for lexical simplification: almost half of the automatically simplified sentences were considered simpler, over 80% were considered grammatical, and a little over 50% were identified as meaning preserving.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification Models.", "sec_num": "4.2.2" }, { "text": "SimpleTT: Feblowitz and Kauchak (2013) proposed an approach similar to T3 (Cohn and Lapata 2009) using STSGs. They modified the rule extraction process to reduce the number of candidate rules that need to be generalized. Also, instead of controlling the rules' specificity with the recursion depth, SimpleTT augments the rules with more information, such as the head's lexicalization and part-of-speech tag.", "cite_spans": [ { "start": 10, "end": 38, "text": "Feblowitz and Kauchak (2013)", "ref_id": "BIBREF38" }, { "start": 74, "end": 96, "text": "(Cohn and Lapata 2009)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Simplification Models.", "sec_num": "4.2.2" }, { "text": "During decoding, the model starts by trying to match the more specific rules up to the most general. If no rule matches, then the source parse tree is just copied. The model generates the 10,000 most probable simplifications for a given sentence according to the STSG grammar, and the best one is determined using a log-linear combination of features, such as rule probability and output length. For training and testing, the authors used the C&K-1 corpus, and compared their model against PBSMT-R and Moses-Del. 
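The perplexity-based ranking used by T3+Rank can be sketched with a toy bigram language model. The counts below are invented; the actual system uses a language model estimated from Simple English Wikipedia.

```python
# Sketch of ranking candidate simplifications by language-model perplexity,
# in the spirit of T3+Rank. Toy bigram counts, invented for illustration.
import math

counts = {("<s>", "bricks"): 4, ("bricks", "enabled"): 3, ("bricks", "facilitated"): 1,
          ("enabled", "buildings"): 2, ("facilitated", "buildings"): 1,
          ("buildings", "</s>"): 3}
unigram = {}
for (a, _), c in counts.items():
    unigram[a] = unigram.get(a, 0) + c

def perplexity(tokens, alpha=0.1, vocab=10):
    """Perplexity under an add-alpha-smoothed bigram model."""
    toks = ["<s>"] + tokens + ["</s>"]
    logp = 0.0
    for a, b in zip(toks, toks[1:]):
        p = (counts.get((a, b), 0) + alpha) / (unigram.get(a, 0) + alpha * vocab)
        logp += math.log(p)
    return math.exp(-logp / (len(toks) - 1))

cands = [["bricks", "enabled", "buildings"], ["bricks", "facilitated", "buildings"]]
best = min(cands, key=perplexity)  # lower perplexity = more fluent/simple output
```

The candidate whose word sequence is most probable under the simple-language model (lowest perplexity) is selected as the best simplification.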
Using human evaluation, SimpleTT obtains the highest scores for simplicity and grammaticality among the tested models, with values comparable to those for human simplifications. However, it obtains the lowest score in meaning preservation, presumably because SimpleTT tends to simplify by deleting a sentence's elements. This approach does not explicitly model sentence splitting. Table 4 summarizes the performance of the models trained with the SS approaches described. Unfortunately, results are not directly comparable. Overall, grammar-induction-based approaches, because of their pipeline architecture, offer more flexibility on how the rules are learned and how they are applied, as compared with end-to-end approaches. Even though Woodsend and Lapata (2011a) were the only ones who attempted to model splitting, the other approaches could be modified in a similar way, since the formalisms allow it.", "cite_spans": [ { "start": 1250, "end": 1277, "text": "Woodsend and Lapata (2011a)", "ref_id": "BIBREF130" } ], "ref_spans": [ { "start": 894, "end": 901, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Simplification Models.", "sec_num": "4.2.2" }, { "text": "Narayan and Gardent (2014) argue that the simplification transformation of splitting is semantics-driven. In many cases, splitting occurs when an entity takes part in two (or more) distinct events described in a single sentence. 
For example, in Sentence (1), bricks is involved in two events: \"being resistant to cold\" and \"enabling the construction of permanent buildings.\"", "cite_spans": [ { "start": 12, "end": 26, "text": "Gardent (2014)", "ref_id": "BIBREF76" } ], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "(1) Original: Being more resistant to cold, bricks enabled the construction of permanent buildings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "(2) Simplified: Bricks were more resistant to cold. Bricks enabled the construction of permanent buildings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "Even though when and where to split can be determined by syntax (e.g., sentences with relative or subordinate clauses), constructing the second sentence in the split by copying the element shared with the first split requires semantic information, because we need to identify the entity involved in both events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "Hybrid: Narayan and Gardent (2014) built an SS model by combining semantics-driven splitting and deletion, with PBSMT-based substitution and reordering. The general idea is to use semantic role information to identify the events in a given original sentence, and those events would determine how to split the sentence. Also, deletion would be directed by this information, since mandatory arguments for each identified verbal predicate should not be deleted. 
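The semantics-driven splitting just described can be sketched as follows. The hand-written event structures stand in for the output of a semantic parser (Hybrid obtains them from Boxer); predicate and role names are invented for illustration.

```python
# Sketch of semantics-driven splitting: an entity shared by two events
# licenses a split, and the shared entity is copied into the second sentence.
# Event structures are hand-written toy stand-ins for semantic parser output.
events = [
    {"predicate": "be_resistant", "agent": "bricks",
     "text": "were more resistant to cold"},
    {"predicate": "enable", "agent": "bricks",
     "text": "enabled the construction of permanent buildings"},
]

def split_on_shared_agent(events):
    """If all events share one agent, emit one simple sentence per event."""
    if len(events) > 1 and len({e["agent"] for e in events}) == 1:
        return [f"{e['agent'].capitalize()} {e['text']}." for e in events]
    return None  # no split licensed

splits = split_on_shared_agent(events)
```

For the bricks example, this produces one sentence per event, with the shared entity repeated as the subject of each, matching Simplified sentence (2) above.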
For substitution of complex words/phrases and reordering, the authors rely on a PBSMT-based model.", "cite_spans": [ { "start": 8, "end": 34, "text": "Narayan and Gardent (2014)", "ref_id": "BIBREF76" } ], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "The proposed method first uses Boxer (Curran, Clark, and Bos 2007) to obtain the semantic representation of the original sentences. From there, candidate splitting pairs are selected from events that share a common core semantic role (e.g., agent and patient). The probability of a candidate being a split is determined by the semantic roles associated with it. The probability of deleting a node is determined by its semantic relations to the split events. Additionally, the probabilities for substitution and reordering are determined by a PBSMT system. For training, they used the Expectation-Maximization algorithm of Yamada and Knight (2001) , in a similar fashion to Zhu, Bernhard, and Gurevych (2010), but calculating probabilities over the semantic graph produced by Boxer instead of a syntactic parse tree.", "cite_spans": [ { "start": 37, "end": 66, "text": "(Curran, Clark, and Bos 2007)", "ref_id": "BIBREF31" }, { "start": 622, "end": 646, "text": "Yamada and Knight (2001)", "ref_id": "BIBREF134" } ], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "The SS model is trained and tested using the PWKP corpus. Sentences for which Boxer failed to extract a semantic representation were excluded during training, and directly passed to the PBSMT system in testing. For evaluation, the model is compared against QG+ILP, PBSMT-R, and TSM. Hybrid performs splits closer in proportion to those of the references. It also achieves the highest BLEU score and smaller edit distance to references. 
With human evaluation, Hybrid obtains the highest score in simplicity, and is a close second to PBSMT-R for grammaticality and meaning preservation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "UNSUP: Narayan and Gardent (2016) propose a method that does not require aligned original-simplified sentences to train a TS model. Their approach first uses a context-aware lexical simplifier (Biran, Brody, and Elhadad 2011) that learns simplification rules from articles of EW and SEW. Given an original sentence, these rules are applied and the best combination of simplifications is found using dynamic programming. Then, they use Boxer to extract the semantic representation of the sentence and identify the events/predicates. After that, they estimate the maximum likelihood of the sequences of semantic role sets that would result after each possible split (i.e., subsequence of events). To compute these probabilities, they only rely on data from SEW. Finally, they use an ILP to determine which phrases in the sentence should be deleted, similarly to the compression model of Filippova and Strube (2008) .", "cite_spans": [ { "start": 7, "end": 33, "text": "Narayan and Gardent (2016)", "ref_id": "BIBREF77" }, { "start": 193, "end": 225, "text": "(Biran, Brody, and Elhadad 2011)", "ref_id": "BIBREF15" }, { "start": 885, "end": 912, "text": "Filippova and Strube (2008)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "For evaluation, they use the test set of PWKP, and compare against TSM, QG+ILP, PBSMT-R, and Hybrid. The proposed unsupervised pipeline achieves results comparable to those of the other models, but it is only better than TSM in terms of BLEU score. It produces more splits than Hybrid, but also more than the reference. There is no analysis about the correctness of the splits produced. 
Using human evaluation, UNSUP achieves the highest values in simplicity and grammaticality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "EvLex: \u0160tajner and Glava\u0161 (2017) introduce an SS model that can perform sentence splitting, content reduction, and lexical simplifications. For the first two transformations, they build on Glava\u0161 and \u0160tajner (2013), identifying events in a given sentence. Each identified event and its arguments constitute a new simpler sentence (splitting). Information that does not correspond to any of the events is discarded (content reduction). For event identification, they use EVGRAPH (Glava\u0161 and \u0160najder 2015), which relies on a supervised classifier to identify events and a set of manually constructed rules to identify the arguments. Finally, the pipeline also incorporates an unsupervised lexical simplification model (Glava\u0161 and \u0160tajner 2015). The authors carried out tests to determine whether the order of the components affects the resulting simplifications. They found that the differences were minimal.", "cite_spans": [ { "start": 475, "end": 499, "text": "(Glava\u0161 and\u0160najder 2015)", "ref_id": null }, { "start": 713, "end": 737, "text": "(Glava\u0161 and tajner 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "The proposed architecture is compared with QG+ILP (Woodsend and Lapata 2011a), testing on two data sets, one with news stories (NEWS) and one with Wikipedia sentences (WIKI). EvLex achieves the highest FRE score in the NEWS data set, while QG+ILP is the best in this metric in WIKI. Regarding human evaluations, their model achieves the best grammaticality and simplicity scores for both data sets. Table 5 summarizes the results of the models trained with the SS approaches presented in this section. 
Not all models are directly comparable because they were tested on different corpora. The semantics-aided models presented resemble, in part, the research of Bach et al. (2011) explained in Section 4.1.2, focused on sentence splitting. In that work, splitting is based on preserving an SVO order in each split, which could be considered as an agent-verb-patient structure. These findings suggest that the splitting operation requires more tailored modeling, different from standard MT-based sequence-to-sequence approaches. Split-and-Rephrase. Narayan et al. (2017) introduce a new task called split-and-rephrase, focused on splitting a sentence into several others, and making the necessary changes to ensure grammaticality. No deletions should be performed so as to preserve meaning. The authors use the WEBSPLIT data set (described in Section 2.3) to train and test five models for the split-and-rephrase task: (1) Hybrid (Narayan and Gardent 2014);", "cite_spans": [ { "start": 1027, "end": 1063, "text": "-and-Rephrase. Narayan et al. (2017)", "ref_id": null } ], "ref_spans": [ { "start": 399, "end": 406, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Semantics-Assisted", "sec_num": "4.3" }, { "text": "(2) Seq2Seq, which is an encoder-decoder with local-p attention (Luong, Pham, and Manning 2015) ; (3) MultiSeq2Seq, which is a multi-source sequence-to-sequence model (Zoph and Knight 2016) that takes as input the original sentence and its MR triples; and (4) one that models the problem in two steps: first learn to split, and then learn to rephrase. In this last model, the splitting step uses the original sentence and its MR to split the latter into several MR sets. However, two variations are explored for the rephrasing step: (1) Split-MultiSeq2Seq learns to rephrase from the split MRs and the original sentence in a multi-source fashion, while (2) Split-Seq2Seq only uses the split MRs and rephrases based on a sequence-to-sequence model. 
All models were automatically evaluated using multi-reference BLEU, the average number of simple sentences per complex sentence, and the average number of output words per output simple sentence. Split-Seq2Seq achieved the best scores in all the metrics. This result supports the idea that the split-and-rephrase task is better treated in a pipeline. One could also hypothesize that the semantic information in the MRs helps in the splitting step. However, according to the paper, the authors \"strip off named-entities and properties from each triple and only keep the tree skeleton\" when learning to split. This suggests that it may not be the semantic information itself (i.e., the properties), but the groupings of semantically related elements in each triple that helps perform the splits. Aharoni and Goldberg (2018) focus on the text-to-text setup of the task, that is, without using the MR information. They first propose a new split of the data set after determining that around 90% of unique simple sentences in the original development and test sets also appeared in the training set. With the new data splits, they train a vanilla sequence-to-sequence model with attention, and incorporate a copy mechanism inspired by work in abstractive text summarization (See, Liu, and Manning 2017). This model achieves the best performance in both the original and new data set splits, but with a low BLEU score of 24.97.", "cite_spans": [ { "start": 64, "end": 95, "text": "(Luong, Pham, and Manning 2015)", "ref_id": "BIBREF67" }, { "start": 167, "end": 189, "text": "(Zoph and Knight 2016)", "ref_id": "BIBREF138" }, { "start": 1542, "end": 1569, "text": "Aharoni and Goldberg (2018)", "ref_id": "BIBREF1" }, { "start": 2017, "end": 2044, "text": "See, Liu, and Manning 2017)", "ref_id": "BIBREF101" } ], "ref_spans": [], "eq_spans": [], "section": "Split", "sec_num": "4.3.1" }, { "text": "Botha et al. 
(2018) argue that poor performance on the task may be due to WEBSPLIT not being suitable for training models. According to the authors, because WEBSPLIT was derived from the WEBNLG data set, it only contains artificial sentence splits created from the RDF triples. As such, they introduce WikiSplit, a new data set for split-and-rephrase based on English Wikipedia edit histories (see Sec. 2). Botha et al. use the same model as Aharoni and Goldberg (2018) to experiment with different combinations of training data: WEBSPLIT only, WikiSplit only, and both. The evaluation is performed on WEBSPLIT. Results show that training using WikiSplit only or both improves performance on the task by around 30 BLEU points.", "cite_spans": [ { "start": 438, "end": 465, "text": "Aharoni and Goldberg (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Split", "sec_num": "4.3.1" }, { "text": "In this approach, SS is modeled as a sequence-to-sequence problem, and normally tackled with an attention-based encoder-decoder architecture (Bahdanau, Cho, and Bengio 2014) . The encoder projects the source sentence into a set of continuous vector representations from which the decoder generates the target sentence. A major advantage of this approach is that it allows training of end-to-end models without needing to extract features or estimate individual model components, such as the language model. In addition, all simplification transformations can be learned simultaneously, instead of developing individual mechanisms as in previous research.", "cite_spans": [ { "start": 141, "end": 173, "text": "(Bahdanau, Cho, and Bengio 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Sequence-to-Sequence", "sec_num": "4.4" }, { "text": "RNN-Based Architectures. Most models are based on Recurrent Neural Networks (RNNs) with long short-term memory units (LSTMs, Hochreiter and Schmidhuber 1997) . Given an original source sentence X = (x 1 , x 2 , . . .
, x |X| ), the model learns to predict its simplified version, Y = (y 1 , y 2 , . . . , y |Y| ). It uses an encoder that transforms the source sentence X into a sequence of hidden states (h S 1 , h S 2 , . . . , h S |X| ), from which the decoder generates one word y t+1 at a time in target Y. The generation process is conditioned on all the words generated so far y 1:t and a dynamic context vector c t , which also encodes the source sentence:", "cite_spans": [ { "start": 107, "end": 147, "text": "(LSTMs, Hochreiter and Schmidhuber 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(Y|X) = \\prod_{t=1}^{|Y|} P(y_t | y_{1:t-1}, X)", "eq_num": "(16)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y_{t+1} | y_{1:t}, X) = \\mathrm{softmax}(g(h_t^T, c_t))", "eq_num": "(17)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "where g(\u2022) is a neural network with one hidden layer, parametrized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(h_t^T, c_t) = W_o \\tanh(U_h h_t^T + W_h c_t)", "eq_num": "(18)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "where W o \u2208 R |V|\u00d7d , U h \u2208 R d\u00d7d , and W h \u2208 R d\u00d7d ; |V| is the output vocabulary size and d is the hidden unit size. 
h^T_t is the hidden state of the decoder LSTM that summarizes y_{1:t} (what has been generated so far):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h^T_t = LSTM(y_t, h^T_{t\u22121})", "eq_num": "(19)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "The dynamic context vector c_t is a weighted sum of the hidden states of the source sentence, whose weights \u03b1_{t,i} are determined by an attention mechanism:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_t = \u2211_{i=1}^{|X|} \u03b1_{t,i} h^S_i,  \u03b1_{t,i} = exp(h^T_t \u2022 h^S_i) / \u2211_{i'} exp(h^T_t \u2022 h^S_{i'})", "eq_num": "(20)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "NTS: Nisioi et al. (2017) introduced the first Neural Text Simplification approach, using the encoder-decoder with attention architecture provided by OpenNMT (Klein et al. 2017). They experimented with the default system, and also with combining pre-trained word2vec word embeddings (Mikolov et al. 2013) with locally trained ones. They also generated two candidate hypotheses for each beam size, and used BLEU and SARI to determine which hypothesis to choose from the n-best list of candidates. EW-SEW was used for training, and TurkCorpus for validation and testing. When compared against PBSMT-R and SBSMT (PPDB+SARI), NTS with its default features achieved the highest grammaticality and meaning preservation scores in human evaluation. SBSMT (PPDB+SARI) was still the best on SARI scores. Overall, NTS is able to perform simplifications limited to paraphrasing and deletion transformations. 
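The attention mechanism in Equations (19)-(20) reduces to a softmax over dot-product scores followed by a weighted sum of the encoder states. A minimal, dependency-free sketch (toy dimensions and values; this is a generic illustration, not the implementation used by any of the surveyed systems):

```python
import math

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: softmax over scores, then a weighted sum of
    the encoder hidden states (the context vector c_t in Equation 20)."""
    # score each source hidden state against the current decoder state
    scores = [sum(d * e for d, e in zip(decoder_state, enc)) for enc in encoder_states]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]  # attention weights; they sum to 1
    dim = len(encoder_states[0])
    context = [sum(a * enc[j] for a, enc in zip(alphas, encoder_states))
               for j in range(dim)]
    return alphas, context

# toy example: three source hidden states of dimension 2
h_S = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h_T = [1.0, 0.0]  # current decoder hidden state
alphas, c_t = attention_context(h_T, h_S)
```

Source states aligned with the decoder state receive larger weights; here the first and third source states score equally and dominate the second.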
It is also apparent that choosing the second hypothesis results in less conservative simplifications.", "cite_spans": [ { "start": 154, "end": 173, "text": "(Klein et al. 2017)", "ref_id": "BIBREF63" }, { "start": 286, "end": 307, "text": "(Mikolov et al. 2013)", "ref_id": "BIBREF73" } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "NematuSS: Alva-Manchego et al. (2017) also tested this standard neural architecture for SS, but used the implementation provided by Nematus (Sennrich et al. 2017). They experimented with different types of original-simplified sentence alignments extracted from the Newsela corpus. When experimenting with all possible sentence alignments, the model tended to be too aggressive, mostly performing deletions. When using only 1-to-1 alignments, the model became more conservative, and the simplifications performed were restricted to deletions and one-word replacements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "targeTS: Inspired by the work of Johnson et al. (2017) on multilingual neural MT, the authors enriched the encoder's input with information about the target audience and the (predicted) simplification transformations to be performed. Concretely, an artificial token was added to the beginning of the input sentences indicating (1) the grade level of the simplification instance, and/or (2) one of four possible text transformations: identical, elaboration, splitting, or joining. At test time, the text transformation token is either predicted (using a simple feature-based naive Bayes classifier) or an oracle label is used. They experimented using the standard neural architecture available in OpenNMT and data from the Newsela corpus. 
Results showed improvements in BLEU, SARI, and Flesch scores when using this extra information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "NSELSTM: Vu et al. (2018) used Neural Semantic Encoders (NSEs, Munkhdalai and Yu 2017) instead of LSTMs for the encoder. At any encoding time step, an NSE has access to all the tokens in the input sequence, and is thus able to capture more context information while encoding the current token, instead of relying only on the previous hidden state. Their approach is tested on PWKP, TurkCorpus, and Newsela. Two models are presented, one tuned using BLEU (NSELSTM-B) and one using SARI (NSELSTM-S). When compared against other models, NSELSTM-B achieved the best BLEU scores on the Newsela and TurkCorpus data sets, while NSELSTM-S was second-best on SARI scores on Newsela and PWKP. According to human evaluation, NSELSTM-B has the best grammaticality for Newsela and PWKP, while NSELSTM-S is the best in meaning preservation and simplicity for PWKP and TurkCorpus. 4.4.2 Modifying the Training Method. Without significantly changing the standard RNN-based architecture described before, some research has experimented with alternative learning algorithms with which the models are trained.", "cite_spans": [ { "start": 9, "end": 25, "text": "Vu et al. (2018)", "ref_id": "BIBREF125" } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "DRESS: Zhang and Lapata (2017) use the standard attention-based encoder-decoder as an agent within a Reinforcement Learning (RL) architecture (Figure 2). 
An advantage of this approach is that the model can be trained end-to-end using SS-specific metrics.", "cite_spans": [ { "start": 7, "end": 30, "text": "Zhang and Lapata (2017)", "ref_id": "BIBREF136" } ], "ref_spans": [ { "start": 142, "end": 151, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "The agent reads the original sentence and takes a series of actions (words in the vocabulary) to generate the simplified output. After that, it receives a reward that scores the output according to its simplicity, relevance (meaning preservation), and fluency (grammaticality). To reward simplicity, they calculate SARI in both the expected direction and in reverse (using the output as reference, and the reference as output) to counteract the effect of having noisy data and a single reference; the reward is then the weighted sum of both values. To reward relevance, they compute the cosine similarity between the vector representations (obtained using an LSTM) of the source sentence and the predicted output. To reward fluency, they calculate the probability of the predicted output using an LSTM language model trained on simple sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "For learning, the authors used the REINFORCE algorithm (Williams 1992), whose goal is to find an agent that maximizes the expected reward. As such, the training loss is given by the negative expected reward:", "cite_spans": [ { "start": 55, "end": 70, "text": "(Williams 1992)", "ref_id": "BIBREF129" } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\u03b8) = \u2212E_{(\u0177_1, ..., \u0177_{|\u0176|}) \u223c P_RL(\u2022|X)} [r(\u0177_1, ..., \u0177_{|\u0176|})]", "eq_num": "(21)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "where P_RL is the policy, given in our case by the distribution produced by the encoder-decoder (Equation 17), and r(\u2022) is the reward function. The authors followed Ranzato et al. (2016) in first pre-training the agent by minimizing the negative log-likelihood of the training source-target pairs, in order to avoid starting the process with a random policy. Then, for each target sequence, they used the same learning process to train the first L tokens, and applied the RL algorithm to the remaining tokens.", "cite_spans": [ { "start": 161, "end": 182, "text": "Ranzato et al. (2016)", "ref_id": "BIBREF92" } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "Even though the model as-is can learn lexical simplifications, the authors state that these are not always correct. As such, the model is modified to learn them explicitly. An encoder-decoder is trained on a parallel original-simplified corpus to obtain probabilistic word alignments (attention scores \u03b1_t) that help determine whether a word should or should not be simplified. 
For these lexical simplifications to take context into consideration, they are integrated into the RL model using linear interpolation following Equation (22), where P_LS is the probability of simplifying a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y_t | y_{1:t\u22121}, X) = (1 \u2212 \u03b7) P_RL(y_t | y_{1:t\u22121}, X) + \u03b7 P_LS(y_t | X, \u03b1_t)", "eq_num": "(22)" } ], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "The model that only uses the RL algorithm (DRESS) and the one that also incorporates the explicit lexical simplifications (DRESS-LS) are trained and tested on three different data sets: PWKP, WikiLarge (with TurkCorpus for testing), and Newsela. The authors compared their models against PBSMT-R, Hybrid, SBSMT (PPDB+SARI), and a standard encoder-decoder with attention (EncDecA). In PWKP, DRESS has the lowest FKGL followed by DRESS-LS. In the TurkCorpus, DRESS and DRESS-LS are second best in FKGL and third best in SARI. In Newsela, DRESS-LS achieves the highest BLEU score. Overall, DRESS and DRESS-LS obtained better scores than EncDecA, with DRESS-LS being the best of the three. It is worth noting that even though there were examples of sentence splitting in the training corpora (e.g., PWKP), the authors do not report on their models being able to perform it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "PointerCopy+MTL: Guo, Pasunuru, and Bansal (2018) worked with SS within a Multi-Task Learning (MTL) framework. Considering SS as the main task, they incorporated two auxiliary tasks to improve the model's performance: paraphrase generation and entailment generation. 
The former helps induce word and phrase replacements, reorderings, and deletions, while the latter ensures that the generated simplified output logically follows from the original sentence. The proposed MTL architecture implements multi-level soft sharing (Figure 3). Based on observations by Belinkov et al. (2017), lower-level layers in the encoder/decoder (i.e., those closer to the input/output) are shared among tasks focused on word representations and syntactic-level information (i.e., SS and paraphrasing), whereas higher-level layers are shared among tasks focused on semantics and meaning (i.e., SS and entailment). In addition, their RNN-based model is enhanced with a pointer-copy mechanism (See, Liu, and Manning 2017), which allows deciding at decoding time whether to copy a token from the input or generate one.", "cite_spans": [ { "start": 17, "end": 49, "text": "Guo, Pasunuru, and Bansal (2018)", "ref_id": "BIBREF48" }, { "start": 563, "end": 585, "text": "Belinkov et al. (2017)", "ref_id": "BIBREF11" }, { "start": 977, "end": 1005, "text": "(See, Liu, and Manning 2017)", "ref_id": "BIBREF101" } ], "ref_spans": [ { "start": 525, "end": 534, "text": "(Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "When training main and auxiliary tasks in parallel, a concern within MTL is how to determine the appropriate number of iterations on each task relative to the others. This is normally handled using a static hyperparameter. In contrast, Guo, Pasunuru, and Bansal (2018) proposed learning this mixing ratio dynamically using a controller based on multi-armed bandits. 
At each round, the controller selects a task based on noisy value estimates, observes a "reward" for the selected task (in their case, the negative validation loss of the main task), and switches accordingly.", "cite_spans": [ { "start": 236, "end": 268, "text": "Guo, Pasunuru, and Bansal (2018)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "The proposed model was trained and tested using PWKP, WikiLarge (with TurkCorpus as test set), and Newsela for SS; the SNLI (Bowman et al. 2015) and the MultiNLI (Williams, Nangia, and Bowman 2018) corpora for entailment generation; and ParaNMT (Wieting and Gimpel 2018) for paraphrase generation. Using automatic metrics, PointerCopy+MTL achieved the highest SARI score only in the Newsela corpus. With human judgments, their model scored as the best in simplicity. 4.4.3 Adding External Knowledge. The previously described models attempted to learn how to simplify using only information from the training data sets. Zhao et al. (2018) argued that the relatively small size of these data sets prevents models from generalizing well, considering the vast number of possible simplification transformations that exist. Therefore, they proposed to include human-curated paraphrasing rules from the Simple Paraphrase Database (SPPDB; Pavlick and Callison-Burch 2016) in a neural encoder-decoder architecture. This intuition is similar to Xu et al. (2016), who incorporated those rewriting rules into an SBSMT-based model. In addition, the authors moved from the RNN-based architecture to one based on the Transformer (Vaswani et al. 2017).", "cite_spans": [ { "start": 124, "end": 144, "text": "(Bowman et al. 2015)", "ref_id": "BIBREF18" }, { "start": 162, "end": 197, "text": "(Williams, Nangia, and Bowman 2018)", "ref_id": "BIBREF128" }, { "start": 619, "end": 637, "text": "Zhao et al. 
(2018)", "ref_id": "BIBREF137" }, { "start": 1032, "end": 1048, "text": "Xu et al. (2016)", "ref_id": "BIBREF133" }, { "start": 1211, "end": 1232, "text": "(Vaswani et al. 2017)", "ref_id": "BIBREF123" } ], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "The rewriting rules from SPPDB were incorporated into the model using two mechanisms. In Deep Critic Sentence Simplification (DCSS), the model uses a new loss function that maximizes the probability of generating the simplified form of a word, while minimizing the probability of generating its original one. In Deep Memory Augmented Sentence Simplification (DMASS), the model has a built-in memory that stores the rules from SPPDB in the form of context vectors, calculated from the hidden states of the encoder, and their corresponding generated outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "The model was trained only using WikiLarge, and tested on TurkCorpus and Newsela. The authors evaluated both mechanisms, DCSS and DMASS, independently as well as in conjunction. When compared to other models, DMASS+DCSS achieved the highest SARI score in both test sets. They also estimated the correctness of rule utilization based on ground-truth from SPPDB, and showed that their models also improved over previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN-Based", "sec_num": "4.4.1" }, { "text": "Architectures. Surya et al. (2019) proposed an unsupervised approach for developing a simplification system. Their motivation was to design an architecture that could be exploited to train SS models for languages or domains that do not have large resources of parallel original-simplified instances. Their proposal is based on a modified autoencoder that uses a shared encoder E and two dedicated decoders: one for generating complex sentences (G_d) and one for simple sentences (G_s). 
In addition, their model relies on Discriminator and Classifier modules. The Discriminator determines whether a given context vector sequence (from either complex or simple sentences) is close to one extracted from simple sentences in the data set. It interacts with G_s through an adversarial loss function L_adv, in a similar fashion to GANs (Goodfellow et al. 2014). The Classifier is in charge of diversification, ensuring through a loss function L_div that G_d and G_s attend differently to the hidden representations generated by the shared encoder. Two additional loss functions, L_rec and L_denoi, are used for reconstructing sentences and denoising, respectively. The full architecture can be seen in Figure 4.", "cite_spans": [ { "start": 15, "end": 32, "text": "Surya et al. 2019", "ref_id": "BIBREF110" }, { "start": 820, "end": 849, "text": "GANs (Goodfellow et al. 2014)", "ref_id": null } ], "ref_spans": [ { "start": 1201, "end": 1209, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Unsupervised", "sec_num": "4.4.4" }, { "text": "The proposed model (UNTS) was trained using an English Wikipedia dump that was partitioned into Complex and Simple sets using a threshold on Flesch Reading Ease scores. They also used 10,000 sentence pairs from the EW-SEW (Hwang et al. 2015) and WebSplit (Narayan et al. 2017) data sets to train a model (UNTS+10K) with minimal supervision. Their models were compared against unsupervised systems from the MT literature, as well as SS models like NTS (Nisioi et al. 2017) and SBSMT (Xu et al. 2016), using TurkCorpus as test data. When evaluated using automatic metrics, SBSMT scored the highest on SARI, but both UNTS and UNTS+10K were not far from the supervised models. This same behavior was observed in human evaluations. Even though the unsupervised model was trained using instances of sentence splitting from WebSplit, the authors do not report testing it on data for that specific text transformation. 
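The Flesch-based partitioning used to build UNTS's training sets can be illustrated with a rough sketch. The syllable counter below is a crude vowel-group heuristic and the threshold of 60 is an arbitrary illustrative choice, not the one used by Surya et al.:

```python
def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels (at least 1)."""
    vowels = "aeiouy"
    groups, prev_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    return max(1, groups)

def flesch_reading_ease(sentences):
    """Standard FRE formula:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    words = [w for s in sentences for w in s.split()]
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) \
           - 84.6 * (syllables / len(words))

def partition(sentences, threshold=60.0):
    """Split a corpus into Simple/Complex sets by per-sentence FRE score."""
    simple = [s for s in sentences if flesch_reading_ease([s]) >= threshold]
    complex_ = [s for s in sentences if flesch_reading_ease([s]) < threshold]
    return simple, complex_
```

A real partitioning would use a tuned threshold and a proper syllable counter; the sketch only shows the shape of the procedure.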
4.4.5 Simplification as Sequence Labeling. Alva-Manchego et al. (2017) model SS as a Sequence Labeling problem, identifying simplification transformations at the word or phrase level. They use the token-level annotation algorithms of MASSAlign (Paetzold, Alva-Manchego, and Specia 2017) to automatically generate annotated data from which an LSTM learns to predict simplification transformations; more specifically, deletions and replacements. During decoding, words labeled for deletion are simply omitted from the output. To produce replacements, they use the lexical simplifier of Paetzold and Specia (2017a). The proposed approach is compared against MT-based models: Moses (Koehn et al. 2007), Nematus (Sennrich et al. 2017), and NTS+word2vec (with default settings), using data from the Newsela corpus. Alva-Manchego et al. (2017) achieve the highest SARI score on the test set, and the best simplicity score with human judgments. This approach is inspired by the abstractive sentence compression model of Bingel and S\u00f8gaard (2016), who propose a tree labeling approach to remove or paraphrase syntactic units in the dependency tree of a given sentence, using a Conditional Random Fields predictor.", "cite_spans": [ { "start": 224, "end": 243, "text": "(Hwang et al. 2015)", "ref_id": "BIBREF52" }, { "start": 257, "end": 278, "text": "(Narayan et al. 2017)", "ref_id": "BIBREF78" }, { "start": 485, "end": 501, "text": "(Xu et al. 2016)", "ref_id": "BIBREF133" }, { "start": 964, "end": 991, "text": "Alva-Manchego et al. (2017)", "ref_id": "BIBREF3" }, { "start": 1161, "end": 1203, "text": "(Paetzold, Alva-Manchego, and Specia 2017)", "ref_id": "BIBREF84" }, { "start": 1503, "end": 1530, "text": "Paetzold and Specia (2017a)", "ref_id": "BIBREF83" }, { "start": 1598, "end": 1616, "text": "(Koehn et al. 2007", "ref_id": "BIBREF65" }, { "start": 1617, "end": 1649, "text": "), Nematus (Sennrich et al. 2017", "ref_id": null }, { "start": 1729, "end": 1756, "text": "Alva-Manchego et al. 
(2017)", "ref_id": "BIBREF3" }, { "start": 1928, "end": 1953, "text": "Bingel and S\u00f8gaard (2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised", "sec_num": "4.4.4" }, { "text": "Most sequence-to-sequence approaches for training SS models could be considered black boxes with respect to which simplification transformations should be applied to a given sentence. That is a desirable feature for a holistic approach to SS, where the rewriting operations interact with each other and are not necessarily applied in isolation (e.g., a sentence can be split, and some of its components deleted/reordered simultaneously). However, it could also be desirable to have a more modular approach to the problem: to first determine which simplifications should be performed in a given sentence, and then decide how to handle each transformation independently (potentially using a different approach for each operation). Approaches based on labeling could be helpful in such cases. A disadvantage, however, is collecting quality annotated data from which to learn. In addition, some simplification transformations are hard to predict (e.g., insertion of words that do not come from the original sentence). Table 6 summarizes the performance of the models trained with the SS approaches described. In this case, some of the values can be compared on the test set used. Of all the sequence-to-sequence models tested on TurkCorpus, DMASS+DCSS obtains the highest SARI score, with NSELSTM-B achieving the best BLEU. Both approaches used WikiLarge for training. Regarding the transformations that they can perform, sequence-to-sequence models seem to be able to perform substitutions, deletions, and reorderings, just like previous MT-based approaches. 
None of the papers reports if these architectures are able to perform sentence splitting.", "cite_spans": [], "ref_spans": [ { "start": 1018, "end": 1025, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Unsupervised", "sec_num": "4.4.4" }, { "text": "In this section we have presented a summary of research in data-driven automatic SS. This review has helped to understand the benefits and shortcomings of each approach to the task. Traditionally, SS has been reduced to four text transformations: substituting complex words or phrases, deleting or reordering sentence components, and splitting a complex sentence into several simpler ones (Zhu, Bernhard, and Gurevych 2010; Narayan and Gardent 2014). Table 7 lists the surveyed SS models (grouped by approach), the techniques each of them explores, and the simplification transformations that they can perform, considering the four traditional rewriting operations established.", "cite_spans": [ { "start": 389, "end": 423, "text": "(Zhu, Bernhard, and Gurevych 2010;", "ref_id": null }, { "start": 424, "end": 449, "text": "Narayan and Gardent 2014)", "ref_id": "BIBREF76" } ], "ref_spans": [ { "start": 452, "end": 459, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "Overall, SMT-based methods can perform substitutions, short-distance reorderings, and deletions, but fail to produce quality splits unless explicitly modeled using syntactic information or coupled with more expensive processes, such as semantic analysis. Table 6: Performance of sequence-to-sequence sentence simplification models as reported by their authors.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "Grammar-based approaches can model splits (and syntactic changes in general) more naturally, but the critical process of selecting which rule(s) to apply, in which order, and how to determine the best simplification output is complex. In this respect, sequence labeling seems more straightforward, and offers some flexibility in how each identified transformation could be handled individually. However, there is no available manually produced corpus with the required annotations that could allow studying the advantages and limitations of this approach. Additionally, there is little research on dealing with discourse-related issues caused by the rewriting transformations, or on considering more document-level constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "So far in this article we have explained how several SS models were implemented, and have commented on their performance based on the results reported by their authors. However, one problem we encountered was comparing these models against one another objectively, since most authors used different corpora or (implementations of) metrics for testing their models. Some researchers have realized this problem and it is now more common to have access to their models' outputs on some data sets. Also, some have reimplemented SS approaches that were not made available by their original authors. In this section, we take advantage of this fact to use publicly available outputs of SS models on commonly used data sets, and measure their simplification performance using the same set of metrics and test data. 
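Among the automatic metrics used in this benchmark, BLEU is the most familiar. The following is a simplified single-reference, sentence-level sketch (clipped n-gram precisions combined by a geometric mean, plus a brevity penalty, no smoothing), for illustration only; the benchmark itself relies on SACREBLEU:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    """Toy BLEU: geometric mean of clipped n-gram precisions times a
    brevity penalty. No smoothing, so any zero precision zeroes the score."""
    cand, ref = candidate.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        total = sum(cand_ngrams.values())
        if total == 0:
            return 0.0  # candidate too short to contain any n-gram
        clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        if clipped == 0:
            return 0.0
        log_prec_sum += math.log(clipped / total) / max_n
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec_sum)
```

Real BLEU implementations differ in tokenization, smoothing, and multi-reference handling, which is precisely why the benchmark pins a single implementation.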
As shown in Section 3, in order to automatically evaluate the output of an SS model, we normally use MT-inspired metrics (e.g., BLEU), readability metrics (e.g., Flesch-Kincaid), and simplicity metrics (e.g., SARI). Table 7: Summary of sentence-level text simplification approaches: SMT-based (first section), grammar-based (second section), semantics-assisted (third section), and neural sequence-to-sequence (fourth section). The transformations listed are the ones the authors acknowledge that their models can perform. Transformations with * are found in some of the outputs, but not explicitly modeled by the authors.", "cite_spans": [], "ref_spans": [ { "start": 1062, "end": 1069, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Benchmarking Sentence Simplification Models", "sec_num": "5." }, { "text": "Automatic metrics are easy to compute, but they only provide overall performance scores that cannot explain specific strengths and weaknesses of an SS approach. Therefore, we propose to also evaluate SS models based on how effective they are at executing specific simplification transformations. We show how this per-transformation assessment contributes to a better understanding of the automatic scores, and to an improved comparison between different SS models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Approach Transformations", "sec_num": null }, { "text": "In this section, we describe the test sets for which we collected SS models' outputs. After that, we specify how we compare them using a few of the automatic metrics previously described and automatic identification of a set of simplification transformations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Setting", "sec_num": "5.1" }, { "text": "Characteristics of the test sets on which SS models' outputs were collected. An instance corresponds to a source sentence with one or more possible references. 
Each reference can be composed of one or more sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 8", "sec_num": null }, { "text": "Test data set | Instances | Alignment type | References: PWKP | 93 | 1-to-1 | 1; PWKP | 7 | 1-to-N | 1; TurkCorpus | 359 | 1-to-1 | 8. 5.1.1 Data sets. We compare the SS models' outputs on two commonly used test sets extracted from corpora based on EW and SEW: PWKP (Zhu, Bernhard, and Gurevych 2010) and TurkCorpus (Xu et al. 2016). Both test sets contain automatically generated original-simplified sentence alignments (see Sections 2.1.1 and 2.3 for details). Table 8 lists some of their characteristics. The 1-to-N alignments in PWKP mean that some instances of sentence splitting are present in the data set. This is a limitation of TurkCorpus, which only contains 1-to-1 alignments, mostly providing instances of paraphrasing and deletion operations. On the other hand, each original sentence in TurkCorpus has eight simplified references produced through crowdsourcing. This allows us to more confidently use metrics that rely on multiple references, like SARI. We do not use the Newsela corpus in our benchmark because researchers are prohibited from publicly releasing models' outputs on these data. 5.1.2 Overall Performance Comparison. We first compare the models' outputs using automatic metrics so as to obtain an overall measure of simplification quality. We calculate BLEU, SARI, SAMSA, and FKGL. We compute the scores for these metrics using EASSE (Alva-Manchego et al. 2019), 10 a Python package for single access to (re)implementations of these metrics. 
Specifically, it uses SACREBLEU (Post 2018) 11 to calculate BLEU, a re-implementation of SARI's corpus-level version in Python (it was originally available in Java), a slightly modified version of the original SAMSA implementation 12 for improved execution speed, and a re-implementation of FKGL based on publicly available scripts 13 that fixes some edge-case inconsistencies.", "cite_spans": [ { "start": 283, "end": 299, "text": "(Xu et al. 2016)", "ref_id": "BIBREF133" }, { "start": 1470, "end": 1484, "text": "(Post 2018) 11", "ref_id": null } ], "ref_spans": [ { "start": 431, "end": 438, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Table 8", "sec_num": null }, { "text": "Comparison. We are also interested in an in-depth study of the simplification capabilities of each model. In particular, we want to determine which simplification transformations each model performs more effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation-Based Performance", "sec_num": "5.1.3" }, { "text": "In order to identify the simplification transformations that a model performed, it would be ideal to have corpora with this type of annotation and compare against them. However, no available simplification corpus provides this type of information. As a work-around, we use the annotator module of MASSAlign (Paetzold, Alva-Manchego, and Specia 2017). This tool provides algorithms to automatically label the simplification transformations carried out in aligned original-simplified sentences.", "cite_spans": [ { "start": 315, "end": 357, "text": "(Paetzold, Alva-Manchego, and Specia 2017)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Transformation-Based Performance", "sec_num": "5.1.3" }, { "text": "Based on word alignments, the algorithms attempt to identify copies, deletions, movements, and replacements. Alva-Manchego et al. 
(2017) tested the quality of the automatically produced labels, comparing them with manual annotations for 100 sentences from the Newsela corpus. These annotations were performed by four proficient speakers of English. For 30 of those sentences annotated by all four annotators, pairwise inter-annotator agreement yielded an average kappa value of 0.57. For all labels (excluding copies), the algorithms achieved a micro-averaged F1 score of 0.61, being especially effective at identifying deletions and replacements.", "cite_spans": [ { "start": 109, "end": 136, "text": "Alva-Manchego et al. (2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Transformation-Based Performance", "sec_num": "5.1.3" }, { "text": "MASSAlign's annotation algorithms were integrated into EASSE, and are used to generate two sets of automatic word-level annotations: (1) between the original sentences and their reference simplifications, and (2) between the original sentences and their automatic simplifications produced by an SS system. Considering (1) as reference labels, we calculate the F1 score of each transformation in (2) to estimate their correctness. When more than one reference simplification exists, we calculate the per-transformation F1 scores of the output against each reference, and then keep the highest one as the sentence-level score. The corpus-level scores are the average of sentence-level scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation-Based Performance", "sec_num": "5.1.3" }, { "text": "For the PWKP test set, the models that have publicly available outputs are: Moses (released by Zhu, Bernhard, and Gurevych 2010), PBSMT-R, QG+ILP (released by Narayan and Gardent 2014), Hybrid, TSM, UNSUP, EncDecA, DRESS, and DRESS-LS. 
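Cohen's kappa, the agreement statistic reported for the annotation study above, can be sketched as follows (standard two-annotator formula; the label values are illustrative, not the actual annotations):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected
    for the agreement expected by chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[l] * count_b[l]
              for l in set(labels_a) | set(labels_b)) / (n * n)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# illustrative word-level labels from two annotators (C=copy, D=delete, R=replace)
a = ["C", "C", "D", "R", "C", "D", "C", "R"]
b = ["C", "C", "D", "R", "D", "D", "C", "C"]
kappa = cohens_kappa(a, b)
```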
Overall scores using standard metrics are shown in Table 9, sorted by SARI.", "cite_spans": [], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 9", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Comparing Models in the PWKP Test Set", "sec_num": "5.2" }, { "text": "According to the values of the automatic metrics, Hybrid is the model that produces the simplest output as measured by SARI, followed by Moses. If we consider BLEU as indicative of grammaticality, Moses produces the most fluent output, followed closely by Hybrid. This is not surprising since MT-based models, in general, tend to produce well-formed sentences. Also, TSM achieves the lowest FKGL, which seems to be indicative of shorter output, rather than simpler output (its SARI score is in the middle of the pack). We also note that grammaticality has no impact on FKGL values; that is, a text with low grammaticality can still have a good FKGL score. Therefore, since DRESS obtains the lowest BLEU score, its FKGL value is not reliable. In terms of SAMSA, the best model is QG+ILP, followed by TSM and Hybrid. Because this metric focuses on assessing sentence splitting, it is expected that the models that explicitly perform this transformation score best. From these results, we could conclude that Hybrid is the overall best model, since it achieves the highest SARI, BLEU and SAMSA scores close to the highest, and an FKGL not too far from the reference's. It could be surprising to see that the SAMSA score for the reference is one of the lowest. This is explained by the way the metric is computed. Because it relies on word alignments, if the simplification significantly changes the original sentence, then these alignments are not possible, resulting in a low score.
For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Models in the PWKP Test Set", "sec_num": "5.2" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Models in the PWKP Test Set", "sec_num": "5.2" }, { "text": "Original: Genetic engineering has expanded the genes available to breeders to utilize in creating desired germlines for new crops.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Models in the PWKP Test Set", "sec_num": "5.2" }, { "text": "(4) Reference: New plants were created with genetic engineering. This is the reason why SAMSA should only be used for evaluating instances of sentence splitting that do not perform significant rephrasings of the sentence. Table 10 presents the results of our transformation-based performance measures for models' outputs on PWKP. From Table 9, we saw that Hybrid got the highest SARI score. The results in Table 10 help us better understand why that happened. Because SARI is a metric aimed mostly at lexical simplification, it rewards replacements and making small changes to the original sentence. In this test set, Moses's and Hybrid's replacement and copying operations are among the most accurate. EncDecA, which obtained the worst SARI score, is only average in replacement and copy, and has the lowest deletion accuracy, which suggests that it mostly repeats the original sentence with few modifications.
Finally, DRESS and DRESS-LS are the best at deleting content, which explains their low (and \"good\") FKGL scores.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 230, "text": "Table 10", "ref_id": null }, { "start": 335, "end": 342, "text": "Table 9", "ref_id": "TABREF8" }, { "start": 412, "end": 420, "text": "Table 10", "ref_id": null } ], "eq_spans": [], "section": "Comparing Models in the PWKP Test Set", "sec_num": "5.2" }, { "text": "The models evaluated on this test set are almost the same as before, except for Moses, QG+ILP, TSM, and UNSUP, for which we could not find available outputs. However, we now also include SBSMT (PPDB+SARI) and NTS+SARI. Because the TurkCorpus has multiple simplified references for each original sentence, we use all of them for measuring BLEU and SARI. For calculating reference values, we sample one of the eight human references for each instance, as others have done (Zhang and Lapata 2017). When reporting SAMSA scores, we only use the 70 sentences of TurkCorpus that also appear in HSplit. 14 This allows us to compute reference scores for instances that contain structural simplifications (i.e., sentence splits). We calculate SAMSA for each of the four manual simplifications in HSplit, and choose the highest as an upper bound. Results are presented in Table 11, sorted by SARI score. In this test set, the models that used external knowledge from Simple PPDB achieved the highest scores in terms of SARI: DMASS-DCSS and SBSMT (PPDB+SARI). The most fluent model according to BLEU is PBSMT-R, closely followed by DRESS-LS. This could be due to MT-based models being capable of generating grammatical output of high quality. Contrary to what was observed in the PWKP test set, Hybrid achieved the worst scores in SARI and SAMSA. This could be explained by its low FKGL score.
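The recurring FKGL caveat (short output scores well regardless of quality) follows directly from the Flesch-Kincaid Grade Level formula, which considers only sentence length and syllables per word. A minimal sketch, using a naive vowel-group syllable counter as an approximation:

```python
import re

def count_syllables(word):
    """Crude approximation: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text):
    """FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# A short, barely grammatical output still gets a very low ("good") grade
# level, while a longer well-formed sentence scores much higher:
print(fkgl("Cat sat mat."))
print(fkgl("Genetic engineering has expanded the genes available to breeders."))
```

Nothing in the formula checks grammaticality or meaning preservation, which is why a low FKGL on its own is not evidence of a good simplification.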
It appears that Hybrid tends to modify the original sentence much more than other models, potentially by deleting content and producing shorter outputs. In this test set, this behavior is penalized, since most references were rewritten only with paraphrases of words and phrases. Table 12 presents the results of our transformation-based performance measures for models' outputs on the TurkCorpus test set. As with the PWKP test set, the effectiveness results on TurkCorpus support the scores of the automatic metrics (Table 11). SBSMT (PPDB+SARI) is the best at performing replacements, which explains its high SARI score. PBSMT-R obtained the best BLEU score, which is explained by its above-average copy transformations. Even though Hybrid achieved the lowest FKGL value, it is the best in deletions, has below-average copying, and does not produce accurate replacements. This, again, suggests that a low FKGL score does not necessarily indicate good simplifications. Finally, the origin of the TurkCorpus set itself could explain some of these results. According to Xu et al. (2016), the participants performing the simplifications were instructed to mostly produce paraphrases, that is, mostly replacements with virtually no deletions. As such, copying is a significant operation and, therefore, models that are good at performing it better reflect the characteristics of the human simplifications in this data set.", "cite_spans": [ { "start": 491, "end": 514, "text": "(Zhang and Lapata 2017)", "ref_id": "BIBREF136" }, { "start": 2486, "end": 2502, "text": "Xu et al.
(2016)", "ref_id": "BIBREF133" } ], "ref_spans": [ { "start": 883, "end": 891, "text": "Table 11", "ref_id": "TABREF9" }, { "start": 1689, "end": 1697, "text": "Table 12", "ref_id": "TABREF1" }, { "start": 1932, "end": 1942, "text": "(Table 11)", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Comparing Models in the TurkCorpus Test Set", "sec_num": "5.3" }, { "text": "In this article we presented a survey of research in Text Simplification, focusing on models that perform the task at the sentence level. We limited our review to models that learn to produce simplifications by using parallel corpora of original-simplified sentences. We reviewed the main resources exploited by these approaches, detailed how models are trained using these data sets, and explained how they are evaluated. Finally, we compared several SS models using standard overall-performance metrics, and proposed a new operation-specific method for qualifying the simplification transformations that each SS model performs. The latter analysis provided us with insights into the simplification capabilities of each approach, which help better explain the initial automatic scores. Based on our review, we suggest the following as areas that are worth exploring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Corpora Diversity. Most data sets used in data-driven SS research are based on EW and SEW. Their quality has been questioned (see Section 2.1), but their public availability and shareability make them popular for research purposes. The Newsela corpus offers the advantage of being produced by professionals, which ensures a higher quality across all texts available. However, the fact that common splits of the data cannot be publicly shared hinders the development and objective comparison of models that use it.
We believe that our research area would benefit from data sets that combine the positive features of both: high-quality, professionally produced data that can be publicly shared. In addition, it would be desirable that these new data sets be as diverse as possible in terms of application domains, target audiences, and text transformations realized. Finally, it would also be valuable to follow Xu et al. (2016) by collecting multiple simplification references per instance, for a fairer evaluation.", "cite_spans": [ { "start": 904, "end": 920, "text": "Xu et al. (2016)", "ref_id": "BIBREF133" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Explanation Generation. Even though most current SS approaches do not focus on specific simplification transformations, it is safe to say that they tackle the four main ones: deletion, substitution, reordering, and splitting (see Table 7). However, by definition, simplifying a text could also involve further explaining complex terms or concepts. This is not merely replacing a word or phrase with a simpler synonym or its definition, but elaborating on the concept in a natural way that keeps the text grammatical, preserves its meaning, and is simple. This is an important research area in SS where limited work has been published (Damay et al. 2006; Watanabe et al. 2009; Kandula, Curtis, and Zeng-Treitler 2010; Eom, Dickinson, and Sachs 2012).", "cite_spans": [ { "start": 635, "end": 654, "text": "(Damay et al. 2006;", "ref_id": "BIBREF32" }, { "start": 655, "end": 676, "text": "Watanabe et al.
2009;", "ref_id": "BIBREF126" }, { "start": 677, "end": 717, "text": "Kandula, Curtis, and Zeng-Treitler 2010;", "ref_id": "BIBREF58" }, { "start": 718, "end": 749, "text": "Eom, Dickinson, and Sachs 2012)", "ref_id": null } ], "ref_spans": [ { "start": 230, "end": 237, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Personalized Simplification. All SS approaches reviewed in this article are general purpose, that is, without a specific target audience. This is because current SS research focuses on how to learn the simplification operations that are realized in the corpus being used, and not on the end-user of the model's output. The simplification needs of a non-native speaker are different from those of a person with autism or with a low-literacy level. One possible solution could be to create target-audience-specific corpora to learn from. However, even within the same target group, individuals have specific simplification needs and preferences. We believe it would be meaningful to develop approaches that are capable of handling the specific needs of their users, and possibly learning from their interactions as a way to generate more helpful simplifications for each particular person. Document-level Approaches. Most TS models focus on simplifying sentences individually, and there is little research on tackling the problem with a document-level perspective. Woodsend and Lapata (2011b) and Mandya, Nomoto, and Siddharthan (2014) produce simplifications at the sentence level and try to globally optimize readability scores or the length of the document. However, Siddharthan (2003) points out that syntactic alterations to sentences (especially splitting) can affect the rhetorical relations between them, which can only be resolved by going beyond sentence boundaries.
This is an exciting area of research, since simplifying a complete document is a more realistic use-case scenario for a simplification model. This line of research should begin by identifying what makes document simplification different from sentence simplification. It is likely that transformations that span multiple sentences are performed, which could never be tackled by a sentence-level model. In addition, proper corpora should be curated for training and testing of data-driven models, and new evaluation methodologies should be devised. The Newsela corpus is a resource that could be exploited in this regard. So far, it has only been used for sentence-level TS, even though it contains original-simplified aligned documents, with versions at several simplification levels.", "cite_spans": [ { "start": 1054, "end": 1081, "text": "Woodsend and Lapata (2011b)", "ref_id": "BIBREF130" }, { "start": 1086, "end": 1124, "text": "Mandya, Nomoto, and Siddharthan (2014)", "ref_id": "BIBREF68" }, { "start": 1255, "end": 1273, "text": "Siddharthan (2003)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Sentence Joining. Most current SS models perform sentence compression, in that they may delete part of the content of a sentence that could be regarded as unimportant. However, as presented in Section 1.2, studies have shown that humans tend to join sentences together while simplifying a text, and perform a form of abstractive summarization. No state-of-the-art TS model considers this type of operation while transforming a text, perhaps because such models only process one sentence at a time. Incorporating sentence joining could help bring a document-level perspective into current TS systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Simplification Evaluation.
For automatic evaluation of SS models, there are currently only two simplification-specific metrics: SARI (focused on paraphrasing) and SAMSA (focused on sentence splitting). However, as mentioned in Section 1.2, humans perform several more transformations that are not currently measured when assessing a model's output. The transformation-specific evaluation presented in this article is merely a method for better understanding what SS models are doing; it is not a fully fledged metric. We believe that more work needs to be done to improve how we evaluate and compare SS models automatically. Research on Quality Estimation has shown promising results in using reference-less metrics to evaluate generated outputs, allowing automatic assessment to speed up and scale. This line of work has started to be applied to simplification (\u0160tajner et al. 2016; Martin et al. 2018), and we believe it needs to be explored further. In addition, human-based evaluation has been limited to three criteria: grammaticality, meaning preservation, and simplicity. Are these criteria enough? Would they still be relevant if we moved to a document-level perspective?", "cite_spans": [ { "start": 885, "end": 906, "text": "(\u0160tajner et al. 2016;", "ref_id": null }, { "start": 907, "end": 926, "text": "Martin et al. 2018)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Would assessing the usefulness of the simplifications for target users (Mandya, Nomoto, and Siddharthan 2014) be a more reliable quality measure? We believe these are questions that need to be addressed.", "cite_spans": [ { "start": 71, "end": 108, "text": "(Mandya, Nomoto, and Siddharthan 2014", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "Text Simplification is a research area with significant application potential.
It can have a meaningful impact on people's lives and help create a more inclusive society. With the development of new Natural Language Processing technologies (especially neural-based models), it has started to receive more attention in recent years. However, there are still several open questions that pose challenges to our research community. We hope that this article has helped provide an understanding of what has been done so far in the area and, more importantly, has motivated more people to advance the current state of the art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "https://simple.wikipedia.org. 2 https://wikipedia.org.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Wiktionary is a free dictionary in the format of a wiki so that everyone can add and edit word definitions. Available at https://en.wiktionary.org.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://newsela.com/data/. 5 https://github.com/neosyon/SimpTextAlign.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/ghpaetzold/massalign.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/dhfbk/simpitiki.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": ".2.3 Simplification Metrics. SARI (System output Against References and Input sentence) was introduced by Xu et al. (2016) as a means to measure \"how good\" the words added, deleted, and kept by a simplification model are.
This metric compares the output of an SS model against multiple simplification references and the original sentence. The intuition behind SARI is to reward models for adding n-grams that occur in any of the references but not in the input, to reward keeping n-grams both in the output and in the references, and to reward not over-deleting n-grams. SARI is the arithmetic mean of n-gram precisions and recalls for add, keep, and delete; the higher the final value, the better. Xu et al. (2016) show that SARI correlates with human judgments of simplicity gain. As such, this metric has become the standard measure for evaluating and comparing SS models' outputs. Considering a model output $O$, the input sentence $I$, references $R$, and $\#_g(\cdot)$ as a binary indicator of the occurrence of n-gram $g$ in a given set, we first calculate the n-gram precision $p(n)$ and recall $r(n)$ for the three operations listed (add, keep, and delete): $p_{\mathrm{add}}(n) = \frac{\sum_{g \in O} \min(\#_g(O \cap \bar{I}), \#_g(R))}{\sum_{g \in O} \#_g(O \cap \bar{I})}$, where $\#_g(O \cap \bar{I}) = \max(\#_g(O) - \#_g(I), 0)$; $r_{\mathrm{add}}(n) = \frac{\sum_{g \in O} \min(\#_g(O \cap \bar{I}), \#_g(R))}{\sum_{g \in O} \#_g(R \cap \bar{I})}$, where $\#_g(R \cap \bar{I}) = \max(\#_g(R) - \#_g(I), 0)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In Quality Estimation, the goal is to evaluate an output translation without comparing it to a reference. For a comprehensive review of this area of research, please refer to Specia, Scarton, and Paetzold (2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/moses/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/feralvam/easse. 11 https://github.com/mjpost/sacreBLEU. 12 https://github.com/eliorsulem/SAMSA.
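The add-operation formulas above translate directly into code. Below is a minimal sketch for unigrams (n = 1); keep and delete follow the same pattern with their own set definitions, and full SARI then averages the resulting scores over n-gram orders. This is an illustration of the definitions, not a substitute for the EASSE implementation.

```python
from collections import Counter

def ngram_counts(tokens, n=1):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def add_scores(output, source, references, n=1):
    """Precision and recall of the 'add' operation for a single sentence."""
    O, I = ngram_counts(output, n), ngram_counts(source, n)
    # #_g(R): binary indicator of occurrence in any reference
    R = Counter()
    for ref in references:
        for g in ngram_counts(ref, n):
            R[g] = 1
    # #_g(O ∩ Ī) = max(#_g(O) - #_g(I), 0): n-grams the system added
    O_added = {g: max(c - I[g], 0) for g, c in O.items()}
    # #_g(R ∩ Ī) = max(#_g(R) - #_g(I), 0): n-grams the references added
    R_added = {g: max(R[g] - I[g], 0) for g in R}
    correct = sum(min(c, R[g]) for g, c in O_added.items())
    denom_p, denom_r = sum(O_added.values()), sum(R_added.values())
    p = correct / denom_p if denom_p else 0.0
    r = correct / denom_r if denom_r else 0.0
    return p, r

src = "the cat sat on the mat".split()
out = "the cat sat on a mat".split()
refs = ["a cat sat on a mat".split()]
print(add_scores(out, src, refs))  # → (1.0, 1.0): the added "a" is also added by the reference
```

Note how the precision denominator only counts n-grams the system actually added; a model that copies the input verbatim contributes nothing to the add score, which is why SARI rewards genuine rewriting.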
13 https://github.com/mmautner/readability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "At the time of this submission only a subset of 70 sentences had been released from HSplit. However, the full corpus will soon be available in EASSE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Universal conceptual cognitive annotation (UCCA)", "authors": [ { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "228--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abend, Omri and Ari Rappoport. 2013. Universal conceptual cognitive annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228-238, Sofia.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Split and rephrase: Better evaluation and stronger baselines", "authors": [ { "first": "Roee", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "719--724", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aharoni, Roee and Yoav Goldberg. 2018. Split and rephrase: Better evaluation and stronger baselines. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 719-724, Melbourne.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A corpus analysis of simple account texts and the proposal of simplification strategies: First steps towards text simplification systems", "authors": [ { "first": "Sandra", "middle": [ "M" ], "last": "Alu\u00edsio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "A", "middle": [ "S" ], "last": "Thiago", "suffix": "" }, { "first": "Erick", "middle": [ "G" ], "last": "Pardo", "suffix": "" }, { "first": "Helena", "middle": [ "M" ], "last": "Maziero", "suffix": "" }, { "first": "Renata", "middle": [ "P M" ], "last": "Caseli", "suffix": "" }, { "first": "", "middle": [], "last": "Fortes", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 26th Annual ACM International Conference on Design of Communication, SIGDOC '08", "volume": "", "issue": "", "pages": "15--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alu\u00edsio, Sandra M., Lucia Specia, Thiago A. S. Pardo, Erick G. Maziero, Helena M. Caseli, and Renata P. M. Fortes. 2008. A corpus analysis of simple account texts and the proposal of simplification strategies: First steps towards text simplification systems. 
In Proceedings of the 26th Annual ACM International Conference on Design of Communication, SIGDOC '08, pages 15-22, Lisbon.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning how to simplify from explicit labeling of complexsimplified text pairs", "authors": [ { "first": "", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Fernando", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "295--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alva-Manchego, Fernando, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex- simplified text pairs. 
In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 295-305, Taipei.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "EASSE: Easier Automatic Sentence Simplification Evaluation", "authors": [ { "first": "", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "Louis", "middle": [], "last": "Fernando", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations", "volume": "", "issue": "", "pages": "49--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alva-Manchego, Fernando, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier Automatic Sentence Simplification Evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 49-54, Hong Kong.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An Analysis of Crowdsourced Text Simplifications", "authors": [ { "first": "Marcelo", "middle": [], "last": "Amancio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2014, "venue": "Third Workshop on Predicting and Improving Text Readability for Target Reader Populations", "volume": "", "issue": "", "pages": "123--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amancio, Marcelo and Lucia Specia. 2014. An Analysis of Crowdsourced Text Simplifications. 
In Third Workshop on Predicting and Improving Text Readability for Target Reader Populations, PITR 2014, pages 123-130, Gothenburg.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Assessing relative sentence complexity using an incremental CCG parser", "authors": [ { "first": "Bharat", "middle": [], "last": "Ambati", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Ram", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1051--1057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ambati, Bharat Ram, Siva Reddy, and Mark Steedman. 2016. Assessing relative sentence complexity using an incremental CCG parser. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1051-1057, San Diego, CA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unsupervised statistical machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3632--3642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "TriS: A statistical sentence simplifier with log-linear models and margin-based discriminative training", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": ";", "middle": [], "last": "Vancouver", "suffix": "" }, { "first": "", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "474--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artetxe, Mikel, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proceedings of the 6th International Conference on Learning Representations, Vancouver. Bach, Nguyen, Qin Gao, Stephan Vogel, and Alex Waibel. 2011. TriS: A statistical sentence simplifier with log-linear models and margin-based discriminative training. 
In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 474-482, Chiang Mai.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sentence alignment for monolingual comparable corpora", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Noemie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, EMNLP '03", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, Regina and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. 
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, EMNLP '03, pages 25-32, Stroudsburg, PA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What do neural machine translation models learn about morphology?", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "861--872", "other_ids": {}, "num": null, "urls": [], "raw_text": "Belinkov, Yonatan, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872, Vancouver.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Predicting misreadings from gaze in children with reading difficulties", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Sigrid", "middle": [], "last": "Klerke", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "24--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bingel, Joachim, Maria Barrett, and Sigrid Klerke. 2018. Predicting misreadings from gaze in children with reading difficulties. 
In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 24-34, New Orleans, LA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Lexi: A tool for adaptive, personalized text simplification", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "245--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bingel, Joachim, Gustavo Paetzold, and Anders S\u00f8gaard. 2018. Lexi: A tool for adaptive, personalized text simplification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 245-258, Santa Fe, NM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Text simplification as tree labeling", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "337--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bingel, Joachim and Anders S\u00f8gaard. 2016. Text simplification as tree labeling. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 337-343, Berlin.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Putting it simply: A context-aware approach to lexical simplification", "authors": [ { "first": "Or", "middle": [], "last": "Biran", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Brody", "suffix": "" }, { "first": "Noemie", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": ";", "middle": [ "A" ], "last": "Faruqui", "suffix": "" }, { "first": "John", "middle": [], "last": "Alex", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "47", "issue": "", "pages": "87--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biran, Or, Samuel Brody, and Noemie Elhadad. 2011. Putting it simply: A context-aware approach to lexical simplification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 496-501, Portland, OR. Botha, Jan A., Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning to split and rephrase from Wikipedia edit history. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 732-737, Brussels. Bott, Stefan and Horacio Saggion. 2011a. Spanish text simplification: An exploratory study. 
Procesamiento del Lenguaje Natural, 47:87-95.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An unsupervised alignment algorithm for text simplification corpus construction", "authors": [ { "first": "Stefan", "middle": [], "last": "Bott", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation, MTTG '11", "volume": "", "issue": "", "pages": "20--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bott, Stefan and Horacio Saggion. 2011b. An unsupervised alignment algorithm for text simplification corpus construction. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, MTTG '11, pages 20-26, Stroudsburg, PA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Text simplification tools for Spanish", "authors": [ { "first": "Stefan", "middle": [], "last": "Bott", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Mille", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation, LREC'12", "volume": "", "issue": "", "pages": "1665--1671", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bott, Stefan, Horacio Saggion, and Simon Mille. 2012. Text simplification tools for Spanish. 
In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC'12, pages 1665-1671, Istanbul.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "632--642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bowman, Samuel R., Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Design and annotation of the first Italian corpus for text simplification", "authors": [ { "first": "Dominique", "middle": [], "last": "Brunato", "suffix": "" }, { "first": "Felice", "middle": [], "last": "Dell'Orletta", "suffix": "" }, { "first": "Giulia", "middle": [], "last": "Venturi", "suffix": "" }, { "first": "Simonetta", "middle": [], "last": "Montemagni", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brunato, Dominique, Felice Dell'Orletta, Giulia Venturi, and Simonetta Montemagni. 2015. Design and annotation of the first Italian corpus for text simplification. 
In Proceedings of the 9th", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Linguistic Annotation Workshop, LAW IX", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "31--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Annotation Workshop, LAW IX, pages 31-41, Denver, CO.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Supporting the adaptation of texts for poor literacy readers: A text simplification editor for Brazilian Portuguese", "authors": [ { "first": "", "middle": [], "last": "Candido", "suffix": "" }, { "first": "Erick", "middle": [], "last": "Arnaldo", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Maziero", "suffix": "" }, { "first": "", "middle": [], "last": "Gasperin", "suffix": "" }, { "first": "A", "middle": [ "S" ], "last": "Thiago", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Pardo", "suffix": "" }, { "first": "Sandra", "middle": [ "M" ], "last": "Specia", "suffix": "" }, { "first": "", "middle": [], "last": "Aluisio", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, BEA '09", "volume": "", "issue": "", "pages": "34--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candido Jr., Arnaldo, Erick Maziero, Caroline Gasperin, Thiago A. S. Pardo, Lucia Specia, and Sandra M. Aluisio. 2009. Supporting the adaptation of texts for poor literacy readers: A text simplification editor for Brazilian Portuguese. 
In Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, BEA '09, pages 34-42, Boulder, CO.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Practical simplification of English newspaper text to assist aphasic readers", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Guido", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Canning", "suffix": "" }, { "first": "Siobhan", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "John", "middle": [], "last": "Tait", "suffix": "" } ], "year": 1998, "venue": "Proceedings of AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology", "volume": "", "issue": "", "pages": "7--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carroll, John, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of English newspaper text to assist aphasic readers. 
In Proceedings of AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology, pages 7-10, Madison, WI.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Building a Brazilian Portuguese parallel corpus of original and simplified texts", "authors": [ { "first": "Helena", "middle": [ "M" ], "last": "Caseli", "suffix": "" }, { "first": "F", "middle": [], "last": "Tiago", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "", "middle": [], "last": "Specia", "suffix": "" }, { "first": "A", "middle": [ "S" ], "last": "Thiago", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Pardo", "suffix": "" }, { "first": "Sandra", "middle": [ "M" ], "last": "Gasperin", "suffix": "" }, { "first": "R", "middle": [], "last": "Aluisio ; Chandrasekar", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Doran", "suffix": "" }, { "first": "B", "middle": [], "last": "Srinivas", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 10th Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "1041--1044", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caseli, Helena M., Tiago F. Pereira, Lucia Specia, Thiago A. S. Pardo, Caroline Gasperin, and Sandra M. Aluisio. 2009. Building a Brazilian Portuguese parallel corpus of original and simplified texts. In Proceedings of the 10th Conference on Intelligent Text Processing and Computational Linguistics, pages 59-70, Mexico. Chandrasekar, R., Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th Conference on Computational Linguistics, COLING '96, pages 1041-1044, Copenhagen.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An introduction to synchronous grammars. 
Tutorial at ACL", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chiang, David. 2006. An introduction to synchronous grammars. Tutorial at ACL 2006. Available at http://www3.nd.edu/ dchiang/papers/synchtut.pdf.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Sentence compression as tree transduction", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2009, "venue": "Journal of Artificial Intelligence Research", "volume": "34", "issue": "", "pages": "637--674", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohn, Trevor and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, 34:637-674.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An abstractive approach to sentence compression", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2013, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "4", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohn, Trevor and Mirella Lapata. 2013. An abstractive approach to sentence compression. 
ACM Transactions on Intelligent Systems and Technology, 4(3):41:1-41:35.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning to simplify sentences using Wikipedia", "authors": [ { "first": "William", "middle": [], "last": "Coster", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation, MTTG '11", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coster, William and David Kauchak. 2011a. Learning to simplify sentences using Wikipedia. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, MTTG '11, pages 1-9, Portland, OR.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Ultraconservative Online Algorithms for Multiclass Problems", "authors": [ { "first": "William", "middle": [], "last": "Coster", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers", "volume": "2", "issue": "", "pages": "951--991", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coster, William and David Kauchak. 2011b. Simple English Wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers -Volume 2, HLT '11, pages 665-669, Stroudsburg, PA. Crammer, Koby and Yoram Singer. 2003. Ultraconservative Online Algorithms for Multiclass Problems. 
Journal of Machine Learning Research, 3:951-991.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A linguistic analysis of simplified and authentic texts", "authors": [ { "first": "", "middle": [], "last": "Mcnamara", "suffix": "" } ], "year": 2007, "venue": "The Modern Language Journal", "volume": "91", "issue": "1", "pages": "15--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "McNamara. 2007. A linguistic analysis of simplified and authentic texts. The Modern Language Journal, 91(1):15-30.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Linguistically motivated large-scale NLP with C&C and Boxer", "authors": [ { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", "volume": "", "issue": "", "pages": "33--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Curran, James R., Stephen Clark, and Johan Bos. 2007. Linguistically motivated large-scale NLP with C&C and Boxer. 
In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 33-36, Prague.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "SIMTEXT text simplification of medical literature", "authors": [ { "first": "Jerwin", "middle": [], "last": "Damay", "suffix": "" }, { "first": "S", "middle": [], "last": "Jan", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "Jaime", "suffix": "" }, { "first": "D", "middle": [], "last": "Lojico", "suffix": "" }, { "first": "Kimberly", "middle": [], "last": "", "suffix": "" }, { "first": "Amanda", "middle": [ "L" ], "last": "Lu", "suffix": "" }, { "first": "Dex", "middle": [ "B" ], "last": "Tarantan", "suffix": "" }, { "first": "Ethel", "middle": [ "C" ], "last": "Ong", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 3rd National Natural Language Processing Symposium -Building Language Tools and Resources", "volume": "", "issue": "", "pages": "34--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damay, Jerwin Jan S., Gerard Jaime D. Lojico, Kimberly Amanda L. Lu, Dex B. Tarantan, and Ethel C. Ong. 2006. SIMTEXT text simp- lification of medical literature. In Proceedings of the 3rd National Natural Language Processing Symposium -Building Language Tools and Resources, pages 34-38, Manila.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Meteor universal: Language specific translation evaluation for any target language", "authors": [ { "first": "De", "middle": [], "last": "Belder", "suffix": "" }, { "first": "Jan", "middle": [], "last": "", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the EACL 2014 Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "376--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Belder, Jan and Marie-Francine Moens. 2010. Text simplification for children. 
In Proceedings of the SIGIR 2010 Workshop on Accessible Search Systems, pages 19-26, Geneva. Denkowski, Michael and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, pages 376-380, Baltimore, MD.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The use of a psycholinguistic database in the simplification of text for aphasic readers. Linguistic Databases", "authors": [ { "first": "Siobhan", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "John", "middle": [], "last": "Tait", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "161--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devlin, Siobhan and John Tait. 1998. The use of a psycholinguistic database in the simplification of text for aphasic readers. Linguistic Databases, 161-173, Stanford, CA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning non-isomorphic tree mappings for machine translation", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "316--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eisner, Jason. 2003. Learning non-isomorphic tree mappings for machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 2, ACL '03, pages 205-208, Sapporo. Eom, Soojeong, Markus Dickinson, and Rebecca Sachs. 2012. Sense-specific lexical information for reading assistance. 
In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 316-325, Montr\u00e9al.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "An evaluation of syntactic simplification rules for people with autism", "authors": [ { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Iustin", "middle": [], "last": "Dornescu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, PIT 2014", "volume": "", "issue": "", "pages": "131--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evans, Richard, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, PIT 2014, pages 131-140, Gothenburg.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Comparing methods for the syntactic simplification of sentences in information extraction", "authors": [ { "first": "Richard", "middle": [ "J" ], "last": "Evans", "suffix": "" } ], "year": 2011, "venue": "Literary and Linguistic Computing", "volume": "26", "issue": "4", "pages": "371--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evans, Richard J. 2011. Comparing methods for the syntactic simplification of sentences in information extraction. 
Literary and Linguistic Computing, 26(4):371-388.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Sentence simplification as tree transduction", "authors": [ { "first": "Dan", "middle": [], "last": "Feblowitz", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feblowitz, Dan and David Kauchak. 2013. Sentence simplification as tree transduction. In Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 1-10, Sofia.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Dependency tree based sentence compression", "authors": [ { "first": "Katja", "middle": [], "last": "Filippova", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 1948, "venue": "Proceedings of the Fifth International Natural Language Generation Conference, INLG '08", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filippova, Katja and Michael Strube. 2008. Dependency tree based sentence compression. In Proceedings of the Fifth International Natural Language Generation Conference, INLG '08, pages 25-32, Stroudsburg, PA. Flesch, Rudolph. 1948. A new readability yardstick. 
Journal of Applied Psychology, 32(3):221.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "PPDB: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ganitkevitch, Juri, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pages 758-764, Atlanta, GA.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Creating training corpora for NLG microplanners", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "179--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gardent, Claire, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro- planners. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179-188, Vancouver.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Construction and evaluation of event graphs", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Jan\u0161najner", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Natural Language Engineering", "volume": "21", "issue": "4", "pages": "607--652", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glava\u0161, Goran and Jan\u0160najner. 2015. Construction and evaluation of event graphs. Natural Language Engineering, 21(4):607-652.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Event-centered simplification of news stories", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Student Research Workshop associated with RANLP 2013", "volume": "", "issue": "", "pages": "71--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glava\u0161, Goran and Sanja\u0160tajner. 2013. Event-centered simplification of news stories. In Proceedings of the Student Research Workshop associated with RANLP 2013, pages 71-78, Hissar.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Simplifying lexical simplification: Do we need simplified corpora?", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glava\u0161, Goran and Sanja\u0160tajner. 2015. 
Simplifying lexical simplification: Do we need simplified corpora? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 63-68, Beijing.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Simple or complex? Assessing the readability of basque texts", "authors": [ { "first": "", "middle": [], "last": "Gonzalez-Dios", "suffix": "" }, { "first": "Mar\u00eda", "middle": [], "last": "Itziar", "suffix": "" }, { "first": "Arantza", "middle": [], "last": "Jes\u00fas Aranzabe", "suffix": "" }, { "first": "Haritz", "middle": [], "last": "D\u00edaz De Ilarraza", "suffix": "" }, { "first": ";", "middle": [], "last": "Salaberri", "suffix": "" }, { "first": "", "middle": [], "last": "Dublin", "suffix": "" }, { "first": "", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "David", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Warde-Farley", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2672--2680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gonzalez-Dios, Itziar, Mar\u00eda Jes\u00fas Aranzabe, Arantza D\u00edaz de Ilarraza, and Haritz Salaberri. 2014. Simple or complex? Assessing the readability of basque texts. 
In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 334-344, Dublin. Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27. Curran Associates, Inc., pages 2672-2680.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Japanese news simplification: Task design, data set construction, and analysis of simplified text", "authors": [ { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Tadashi", "middle": [], "last": "Kumano", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Machine Translation Summit XV", "volume": "1", "issue": "", "pages": "17--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goto, Isao, Hideki Tanaka, and Tadashi Kumano. 2015. Japanese news simplification: Task design, data set construction, and analysis of simplified text. In Proceedings of Machine Translation Summit XV, Vol. 
1: MT Researchers' Track, pages 17-31, Miami, FL.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Incorporating copying mechanism in sequence-to-sequence learning", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1631--1640", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu, Jiatao, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Dynamic multi-level multi-task learning for sentence simplification", "authors": [ { "first": "Han", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Ramakanth", "middle": [], "last": "Pasunuru", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "462--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, Han, Ramakanth Pasunuru, and Mohit Bansal. 2018. Dynamic multi-level multi-task learning for sentence simplification. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 462-476, Santa Fe, NM.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Source sentence simplification for statistical machine translation", "authors": [ { "first": "Eva", "middle": [], "last": "Hasler", "suffix": "" }, { "first": "Adri", "middle": [], "last": "De Gispert", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Stahlberg", "suffix": "" }, { "first": "Aurelien", "middle": [], "last": "Waite", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2017, "venue": "Computer Speech & Language", "volume": "45", "issue": "C", "pages": "221--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hasler, Eva, Adri de Gispert, Felix Stahlberg, Aurelien Waite, and Bill Byrne. 2017. Source sentence simplification for statistical machine translation. Computer Speech & Language, 45(C):221-235.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Extracting simplified statements for factual question generation", "authors": [ { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "Noah", "middle": [ "A ;" ], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Abend", "suffix": "" }, { "first": "", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heilman, Michael and Noah A. Smith. 2010. Extracting simplified statements for factual question generation. In Proceedings of the Third Workshop on Question Generation, pages 11-20, Pittsburgh, PA. Hershcovich, Daniel, Omri Abend, and Ari Rappoport. 2017. 
A transition-based directed acyclic graph parser for UCCA. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127-1138, Vancouver. Hochreiter, Sepp and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Psycholinguistic models of sentence processing improve sentence readability ranking", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Howcroft", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "1", "issue": "", "pages": "958--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howcroft, David M. and Vera Demberg. 2017. Psycholinguistic models of sentence processing improve sentence readability ranking. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 958-968, Valencia.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Aligning sentences from standard Wikipedia to simple Wikipedia", "authors": [ { "first": "William", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "211--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwang, William, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from standard Wikipedia to simple Wikipedia. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 211-217, Denver, CO.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "The distribution of the flora in the alpine zone", "authors": [ { "first": "Paul", "middle": [], "last": "Jaccard", "suffix": "" } ], "year": 1912, "venue": "The New Phytologist", "volume": "11", "issue": "2", "pages": "37--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaccard, Paul. 1912. The distribution of the flora in the alpine zone. The New Phytologist, 11(2):37-50.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Sentence reduction for automatic text summarization", "authors": [ { "first": "Hongyan", "middle": [], "last": "Jing", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Sixth Conference on Applied Natural Language Processing, ANLC '00", "volume": "", "issue": "", "pages": "310--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing, Hongyan. 2000. Sentence reduction for automatic text summarization. In Proceedings of the Sixth Conference on Applied Natural Language Processing, ANLC '00, pages 310-315, Stroudsburg, PA.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [], "year": null, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Google's multilingual neural machine translation system: Enabling zero-shot translation. 
Transactions of the Association for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Building a monolingual parallel corpus for text simplification using sentence similarity based on alignment between word embeddings", "authors": [ { "first": "Tomoyuki", "middle": [], "last": "Kajiwara", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1147--1158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kajiwara, Tomoyuki and Mamoru Komachi. 2016. Building a monolingual parallel corpus for text simplification using sentence similarity based on alignment between word embeddings. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1147-1158, Osaka.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "A semantic and syntactic text simplification tool for health content", "authors": [ { "first": "Sasikiran", "middle": [], "last": "Kandula", "suffix": "" }, { "first": "Dorothy", "middle": [], "last": "Curtis", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Zeng-Treitler", "suffix": "" } ], "year": 2010, "venue": "AMIA Annual Symposium Proceedings", "volume": "", "issue": "", "pages": "366--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kandula, Sasikiran, Dorothy Curtis, and Qing Zeng-Treitler. 2010. A semantic and syntactic text simplification tool for health content. 
In AMIA Annual Symposium Proceedings, pages 366-370, Washington, DC.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Improving text simplification language modeling using unsimplified text data", "authors": [ { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1537--1546", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kauchak, David. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1537-1546, Sofia.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Derivation of new readability formulas (automated readability index, fog count and Flesch reading ease formula) for Navy enlisted personnel", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Kincaid", "suffix": "" }, { "first": "R", "middle": [ "P" ], "last": "Fishburne", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Rogers", "suffix": "" }, { "first": "B", "middle": [ "S" ], "last": "Chissom", "suffix": "" } ], "year": 1975, "venue": "Chief of Naval Technical Training: Naval Air Station Memphis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kincaid, J. P., R. P. Fishburne, R. L. Rogers, and B. S. Chissom. 1975. 
Derivation of new readability formulas (automated readability index, fog count and Flesch reading ease formula) for Navy enlisted personnel, Technical Report 8-75, Chief of Naval Technical Training: Naval Air Station Memphis.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Building a German/simple German parallel corpus for automatic text simplification", "authors": [ { "first": "David", "middle": [], "last": "Klaper", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Ebling", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Volk", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaper, David, Sarah Ebling, and Martin Volk. 2013. Building a German/simple German parallel corpus for automatic text simplification. In Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 11-19, Sofia.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Text simplification for information-seeking applications", "authors": [ { "first": "Beata", "middle": [], "last": "Klebanov", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Beigman", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Knight", "suffix": "" }, { "first": "", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE", "volume": "3290", "issue": "", "pages": "735--747", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klebanov, Beata Beigman, Kevin Knight, and Daniel Marcu. 2004. Text simplification for information-seeking applications. 
In Proceedings of On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE, LNCS, number 3290, Springer, Berlin, Heidelberg, pages 735-747.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, Guillaume, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. CoRR, abs/1701.02810.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "DSIM, a Danish parallel corpus for text simplification", "authors": [ { "first": "Sigrid", "middle": [], "last": "Klerke", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC'12", "volume": "", "issue": "", "pages": "4015--4018", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klerke, Sigrid and Anders S\u00f8gaard. 2012. DSIM, a Danish parallel corpus for text simplification.
In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC'12, pages 4015-4018, Istanbul.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [], "year": 2004, "venue": "Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 177-180, Stroudsburg, PA. Lin, Chin-Yew. 2004. ROUGE: A package for automatic evaluation of summaries.
In Workshop on Text Summarization Branches Out, pages 74-81, Barcelona.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Lisbon", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "881--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luong, Thang, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon. Mallinson, Jonathan, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893, Valencia.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Lexico-syntactic text simplification and compression with typed dependencies", "authors": [ { "first": "Angrosh", "middle": [], "last": "Mandya", "suffix": "" }, { "first": "Tadashi", "middle": [], "last": "Nomoto", "suffix": "" }, { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1996--2006", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandya, Angrosh, Tadashi Nomoto, and Advaith Siddharthan. 2014. Lexico-syntactic text simplification and compression with typed dependencies. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1996-2006, Dublin.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Reference-less quality estimation of text simplification systems", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Humeau", "suffix": "" }, { "first": "Pierre-Emmanuel", "middle": [], "last": "Mazar\u00e9", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "De", "suffix": "" }, { "first": "La", "middle": [], "last": "Clergerie", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA)", "volume": "", "issue": "", "pages": "29--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, Louis, Samuel Humeau, Pierre-Emmanuel Mazar\u00e9,\u00c9ric de La Clergerie, Antoine Bordes, and 
Beno\u00eet Sagot. 2018. Reference-less quality estimation of text simplification systems. In Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA), pages 29-38, Tilburg.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Facilitating reading comprehension through text structure manipulation, Bolt", "authors": [ { "first": "Jana", "middle": [ "M" ], "last": "Mason", "suffix": "" }, { "first": "Janet", "middle": [ "R" ], "last": "Kendall", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mason, Jana M. and Janet R. Kendall. 1978. Facilitating reading comprehension through text structure manipulation, Bolt, Beranek and Newman, Inc., Cambridge, MA; Illinois University, Urbana. Center for the Study of Reading.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Automated Evaluation of Text and Discourse with Coh-Metrix", "authors": [ { "first": "Danielle", "middle": [ "S" ], "last": "Mcnamara", "suffix": "" }, { "first": "C", "middle": [], "last": "Arthur", "suffix": "" }, { "first": "Philip", "middle": [ "M" ], "last": "Graesser", "suffix": "" }, { "first": "Zhiqiang", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "", "middle": [], "last": "Cai", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McNamara, Danielle S., Arthur C. Graesser, Philip M. McCarthy, and Zhiqiang Cai. 2014. Automated Evaluation of Text and Discourse with Coh-Metrix. 
Cambridge University Press, New York, NY.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Character n-gram tokenization for European language text retrieval", "authors": [ { "first": "Paul", "middle": [], "last": "McNamee", "suffix": "" }, { "first": "James", "middle": [], "last": "Mayfield", "suffix": "" } ], "year": 2004, "venue": "Information Retrieval", "volume": "7", "issue": "1-2", "pages": "73--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "McNamee, Paul and James Mayfield. 2004. Character n-gram tokenization for European language text retrieval. Information Retrieval, 7(1-2):73-97.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Workshop at", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of Workshop at 2013", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Confidence-driven rewriting for improved translation", "authors": [ { "first": "Shachar", "middle": [], "last": "Mirkin", "suffix": "" }, { "first": "Sriram", "middle": [], "last": "Venkatapathy", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" } ], "year": 2013, "venue": "XIV MT Summit", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirkin, Shachar, Sriram Venkatapathy, and Marc Dymetman. 2013. Confidence-driven rewriting for improved translation.
In XIV MT Summit, pages 257-264, Nice.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Exploring the effects of sentence simplification on Hindi to English machine translation system", "authors": [ { "first": "Kshitij", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Ankush", "middle": [], "last": "Soni", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Dipti", "middle": [], "last": "Sharma", "suffix": "" }, { "first": ";", "middle": [], "last": "Ats-Ma ; Dublin", "suffix": "" }, { "first": "Tsendsuren", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Yu ; Narayan", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids", "volume": "1", "issue": "", "pages": "435--445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mishra, Kshitij, Ankush Soni, Rahul Sharma, and Dipti Sharma. 2014. Exploring the effects of sentence simplification on Hindi to English machine translation system. In Proceedings of the Workshop on Automatic Text Simplification -Methods and Applications in the Multilingual Society (ATS-MA 2014), pages 21-29, Dublin. Munkhdalai, Tsendsuren and Hong Yu. 2017. Neural semantic encoders. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 397-407, Valencia. Napoles, Courtney and Mark Dredze. 2010. Learning simple Wikipedia: A cogitation in ascertaining abecedarian language. In Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids, pages 42-50, Los Angeles, CA. Narayan, Shashi and Claire Gardent. 2014. 
Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 435-445, Baltimore, MD.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Unsupervised sentence simplification using deep semantics", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 9th International Natural Language Generation conference", "volume": "", "issue": "", "pages": "111--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narayan, Shashi and Claire Gardent. 2016. Unsupervised sentence simplification using deep semantics. In Proceedings of the 9th International Natural Language Generation conference, pages 111-120, Edinburgh.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Split and rephrase", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narayan, Shashi, Claire Gardent, Shay B. Cohen, and Anastasia Shimorina. 2017. Split and rephrase. 
CoRR, abs/1707.06971.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "A sentence simplification system for improving relation extraction", "authors": [ { "first": "Christina", "middle": [], "last": "Niklaus", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Bermeitinger", "suffix": "" }, { "first": "Siegfried", "middle": [], "last": "Handschuh", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "" }, { "first": ";", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Liviu", "middle": [ "P" ], "last": "Dinu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations", "volume": "2", "issue": "", "pages": "85--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niklaus, Christina, Bernhard Bermeitinger, Siegfried Handschuh, and Andr\u00e9 Freitas. 2016. A sentence simplification system for improving relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 170-174, Osaka. Nisioi, Sergiu, Sanja\u0160tajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91, Vancouver.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, Franz Josef and Hermann Ney. 2004. 
The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Basic English: A General Introduction with Rules and Grammar", "authors": [ { "first": "Charles", "middle": [], "last": "Ogden", "suffix": "" }, { "first": "", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1930, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ogden, Charles Kay. 1930. Basic English: A General Introduction with Rules and Grammar. Kegan Paul, Trench, Trubner & Co.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Text simplification as tree transduction", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology", "volume": "", "issue": "", "pages": "116--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paetzold, Gustavo and Lucia Specia. 2013. Text simplification as tree transduction. In Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology, pages 116-125, Fortaleza.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Lexical simplification with neural ranking", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "34--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paetzold, Gustavo and Lucia Specia. 2017a. Lexical simplification with neural ranking. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 34-40, Valencia.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Massalign: Alignment and annotation of comparable documents", "authors": [ { "first": "Gustavo", "middle": [ "H" ], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Fernando Alva-Manchego", "suffix": "" }, { "first": ";", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Gustavo", "middle": [ "H" ], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IJCNLP 2017", "volume": "60", "issue": "", "pages": "549--593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paetzold, Gustavo H., Fernando Alva-Manchego, and Lucia Specia. 2017. Massalign: Alignment and annotation of comparable documents. In Proceedings of the IJCNLP 2017, System Demonstrations, pages 1-4, Taipei. Paetzold, Gustavo H. and Lucia Specia. 2017b. A survey on lexical simplification. Journal of Artificial Intelligence Research, 60:549-593.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Vicinity-driven paragraph and sentence alignment for comparable corpora", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "", "middle": [], "last": "Henrique", "suffix": "" }, { "first": "U", "middle": [ "K" ], "last": "Sheffield", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Henrique", "suffix": "" }, { "first": "", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paetzold, Gustavo Henrique. 2016. Lexical Simplification for Non-Native English Speakers. Ph.D. 
thesis, University of Sheffield, Sheffield, UK. Paetzold, Gustavo Henrique and Lucia Specia. 2016. Vicinity-driven paragraph and sentence alignment for comparable corpora. CoRR, abs/1612.04113.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "BLEU: A method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Philadelphia, PA.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Natural Language Processing Tools for Reading Level Assessment and Text Simplification for Bilingual Education", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavlick, Ellie and Chris Callison-Burch. 2016. Simple PPDB: A paraphrase database for simplification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 143-148, Berlin. Petersen, Sarah E. 2007. 
Natural Language Processing Tools for Reading Level Assessment and Text Simplification for Bilingual Education. Ph.D. thesis, University of Washington, AAI3275902.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Text simplification for language learners: A corpus analysis", "authors": [ { "first": "Sarah", "middle": [ "E" ], "last": "Petersen", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Speech and Language Technology for Education Workshop", "volume": "", "issue": "", "pages": "69--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petersen, Sarah E. and Mari Ostendorf. 2007. Text simplification for language learners: A corpus analysis. In Proceedings of the Speech and Language Technology for Education Workshop, SLaTE 2007, pages 69-72, Farmington, PA.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Post, Matt. 2018. A call for clarity in reporting BLEU scores. 
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Joshua 5.0: Sparser, better, faster, server", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Orland", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Weese", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Eighth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "206--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Post, Matt, Juri Ganitkevitch, Luke Orland, Jonathan Weese, Yuan Cao, and Chris Callison-Burch. 2013. Joshua 5.0: Sparser, better, faster, server. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 206-212, Sofia.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "The language structure of deaf children. The Volta Review", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Quigley", "suffix": "" }, { "first": "D", "middle": [], "last": "Power", "suffix": "" }, { "first": "M", "middle": [], "last": "Steinkamp", "suffix": "" } ], "year": 1977, "venue": "", "volume": "79", "issue": "", "pages": "73--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quigley, S. P., D. Power, and M. Steinkamp. 1977. The language structure of deaf children. 
The Volta Review, 79(2):73-84.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Sequence level training with recurrent neural networks", "authors": [ { "first": "Marc", "middle": [ "'" ], "last": "Ranzato", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Aurelio", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Auli", "suffix": "" }, { "first": "", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ranzato, Marc'Aurelio, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "A structured review of the validity of BLEU", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "3", "pages": "393--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reiter, Ehud. 2018. A structured review of the validity of BLEU. 
Computational Linguistics, 44(3):393-401.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Frequent words improve readability and short words improve understandability for people with dyslexia", "authors": [ { "first": "Luz", "middle": [], "last": "Rello", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Dempere-Marco", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2013, "venue": "Human-Computer Interaction -INTERACT 2013: 14th IFIP TC 13 International Conference", "volume": "", "issue": "", "pages": "203--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rello, Luz, Ricardo Baeza-Yates, Laura Dempere-Marco, and Horacio Saggion. 2013a. Frequent words improve readability and short words improve understandability for people with dyslexia. In Human-Computer Interaction - INTERACT 2013: 14th IFIP TC 13 International Conference, pages 203-219, Cape Town.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Dyswebxia 2.0!: More accessible text for people with dyslexia", "authors": [ { "first": "Luz", "middle": [], "last": "Rello", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Bayarri", "suffix": "" }, { "first": "Azuki", "middle": [], "last": "G\u00f2rriz", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Gaurang", "middle": [], "last": "Kanvinde", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Bott", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Carlini", "suffix": "" }, { "first": "Vasile", "middle": [], "last": "Topac", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, W4A '13", "volume": "25", 
"issue": "", "pages": "1--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rello, Luz, Clara Bayarri, Azuki G\u00f2rriz, Ricardo Baeza-Yates, Saurabh Gupta, Gaurang Kanvinde, Horacio Saggion, Stefan Bott, Roberto Carlini, and Vasile Topac. 2013b. \"Dyswebxia 2.0!: More accessible text for people with dyslexia.\" In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, W4A '13, pages 25:1-25:2, Rio de Janeiro.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "The effects of syntax on the reading comprehension of hearing-impaired children. The Volta Review", "authors": [ { "first": "N", "middle": [ "L" ], "last": "Robbins", "suffix": "" }, { "first": "C", "middle": [], "last": "Hatcher", "suffix": "" } ], "year": 1981, "venue": "", "volume": "83", "issue": "", "pages": "105--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robbins, N. L. and C. Hatcher. 1981. The effects of syntax on the reading comprehension of hearing-impaired children. The Volta Review, 83(2):105-115.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Automatic text simplification", "authors": [ { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2017, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "10", "issue": "1", "pages": "1--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saggion, Horacio. 2017. Automatic text simplification. 
Synthesis Lectures on Human Language Technologies, 10(1):1-137.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "SimPA: A sentence-level simplification corpus for the public administration domain", "authors": [ { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Gustavo", "middle": [ "H" ], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "4333--4338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scarton, Carolina, Gustavo H. Paetzold, and Lucia Specia. 2018a. SimPA: A sentence-level simplification corpus for the public administration domain. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 4333-4338, Miyazaki.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Text simplification from professionally produced corpora", "authors": [ { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Gustavo", "middle": [ "H" ], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scarton, Carolina, Gustavo H. Paetzold, and Lucia Specia. 2018b. 
Text simplification from professionally produced corpora.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Learning simplifications for specific target audiences", "authors": [ { "first": "", "middle": [], "last": "Miyazaki", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "2", "issue": "", "pages": "712--718", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 3504-3510, Miyazaki. Scarton, Carolina and Lucia Specia. 2018. Learning simplifications for specific target audiences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 712-718, Melbourne.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Vancouver", "suffix": "" }, { "first": "", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Rico", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Hitschler", "suffix": "" }, { "first": "", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Antonio", 
"middle": [], "last": "Samuel L'aubli", "suffix": "" }, { "first": "", "middle": [], "last": "Valerio Miceli", "suffix": "" }, { "first": "Jozef", "middle": [], "last": "Barone", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Mokry", "suffix": "" }, { "first": "", "middle": [], "last": "Nadejde", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "65--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "See, Abigail, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver. Sennrich, Rico, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L'aubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: A toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65-68, Valencia.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "Auditory comprehension problems in adult aphasic individuals", "authors": [ { "first": "Matthew", "middle": [], "last": "Shardlow", "suffix": "" } ], "year": 1985, "venue": "International Journal of Advanced Computer Science and Applications (IJACSA)", "volume": "9", "issue": "5", "pages": "151--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shardlow, Matthew. 2014. A survey of automated text simplification. International Journal of Advanced Computer Science and Applications (IJACSA), Special Issue on Natural Language Processing 2014, pages 58-70, West Yorkshire, UK. Shewan, Cynthia M. 1985. 
Auditory comprehension problems in adult aphasic individuals. Human Communication Canada, 9(5):151-155.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "Text simplification using typed dependencies: A comparison of the robustness of different generation strategies", "authors": [ { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 13th European Workshop on Natural Language Generation, ENLG '11", "volume": "165", "issue": "", "pages": "259--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharthan, Advaith. 2003. Preserving discourse structure when simplifying text. In Proceedings of the 2003 European Natural Language Generation Workshop, ENLG 2003, pages 103-110, Budapest. Siddharthan, Advaith. 2011. Text simplification using typed dependencies: A comparison of the robustness of different generation strategies. In Proceedings of the 13th European Workshop on Natural Language Generation, ENLG '11, pages 2-11, Stroudsburg, PA. Siddharthan, Advaith. 2014. A survey of research on text simplification. ITL - International Journal of Applied Linguistics, 165(2):259-298.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Syntactic simplification for improving content selection in multi-document summarization", "authors": [ { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2004, "venue": "Simple Wikipedia. 2017a. Wikipedia: How to write Simple English pages. From https:// simple.wikipedia.org/wiki/Wikipedia: How to write Simple English pages", "volume": "2012", "issue": "", "pages": "23--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharthan, Advaith, Ani Nenkova, and Kathleen McKeown. 2004. Syntactic simplification for improving content selection in multi-document summarization. 
In Proceedings of the 20th International Conference on Computational Linguistics, COLING '04, pages 896-902, Geneva. Silveira, Sara Botelho and Ant\u00f3nio Branco. 2012. Enhancing multi-document summaries with sentence simplification. In Proceedings of the 14th International Conference on Artificial Intelligence, ICAI 2012, pages 742-748, Las Vegas, NV. Simple Wikipedia. 2017a. Wikipedia: How to write Simple English pages. From https://simple.wikipedia.org/wiki/Wikipedia:How_to_write_Simple_English_pages. Retrieved January 23, 2017. Simple Wikipedia. 2017b. Wikipedia: Simple English Wikipedia. From https://simple.wikipedia.org/wiki/Wikipedia:Simple_English_Wikipedia. Retrieved January 23, 2017. Smith, David A. and Jason Eisner. 2006. Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies. In Proceedings of the Workshop on Statistical Machine Translation, StatMT '06, pages 23-30, New York, NY.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. 
In Proceedings of Association for Machine Translation in the Americas, pages 223-231, Cambridge, MA.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Translating from complex to simplified sentences", "authors": [ { "first": "L\u00facia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 9th International Conference on Computational Processing of the Portuguese Language, PROPOR'10", "volume": "", "issue": "", "pages": "30--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Specia, L\u00facia. 2010. Translating from complex to simplified sentences. In Proceedings of the 9th International Conference on Computational Processing of the Portuguese Language, PROPOR'10, pages 30-39, Porto Alegre.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Quality estimation for machine translation", "authors": [ { "first": "L\u00facia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Gustavo", "middle": [ "Henrique" ], "last": "Paetzold", "suffix": "" } ], "year": 2018, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "11", "issue": "1", "pages": "1--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Specia, L\u00facia, Carolina Scarton, and Gustavo Henrique Paetzold. 2018. Quality estimation for machine translation. Synthesis Lectures on Human Language Technologies, 11(1):1-162.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Manual de simplifica\u00e7\u00e3o sint\u00e1tica para o portugu\u00eas", "authors": [ { "first": "L\u00facia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Sandra", "middle": [ "Maria" ], "last": "Alu\u00edsio", "suffix": "" }, { "first": "Thiago A. 
Salgueiro", "middle": [], "last": "Pardo", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Specia, L\u00facia, Sandra Maria Alu\u00edsio, and Thiago A. Salgueiro Pardo. 2008. Manual de simplifica\u00e7\u00e3o sint\u00e1tica para o portugu\u00eas, NILC-ICMC-USP, S\u00e3o Carlos, SP, Brasil. Available at http://www.nilc.icmc.usp. br/nilc/download/NILC_TR_08_06.pdf", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "BLEU is not suitable for the evaluation of text simplification", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" }, { "first": ";", "middle": [], "last": "Brussels", "suffix": "" }, { "first": "", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Elior", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Abend", "suffix": "" }, { "first": ";", "middle": [], "last": "Rappoport", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "38--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sulem, Elior, Omri Abend, and Ari Rappoport. 2018a. BLEU is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738-744, Brussels. Sulem, Elior, Omri Abend, and Ari Rappoport. 2018b. Semantic structural evaluation for text simplification. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 685-696, New Orleans, LA. Sun, Hong and Ming Zhou. 2012. Joint learning of a dual SMT system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, ACL '12, pages 38-42, Stroudsburg, PA.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Unsupervised neural text simplification", "authors": [ { "first": "Sai", "middle": [], "last": "Surya", "suffix": "" }, { "first": "Abhijit", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Anirban", "middle": [], "last": "Laha", "suffix": "" }, { "first": "Parag", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Sankaranarayanan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2058--2068", "other_ids": {}, "num": null, "urls": [], "raw_text": "Surya, Sai, Abhijit Mishra, Anirban Laha, Parag Jain, and Karthik Sankaranarayanan. 2019. Unsupervised neural text simplification. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2058-2068, Florence.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Sentence alignment methods for improving text simplification systems", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Franco-Salvador", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Heiner", "middle": [], "last": "Stuckenschmidt", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "97--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja, Marc Franco-Salvador, Simone Paolo Ponzetto, Paolo Rosso, and Heiner Stuckenschmidt. 2017. Sentence alignment methods for improving text simplification systems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 97-102, Vancouver.", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "CATS: A tool for customized alignment of text simplification corpora", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Franco-Salvador", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "3895--3903", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja, Marc Franco-Salvador, Paolo Rosso, and Simone Paolo Ponzetto. 2018. 
CATS: A tool for customized alignment of text simplification corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 3895-3903, Miyazaki.", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "Leveraging event-based semantics for automated text simplification", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" } ], "year": 2017, "venue": "Expert Systems with Applications", "volume": "82", "issue": "", "pages": "383--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja and Goran Glava\u0161. 2017. Leveraging event-based semantics for automated text simplification. Expert Systems with Applications, 82:383-395.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "One step closer to automatic evaluation of text simplification systems", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja, Ruslan Mitkov, and Horacio Saggion. 2014. One step closer to automatic evaluation of text simplification systems. 
In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 1-10, Gothenburg.", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Can text simplification help machine translation?", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Popovic", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 19th Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "230--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja and Maja Popovic. 2016. Can text simplification help machine translation? In Proceedings of the 19th Annual Conference of the European Association for Machine Translation, pages 230-242, Riga.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "Quality estimation for text simplification", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "B\u00e9chera", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Workshop on Quality Assessment for Text Simplification -LREC 2016", "volume": "", "issue": "", "pages": "15--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja, Maja Popovi\u0107, and Hannah B\u00e9chera. 2016. Quality estimation for text simplification. 
In Proceedings of the Workshop on Quality Assessment for Text Simplification -LREC 2016, QATS 2016, pages 15-21, Portoro\u017e.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "Shared task on quality assessment for text simplification", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Workshop on Quality Assessment for Text Simplification -LREC 2016", "volume": "", "issue": "", "pages": "22--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stajner, Sanja, Maja Popovi\u0107, Horacio Saggion, Lucia Specia, and Mark Fishel. 2016. Shared task on quality assessment for text simplification. In Proceedings of the Workshop on Quality Assessment for Text Simplification -LREC 2016, QATS 2016, pages 22-31, Portoro\u017e.", "links": null }, "BIBREF118": { "ref_id": "b118", "title": "CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2016), Napoli. Vajjala, Sowmya and Ivana Lu\u010di\u0107", "authors": [ { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" }, { "first": "Alessio", "middle": [], "last": "Palmero Aprosio", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Saltori", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "297--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tonelli, Sara, Alessio Palmero Aprosio, and Francesca Saltori. 2016. SIMPITIKI: A Simplification corpus for Italian. 
In Proceedings of Third Italian Conference on Computational Linguistics (CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2016), Napoli. Vajjala, Sowmya and Ivana Lu\u010di\u0107. 2018. OneStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 297-304, New Orleans, LA.", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "Assessing the relative reading level of sentence pairs for text simplification", "authors": [ { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "288--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vajjala, Sowmya and Detmar Meurers. 2014a. Assessing the relative reading level of sentence pairs for text simplification. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 288-297, Gothenburg.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Readability assessment for text simplification: From analysing documents to identifying sentential simplifications", "authors": [ { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2014, "venue": "ITL -International Journal of Applied Linguistics", "volume": "165", "issue": "2", "pages": "194--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vajjala, Sowmya and Detmar Meurers. 2014b. Readability assessment for text simplification: From analysing documents to identifying sentential simplifications. 
ITL -International Journal of Applied Linguistics, 165(2):194-222.", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "Readability-based sentence ranking for evaluating text simplification", "authors": [ { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vajjala, Sowmya and Detmar Meurers. 2015. Readability-based sentence ranking for evaluating text simplification, Iowa State University.", "links": null }, "BIBREF122": { "ref_id": "b122", "title": "Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "Hisami", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2007, "venue": "Information Processing and Management", "volume": "43", "issue": "6", "pages": "1606--1618", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanderwende, Lucy, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion. 
Information Processing and Management, 43(6):1606-1618.", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H.Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, Curran Associates, Inc., pages 5998-6008.", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "Sentence simplification for semantic role labeling", "authors": [ { "first": "David", "middle": [], "last": "Vickrey", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "344--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vickrey, David and Daphne Koller. 2008. Sentence simplification for semantic role labeling. 
In Proceedings of ACL-08: HLT, pages 344-352, Columbus, OH.", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "Sentence simplification with memory-augmented neural networks", "authors": [ { "first": "Tu", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Baotian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Tsendsuren", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "79--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vu, Tu, Baotian Hu, Tsendsuren Munkhdalai, and Hong Yu. 2018. Sentence simplification with memory-augmented neural networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 79-85, New Orleans, LA.", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "Facilita: Reading assistance for low-literacy readers", "authors": [ { "first": "Willian", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Arnaldo", "middle": [], "last": "Massami", "suffix": "" }, { "first": "", "middle": [], "last": "Candido Junior", "suffix": "" }, { "first": "Renata", "middle": [], "last": "Vin\u00edcius Rodriguez Uz\u00eada", "suffix": "" }, { "first": "Thiago Alexandre Salgueiro", "middle": [], "last": "Pontin De Mattos Fortes", "suffix": "" }, { "first": "Sandra", "middle": [ "Maria" ], "last": "Pardo", "suffix": "" }, { "first": "", "middle": [], "last": "Alu\u00edsio", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 27th ACM International Conference on Design of Communication, SIGDOC '09", "volume": "", "issue": "", "pages": "29--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Watanabe, Willian 
Massami, Arnaldo Candido Junior, Vin\u00edcius Rodriguez Uz\u00eada, Renata Pontin de Mattos Fortes, Thiago Alexandre Salgueiro Pardo, and Sandra Maria Alu\u00edsio. 2009. Facilita: Reading assistance for low-literacy readers. In Proceedings of the 27th ACM International Conference on Design of Communication, SIGDOC '09, pages 29-36, Bloomington, IN.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wieting, John and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne.", "links": null }, "BIBREF128": { "ref_id": "b128", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Williams, Adina, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, LA.", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "authors": [ { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Machine Learning", "volume": "8", "issue": "", "pages": "229--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Williams, Ronald J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256.", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "Learning to simplify sentences with quasisynchronous grammar and integer programming", "authors": [ { "first": "Kristian", "middle": [], "last": "Woodsend", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": ";", "middle": [], "last": "Edinburgh", "suffix": "" }, { "first": "Kristian", "middle": [], "last": "Woodsend", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "927--932", "other_ids": {}, "num": null, "urls": [], "raw_text": "Woodsend, Kristian and Mirella Lapata. 2011a. Learning to simplify sentences with quasi- synchronous grammar and integer programming. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 409-420, Edinburgh. Woodsend, Kristian and Mirella Lapata. 2011b. WikiSimple: Automatic simplification of Wikipedia articles. 
In Proceedings of the 25th National Conference on Artificial Intelligence, pages 927-932, San Francisco, CA.", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "Sentence simplification by monolingual machine translation", "authors": [ { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Bosch", "suffix": "" }, { "first": "", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", "volume": "1", "issue": "", "pages": "1015--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wubben, Sander, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers -Volume 1, ACL '12, pages 1015-1024, Stroudsburg, PA.", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "Problems in current text simplification research: New data can help", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "283--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, Wei, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. 
Transactions of the Association for Computational Linguistics, 3:283-297.", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Quanze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, Wei, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", "links": null }, "BIBREF134": { "ref_id": "b134", "title": "A syntax-based statistical translation model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Andr\u00e1s Kornai, and J\u00e1nos Kert\u00e9sz. 2012. A practical approach to language complexity: A Wikipedia case study", "volume": "7", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamada, Kenji and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, ACL '01, pages 523-530, Stroudsburg, PA. Yasseri, Taha, Andr\u00e1s Kornai, and J\u00e1nos Kert\u00e9sz. 2012. A practical approach to language complexity: A Wikipedia case study. 
PLOS ONE, 7(11):1-8.", "links": null }, "BIBREF135": { "ref_id": "b135", "title": "For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia", "authors": [ { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Danescu-Niculescu-Mizil", "suffix": "" }, { "first": "Lillian", "middle": [ "Lee" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "365--368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yatskar, Mark, Bo Pang, Cristian Danescu- Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 365-368, Los Angeles, CA.", "links": null }, "BIBREF136": { "ref_id": "b136", "title": "Sentence simplification with deep reinforcement learning", "authors": [ { "first": "Xingxing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "595--605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Xingxing and Mirella Lapata. 2017. Sentence simplification with deep reinforce- ment learning. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Copenhagen.", "links": null }, "BIBREF137": { "ref_id": "b137", "title": "Integrating transformer and paraphrase rules for sentence simplification", "authors": [ { "first": "Sanqiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Daqing", "middle": [], "last": "He", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Saptono", "suffix": "" }, { "first": "Bambang", "middle": [], "last": "Parmanto", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10", "volume": "", "issue": "", "pages": "1353--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, Sanqiang, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164-3173, Brussels. Zhu, Zhemin, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree- based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 1353-1361, Stroudsburg, PA.", "links": null }, "BIBREF138": { "ref_id": "b138", "title": "Multi-source neural translation", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "30--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zoph, Barret and Kevin Knight. 2016. Multi-source neural translation. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 30-34, San Diego, CA.", "links": null } }, "ref_entries": { "FIGREF2": { "uris": null, "type_str": "figure", "text": "Model architecture for DRESS. Extracted fromZhang and Lapata (2017).", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Model architecture for PointerCopy+MTL. Extracted fromGuo, Pasunuru, and Bansal (2018).", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Model architecture for UNTS. Extracted fromSurya et al. (2019).", "num": null }, "TABREF0": { "content": "
FKGL = 0.39 (number of words / number of sentences) + 11.8 (number of syllables / number of words) \u2212 15.59 (6)
FRE = 206.835 \u2212 1.015 (number of words / number of sentences) \u2212 84.6 (number of syllables / number of words) (5)
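As a sanity check, the two readability formulas above can be computed directly. The following is a minimal sketch in which the word, sentence, and syllable counts are taken as given, since syllable counting is itself heuristic and tool-dependent:

```python
def fre(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease, Equation (5): higher scores mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level, Equation (6): lower scores mean easier text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A hypothetical text with 100 words, 5 sentences, and 140 syllables:
print(f"FRE:  {fre(100, 5, 140):.1f}")   # FRE:  68.1
print(f"FKGL: {fkgl(100, 5, 140):.1f}")  # FKGL: 8.7
```

Note the inverse relationship between the two scales: splitting long sentences (more sentences for the same words) raises FRE and lowers FKGL, which is one reason both metrics reward the sentence-splitting transformation.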
", "html": null, "type_str": "table", "text": "Kincaid Grade Level (FKGL, Kincaid et al. 1975) is a recalculation of FRE, so as to correspond to grade levels in the United States Equation (6). The coefficients were derived from multiple regression procedures in reading tests of 531 Navy personnel. The lowest possible value is \u22123.40 with no upper bound. The obtained score should be interpreted in an inverse way as for FRE, so that lower values indicate a lower level of difficulty.", "num": null }, "TABREF1": { "content": "
Model | Train Corpus | Test Corpus | BLEU \u2191 | FKGL \u2193
Moses (Brazilian Portuguese) | PorSimples | PorSimples | 60.75 | \u2013
Moses (English) | C&K | C&K | 59.87 | \u2013
Moses-Del | C&K | C&K | 60.46 | \u2013
PBSMT-R | PWKP | PWKP | 43.00 | 13.38
", "html": null, "type_str": "table", "text": "Performance of PBSMT-based sentence simplification models as reported by their authors.", "num": null }, "TABREF2": { "content": "
Model | Train Corpus | Test Corpus | BLEU \u2191 | FKGL \u2193 | SARI \u2191
TSM | PWKP | PWKP | 38.00 | \u2013 | \u2013
TriS | Own | Own | \u2013 | 7.9 | \u2013
SBSMT(PPDB+BLEU) | TurkCorpus | TurkCorpus | 99.05 | 12.88 | 26.05
SBSMT(PPDB+FKBLEU) | TurkCorpus | TurkCorpus | 74.48 | 10.75 | 34.18
SBSMT(PPDB+SARI) | TurkCorpus | TurkCorpus | 72.36 | 10.90 | 37.91
", "html": null, "type_str": "table", "text": "Performance of SBSMT-based sentence simplification models as reported by their authors.", "num": null }, "TABREF4": { "content": "
Model | Train Corpus | Test Corpus | BLEU \u2191 | FKGL \u2193
QG+ILP | AlignedWL | PWKP | 34.0 | 12.36
Revision | WL | PWKP | 42.0 | 10.92
T3+Rank | C&K | C&K* | 34.2 | \u2013
SimpleTT | C&K | C&K | 56.4 | \u2013
", "html": null, "type_str": "table", "text": "Performance of grammar-based sentence simplification models as reported by their authors. A (*) indicates a test set different from the standard.", "num": null }, "TABREF5": { "content": "
Model | Train Corpus | Test Corpus | BLEU \u2191
EV LEX | \u2013 | WIKI | 59.8
EV LEX | \u2013 | NEWS | 74.7
Hybrid | PWKP | PWKP | 53.60
UNSUP | PWKP | PWKP | 38.47
", "html": null, "type_str": "table", "text": "Performance of semantics-assisted sentence simplification models as reported by their authors.", "num": null }, "TABREF6": { "content": "
Model | Train Corpus | Test Corpus | BLEU \u2191 | FKGL \u2193 | SARI \u2191
NTS | EW-SEW | TurkCorpus | 84.51 | \u2013 | 30.65
NTS+SARI | EW-SEW | TurkCorpus | 80.69 | \u2013 | 37.25
DRESS | WikiSmall | PWKP | 34.53 | 7.48 | 27.48
DRESS | WikiLarge | TurkCorpus | 77.18 | 6.58 | 37.08
DRESS | Newsela | Newsela | 23.21 | 4.13 | 27.37
DRESS-LS | WikiSmall | PWKP | 36.32 | 7.55 | 27.24
DRESS-LS | WikiLarge | TurkCorpus | 80.12 | 6.62 | 37.27
DRESS-LS | Newsela | Newsela | 24.30 | 4.21 | 26.63
NSELSTM-B | WikiSmall | PWKP | 53.42 | \u2013 | 17.47
NSELSTM-B | WikiLarge | TurkCorpus | 92.02 | \u2013 | 33.43
NSELSTM-B | Newsela | Newsela | 26.31 | \u2013 | 27.42
NSELSTM-S | WikiSmall | PWKP | 29.72 | \u2013 | 29.75
NSELSTM-S | WikiLarge | TurkCorpus | 80.43 | \u2013 | 36.88
NSELSTM-S | Newsela | Newsela | 22.62 | \u2013 | 29.58
POINTERCOPY+MTL | WikiSmall | PWKP | 29.70 | 6.93 | 28.24
POINTERCOPY+MTL | WikiLarge | TurkCorpus | 81.49 | 7.41 | 37.45
POINTERCOPY+MTL | Newsela | Newsela | 11.86 | 1.38 | 32.98
DMASS+DCSS | WikiLarge | TurkCorpus | \u2013 | 8.04 | 40.45
DMASS+DCSS | WikiLarge | Newsela | \u2013 | 5.17 | 27.28
", "html": null, "type_str": "table", "text": "Train Corpus Test Corpus BLEU \u2191 FKGL \u2193 SARI \u2191", "num": null }, "TABREF8": { "content": "
Model | SARI \u2191 | BLEU \u2191 | SAMSA \u2191 | FKGL \u2193
Reference | 100.00 | 100.00 | 29.91 | 8.07
Hybrid | 54.67 | 53.94 | 36.04 | 10.29
Moses | 48.99 | 55.83 | 34.53 | 11.58
DRESS-LS | 40.44 | 36.32 | 29.43 | 8.52
DRESS | 40.04 | 34.53 | 28.92 | 8.40
TSM | 39.02 | 37.69 | 37.39 | 6.40
UNSUP | 38.41 | 38.28 | 35.81 | 7.75
PBSMT-R | 35.49 | 46.31 | 35.63 | 12.26
QG+ILP | 35.24 | 41.76 | 41.71 | 7.08
EncDecA | 32.26 | 47.93 | 35.28 | 12.12
", "html": null, "type_str": "table", "text": "Performance measured with automatic metrics in the PWKP test set.ModelSARI \u2191 BLEU \u2191 SAMSA \u2191 FKGL \u2193 Performance measured with transformation-specific F1 score in the PWKP test set.", "num": null }, "TABREF9": { "content": "
Model | SARI \u2191 | BLEU \u2191 | SAMSA \u2191 | FKGL \u2193
Reference | 49.88 | 97.41 | 54.00 | 8.76
DMASS-DCSS | 40.42 | 73.29 | 35.45 | 7.66
SBSMT(PPDB+SARI) | 39.96 | 73.08 | 41.41 | 7.89
PBSMT-R | 38.56 | 81.11 | 47.59 | 8.78
NTS+SARI | 37.30 | 79.82 | 45.52 | 8.11
DRESS-LS | 37.27 | 80.12 | 45.94 | 7.58
DRESS | 37.08 | 77.18 | 44.47 | 7.45
Hybrid | 31.40 | 48.97 | 46.68 | 5.12
Table 12
Model | Delete | Move | Replace | Copy
PBSMT-R | 34.18 | 2.64 | 23.65 | 93.50
Hybrid | 49.46 | 7.37 | 1.03 | 70.73
SBSMT-SARI | 28.42 | 1.26 | 37.21 | 92.89
NTS-SARI | 31.10 | 1.58 | 23.88 | 88.19
DRESS-LS | 40.31 | 1.43 | 12.62 | 86.76
DMASS-DCSS | 38.03 | 5.10 | 34.79 | 86.70
", "html": null, "type_str": "table", "text": "Performance measured with automatic metrics in the TurkCorpus test set.TurkCorpus HSplitModel SARI \u2191 BLEU \u2191 SAMSA \u2191 FKGL \u2193 Performance measured with transformation-specific F1 score in the TurkCorpus test set.", "num": null } } } }