{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:41:11.102941Z" }, "title": "Semantic Structural Decomposition for Neural Machine Translation", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "", "affiliation": {}, "email": "eliors@seas.upenn.edu" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Building on recent advances in semantic parsing and text simplification, we investigate the use of semantic splitting of the source sentence as preprocessing for machine translation. We experiment with a Transformer model and evaluate using large-scale crowd-sourcing experiments. Results show a significant increase in fluency on long sentences on an English-to-French setting with a training corpus of 5M sentence pairs, while retaining comparable adequacy. We also perform a manual analysis which explores the tradeoff between adequacy and fluency in the case where all sentence lengths are considered. 1", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Building on recent advances in semantic parsing and text simplification, we investigate the use of semantic splitting of the source sentence as preprocessing for machine translation. We experiment with a Transformer model and evaluate using large-scale crowd-sourcing experiments. Results show a significant increase in fluency on long sentences on an English-to-French setting with a training corpus of 5M sentence pairs, while retaining comparable adequacy. We also perform a manual analysis which explores the tradeoff between adequacy and fluency in the case where all sentence lengths are considered. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we apply a semantic decomposition approach for Neural Machine Translation (NMT) and demonstrate that it can tackle two of the main limitations of state-of-the-art NMT. The first is the translation of long sentences, which is a recurrent issue arising in NMT evaluation (Sutskever et al., 2014; Pouget-Abadie et al., 2014; Su et al., 2018; Currey and Heafield, 2018) . The second limitation is that current research in NMT mostly focuses on translating single sentences to single sentences, and is evaluated accordingly. However, Li and Nenkova (2015) showed that using several sentences to translate a source sentence is sometimes the preferable option. 
Therefore, the simplicity of the output could be an important quality marker for translation.", "cite_spans": [ { "start": 89, "end": 94, "text": "(NMT)", "ref_id": null }, { "start": 284, "end": 308, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF32" }, { "start": 309, "end": 336, "text": "Pouget-Abadie et al., 2014;", "ref_id": "BIBREF23" }, { "start": 337, "end": 353, "text": "Su et al., 2018;", "ref_id": "BIBREF26" }, { "start": 354, "end": 380, "text": "Currey and Heafield, 2018)", "ref_id": "BIBREF8" }, { "start": 544, "end": 565, "text": "Li and Nenkova (2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our model, each source sentence is split (or decomposed) into semantic units, namely scenes, building on the Direct Semantic Splitting algorithm (DSS; Sulem et al., 2018b ) that uses the Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) scheme for semantic representation. Scenes are then translated separately and concatenated for generating the final translation output, which may consist of several sentences.", "cite_spans": [ { "start": 154, "end": 173, "text": "Sulem et al., 2018b", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main experiments use the state-of-the-art Transformer model (Vaswani et al., 2017) in English-to-French settings. We also include experiments with other MT architectures and training set sizes, and evaluate our results using the crowdsourcing protocol of Graham et al. (2016) ( \u00a74) . We obtain a significant increase in fluency on sentences longer than 30 words on the newstest2014 test corpus for English-to-French translation, with a training corpus of 5M sentence pairs, without degrading adequacy. Considering all sentence lengths, we observe a tradeoff between fluency and adequacy. We explore it using a manual analysis, suggesting that the decrease in adequacy is partly due to the loss of cohesion resulting from the splitting ( \u00a76).", "cite_spans": [ { "start": 64, "end": 86, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF33" }, { "start": 259, "end": 285, "text": "Graham et al. (2016) ( \u00a74)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We then proceed to investigate the case of simulated low-resource settings as well as the effect of other sentence splitting methods, including Splitand-Rephrase models (Aharoni and Goldberg, 2018; Botha et al., 2018 ) ( \u00a77). The latter yield considerably lower scores than the use of simple semantic rules, supporting the case for corpusindependent simplification rules.", "cite_spans": [ { "start": 169, "end": 197, "text": "(Aharoni and Goldberg, 2018;", "ref_id": "BIBREF2" }, { "start": 198, "end": 216, "text": "Botha et al., 2018", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sentence segmentation for MT. Segmenting sentences into sub-units, based on punctuation and syntactic structures, and recombining their output has been explored by a number of statistical MT works (Xiong et al., 2009; Goh and Sumita, 2011; Sudoh et al., 2010) . In NMT, Pouget-Abadie et al. (2014) segmented the source using ILP, tackling English-to-French neural translation. They con-cluded that segmentation improves overall translation quality but quality may decrease if the segmented fragments are not well-formed. 
The concatenation may sometimes degrade fluency and result in errors in punctuation and capitalization. Kuang and Xiong (2016) attempted to find split positions such that no reordering will be necessary in the target side for Chinese-English. We differ from these approaches in using a separate text simplification module that can be applied to different kinds of MT systems, and using a semanticallymotivated segmentation. Moreover, we allow the final output to be composed of several sentences, taking into account the structural simplicity aspect of translation quality (Li and Nenkova, 2015) . Text Simplification for MT. Sentence splitting, which goes beyond segmentation and denotes the conversion of one sentence into one or several sentences, is the main structural operation studied in Text Simplification (TS). While MT preprocessing was one of the main motivations for the first automatic simplification system (Chandrasekar et al., 1996) , only few works empirically explored the usefulness of simplification techniques for MT. Mishra et al. (2014) used sentence splitting as a preprocessing step for Hindi-to-English translation with a dependency parser and additional modules for gerunds and shared arguments.\u0160tajner and Popovi\u0107 (2016) performed structural and lexical simplification as part of a preprocessing step for English-to-Serbian MT. Manual correction is carried out before translation.\u0160tajner and Popovi\u0107 (2018) investigated the use of TS as a processing step for NMT, focusing on syntax-based rules that address relative clauses (Siddhathan, 2011) for English-to-German and English-to-Serbian translation. Investigating the translation of 106 out of 1000 sentences that have been modified by simplification, they find that the automatic simplification of English relative clauses can improve translation only if simplifications are quality-controlled or corrected in post-processing. We differ from this work in using semantic rules and by translating independently each of the obtained sentences.", "cite_spans": [ { "start": 197, "end": 217, "text": "(Xiong et al., 2009;", "ref_id": "BIBREF35" }, { "start": 218, "end": 239, "text": "Goh and Sumita, 2011;", "ref_id": "BIBREF11" }, { "start": 240, "end": 259, "text": "Sudoh et al., 2010)", "ref_id": "BIBREF28" }, { "start": 625, "end": 647, "text": "Kuang and Xiong (2016)", "ref_id": "BIBREF17" }, { "start": 1094, "end": 1116, "text": "(Li and Nenkova, 2015)", "ref_id": "BIBREF19" }, { "start": 1443, "end": 1470, "text": "(Chandrasekar et al., 1996)", "ref_id": "BIBREF6" }, { "start": 1561, "end": 1581, "text": "Mishra et al. (2014)", "ref_id": "BIBREF20" }, { "start": 2075, "end": 2093, "text": "(Siddhathan, 2011)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b,a; Langacker, 2008) . It aims to represent the main semantic phe- nomena in the text, abstracting away from syntax. Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration. A scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time. 
Every scene contains one main relation, which can be either a Process or a State. Scenes may contain one or more Participants, interpreted in a broad sense to include locations and destinations. For example, the sentence \"John went home\" has a single scene whose Process is \"went\". The two Participants are \"John\" and \"home\".", "cite_spans": [ { "start": 162, "end": 178, "text": "(Dixon, 2010b,a;", "ref_id": null }, { "start": 179, "end": 195, "text": "Langacker, 2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "Scenes can provide additional information about an established entity (Elaborator scenes), commonly participles or relative clauses. For example, \"(child) who went home\" is an Elaborator scene in \"The child who went home is John\". A scene may also be a Participant in another scene. For example, \"John went home\" in the sentence: \"He said John went home\". In other cases, scenes are annotated as parallel scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "For UCCA parsing, we use TUPA, a transitionbased parser (Hershcovich et al., 2017 ) (specifically, the TUPA BiLST M model).", "cite_spans": [ { "start": 56, "end": 81, "text": "(Hershcovich et al., 2017", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "We build on the DSS rule-based semantic splitting method (Sulem et al., 2018b) , and use Rule #1 which targets parallel scenes. We further explore the use of the additional kinds of scenes in Section 7 for less conservative sentence splitting. In Rule #1, parallel scenes of a given sentence are extracted, split into different sentences and concatenated according to the order of appearance. More formally, given a decomposition of a sentence S into parallel scenes Sc 1 , Sc 2 , \u2022 \u2022 \u2022 Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter:", "cite_spans": [ { "start": 57, "end": 78, "text": "(Sulem et al., 2018b)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "S \u2212\u2192 Sc1|Sc2| \u2022 \u2022 \u2022 |Scn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "As UCCA allows argument sharing between scenes, the rule may duplicate the same sub-span of S across sentences. For example, the rule will convert \"He came back home and played the piano\" into \"He came back home\"|\"He played the piano.\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "Using UCCA-based sentence splitting in our model is motivated by the corpus-based analysis presented in Sulem et al. (2015) where it is shown that a scene in English is generally translated to a scene in French.", "cite_spans": [ { "start": 104, "end": 123, "text": "Sulem et al. (2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Decomposition", "sec_num": "3" }, { "text": "Corpora We experiment on the full English-French training data provided in the WMT setting (Bojar et al., 2014) , which corresponds to about 39M sentence pairs after cleaning. 
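To make Rule #1 concrete, the sketch below extracts the parallel scenes of a parsed sentence and emits one sentence per scene. It is only an illustration, not the DSS implementation used here: it assumes a UCCA passage object as produced by TUPA, and the `ucca` package names it relies on (layer1.LAYER_ID, top_scenes, get_terminals) are recalled from the library and should be verified against the installed version.

```python
from ucca import layer1


def split_parallel_scenes(passage):
    """Rule #1 sketch: one output sentence per top-level parallel scene of `passage`."""
    scene_layer = passage.layer(layer1.LAYER_ID)
    sentences = []
    for scene in scene_layer.top_scenes:
        # Remote edges carry shared arguments, so a participant such as "He" in
        # "He came back home and played the piano" is duplicated in both scenes.
        terminals = sorted(scene.get_terminals(punct=False, remotes=True),
                           key=lambda t: t.position)
        sentences.append(" ".join(t.text for t in terminals))
    return sentences  # to be joined with the sentence delimiter "|"
```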
2 We refer to this setting as the FullTrain Setting. We also experiment on the LessTrain Setting where less training data is involved by removing the large UN Corpus and the 10 9 French-English Corpus from the training data, obtaining a new training corpus of about 5M sentence pairs. The development set is Newstest 2013, that consists of 3000 sentences. The test set is Newstest2014, consisting of 3003 sentences.", "cite_spans": [ { "start": 91, "end": 111, "text": "(Bojar et al., 2014)", "ref_id": "BIBREF3" }, { "start": 176, "end": 177, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Systems To investigate the use of semantic structural decomposition for NMT, we propose a twostep method. First, the original sentence is split into several sentences the DSS rule (see \u00a7 3), implemented with the UCCA software. 3 . Then, each of the obtained sentences is translated separately by the OpenNMT-py implementation of the Transformer (Vaswani et al., 2017) . 4 The translated sentences are concatenated to form the final output. We name the combined system Transformer Sem-Split and compare it to the Transformer Baseline, where no splitting is performed. The pipeline architecture is summarized in Figure 1 .", "cite_spans": [ { "start": 345, "end": 367, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF33" }, { "start": 370, "end": 371, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 610, "end": 618, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "The Transformer is trained for 200K training steps, both in the FullTrain and the LessTrain settings. The development data was used for selecting the model with the highest accuracy (where perplexity was used in cases of ties). The system was evaluated on the development data every 10K steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "For comparison, we also implement our system in the case where the Transformer is replaced by another NMT system, namely a two-layers LSTM model and the Moses phrase-based machine translation system (Koehn et al., 2007) . The neural model, also implemented with OpenNMT-py, is trained and validated in the same way as the Transformer. For Moses, the default model is used in a single setting (LessTrain) with MGIZA word alignment, 5 and KenLM language model (Heafield, 2011) using the monolingual data provided in WMT 2014, and MERT tuning on the development set. Here too we compare the combined systems to baseline systems which do not perform decomposition.", "cite_spans": [ { "start": 199, "end": 219, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF16" }, { "start": 431, "end": 432, "text": "5", "ref_id": null }, { "start": 458, "end": 474, "text": "(Heafield, 2011)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "In addition to the limitations of BLEU evaluation (Papineni et al., 2002) in the context of MT (Callison-Burch et al., 2006 , and much subsequent work), BLEU may correlate negatively with output quality in cases that involve sentence splitting (Sulem et al., 2018a) . We therefore evaluate using crowdsourcing, and follow the protocol proposed by Graham et al. (2016) . Evaluation was carried out using Amazon Mechanical Turk. 
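As an illustration of the two-step pipeline, the sketch below splits a source sentence, translates each piece with the OpenNMT-py command-line translator, and concatenates the results in the original order. It is not the authors' code: the checkpoint path is hypothetical, `splitter` stands for any Rule #1 implementation (such as the scene extraction sketched above), and the `onmt_translate` entry point with its -model/-src/-output flags should be checked against the installed OpenNMT-py version.

```python
import subprocess

MODEL = "transformer_en_fr.pt"  # hypothetical trained OpenNMT-py checkpoint


def translate_lines(lines, model=MODEL,
                    src_path="scenes.src", out_path="scenes.fr"):
    """Translate a batch of source sentences with the OpenNMT-py CLI (assumed flags)."""
    with open(src_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    subprocess.run(["onmt_translate", "-model", model,
                    "-src", src_path, "-output", out_path], check=True)
    with open(out_path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]


def semsplit_translate(source_sentence, splitter):
    """SemSplit pipeline: split into scenes, translate each piece, concatenate."""
    pieces = splitter(source_sentence)       # semantic decomposition (DSS Rule #1)
    translations = translate_lines(pieces)   # each scene translated independently
    return " ".join(translations)            # the output may consist of several sentences
```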
6 See Appendix A for a detailed description.", "cite_spans": [ { "start": 50, "end": 73, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF22" }, { "start": 95, "end": 123, "text": "(Callison-Burch et al., 2006", "ref_id": "BIBREF5" }, { "start": 244, "end": 265, "text": "(Sulem et al., 2018a)", "ref_id": "BIBREF30" }, { "start": 347, "end": 367, "text": "Graham et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Using Crowdsourcing", "sec_num": "4.1" }, { "text": "The results in both FullTrain and LessTrain settings are presented in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In terms of fluency, LessTrain Transformer Sem-Split ranks first in this setting and significantly outperforms the corresponding baseline system (52.5 vs. 42.5, p < 10 \u22124 ). 7 For Moses too, the use of semantic sentence splitting increases fluency (40.2 vs. 38.1), but not significantly. On the other hand, where splitting is used as preprocessing, adequacy scores decrease. In particular, LessTrain Transformer Baseline significantly outperforms the SemSplit counterpart (47.5 vs. 39.8, p < 10 \u22124 ). For sentences longer than 30, SemSplit Transformer in the LessTrain setting significantly outperforms the baseline in terms of fluency (52.1 vs. 39.6, p = 0.02), with only a non-significant (small) degradation in adequacy (41.7 vs. 40.1, p = 0.46).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "To further zoom in on the obtained adequacy scores, we decompose adequacy into two dimensions: preservation of semantic content in the level of scenes and the cohesion of the text (i.e., whether the different scenes are cohesively linked together). To do so, we manually annotate a sample of 150 sentences from the original test set with a similar proportion of sentences in different length categories as the original corpus, and assess the semantic preservation at the scene-level for each of the extracted scenes, as well as the sentence-level cohesion (see Appendix B for the protocol).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Analysis", "sec_num": "6" }, { "text": "For LessTrain, we find that 66.2% of the scenes are deemed equally preserved by the SemSplit and Baseline systems. On the other hand, 20.9% of the scenes are better preserved by the baseline and 10.7% of the scenes are better preserved by the SemSplit system. Averaging over scenes that belong to the same sentence, we find that 68% of the sentences are either better preserved by SemSplit or equally preserved. Regarding cohesion, Sem-Split and the Baseline have a comparable cohesion for 59% of the sentences. The Baseline has a better cohesion for 36% of the sentences, while it is improved by SemSplit in 5% of the cases. The analysis suggests that cohesion has a central role in the decrease (and the non-increase for long sentences) of the adequacy scores. 
Therefore the tradeoff between adequacy and fluency observed when all sentence lengths are considered can be explained by a tradeoff between the cohesion and structural simplicity aspects of translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Analysis", "sec_num": "6" }, { "text": "The different aspects of the translation quality are further illustrated in Table 3 , where two input and output examples are presented, focusing on the LessTrain setting. In example (1), the SemSplit output is similar to the Baseline one at the lexical level but differs in its structure, the SemSplit system behaving as a cross-lingual simplifier at the structural level. On the other hand, linkers such as \"so\" are not translated in the case of SemSplit. In example (2), the word \"interference\" is correctly translated by SemSplit, while it is translated into \"ing\u00e9rence\" (\"intervention\") in French, which is wrong in this context.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Manual Analysis", "sec_num": "6" }, { "text": "We first explore the performance of the proposed system in low-resource machine translation, by following the approach of Hoang et al. (2018) and randomly select 1M and 100K sentence pairs from the entire English-French training set, defining the 1MTrain and 100KTrain settings respectively. Tuning and testing remain as before.", "cite_spans": [ { "start": 122, "end": 141, "text": "Hoang et al. (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "7" }, { "text": "The resulted raw scores for the 1MTrain and 100KTrain settings are presented in Appendix D, Table 4 . We observe that while in 1MTrain, the SemSplit models obtain low results compared to the respective baselines, the SemSplit models obtain higher fluency in 100KTrain, though not significantly.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Additional Experiments", "sec_num": "7" }, { "text": "Second, to further explore the sentence splitting component, we replicate our model, separating both parallel and embedded scenes before the (1) Input:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "7" }, { "text": "Hamas has defended its use of tunnels in the fight against Israel, stating that the aim was to capture Israeli soldiers so they could be exchanged for Palestinian prisoners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Experiments", "sec_num": "7" }, { "text": "Output: Le Hamas a d\u00e9fendu son utilisation de tunnels dans la lutte contre Isra\u00ebl, affirmant que l'objectif\u00e9tait de capturer des soldats isra\u00e9liens afin qu'ils puissent\u00eatre\u00e9chang\u00e9s contre des prisonniers palestiniens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline (LessTrain)", "sec_num": null }, { "text": "Literal translation: Hamas has defended its use of tunnels in the fight against Israel, stating that the aim was to capture Israeli soldiers so they could be exchanged for Palestinian prisoners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline (LessTrain)", "sec_num": null }, { "text": "Output: Le Hamas a d\u00e9fendu son utilisation de tunnels dans la lutte contre Isra\u00ebl. 
Le Hamas a d\u00e9clar\u00e9 que l'objectif\u00e9tait de capturer des soldats isra\u00e9liens. Ils pourraient\u00eatre\u00e9chang\u00e9s contre des prisonniers palestiniens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "Literal translation: Hamas has defended its use of tunnels in the fight against Israel. Hamas stated that the aim was to capture Israeli soldiers. They could be exchanged for Palestinian prisoners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "(2) Input: Douglas Kidd of the National Association of Airline Passengers said he believes interference from the devices is genuine even if the risk is minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "Baseline LessTrainOutput: Douglas., de l'Association nationale des compagnies a\u00e9riennes, a d\u00e9clar\u00e9 qu'il consid\u00e9rait que l'ing\u00e9rence des appareils\u00e9tait r\u00e9elle, m\u00eame si le risque\u00e9tait minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "Literal translation: Douglas., from the Association National of the companies airline, claimed that he believed that the intervention of the devices was genuine, even if the risk is minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "Output: Douglas., de l'Association nationale des compagnies a\u00e9riennes, a d\u00e9clar\u00e9 qu' il estimait que l'interf\u00e9rence avec les appareils\u00e9tait r\u00e9elle. Le risque est minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "Literal translation: Douglas., from the Association national of the companies airline, claimed that he believed the interference with the devices was genuine. The risk is minimal. translation. We use Rule #2 from the DSS system (Sulem et al., 2018b) addressing Elaborator scenes (See Appendix C), which we further extend to also include Participant scenes. We denote the resulting system with Transformer SemSplit 1+2 . We also compare the model with two additional sentence splitting systems, where DSS is replaced with the Seq2Seq Copy 512 model for Split-and-Rephrase (Aharoni and Goldberg, 2018) trained on the WEB-SPLIT corpus (Narayan et al., 2017 ) (version 1.0), and the same model trained on the WikiSplit corpus (Botha et al., 2018) . Each of the obtained new sentences is translated by the FullTrain Transformer system. Finally the translated sentences are directly concatenated. The resulting systems are denoted with Transformer NeuralWEB-SPLIT and Tranformer NeuralWiki-Split. The results for the FullTrain and LessTrain settings are presented in Table 2 . As in the case where only the first rule is used, adequacy scores decrease following splitting. On the other hand, in this case the SemSplit models do not have higher fluency scores than their corresponding baselines, probably because of the more aggressive splitting compared to #Rule 1 alone. For both adequacy and fluency, the Split-and-Rephrase models obtain very low scores. Observing their outputs, we find many wrong splits and word repetitions at the splitting phase, which affects the final output. 
As this trend is not observed on the standard WEB-SPLIT test corpus, these results may suggest a domain adap-tation effect, which supports the case for corpusindependent sentence splitting.", "cite_spans": [ { "start": 228, "end": 249, "text": "(Sulem et al., 2018b)", "ref_id": "BIBREF31" }, { "start": 632, "end": 653, "text": "(Narayan et al., 2017", "ref_id": "BIBREF21" }, { "start": 722, "end": 742, "text": "(Botha et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1061, "end": 1068, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "SemSplit (LessTrain)", "sec_num": null }, { "text": "This work investigates the application of semantic structural decomposition for NMT, proposing an intermediate way between sentence segmentation used in MT and TS preprocessing, where each of the semantic components is separately translated. Using the Transformer and large-scale crowd-sourcing evaluation, we obtain an increase in fluency on long sentences on an English-to-French setting without significantly lowering adequacy. We further observe increased fluency when evaluating on all the sentences, albeit at the cost of adequacy. Future work concerns the recombination of the output sentences, inserting the linkage between them, so as not to lose semantic content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We would like to thank the annotators for participating in our evaluation experiments and in the UCCA annotation. This work was partially supported by the Israel Science Foundation (grant No.929/17) and by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office.", "cite_spans": [ { "start": 181, "end": 198, "text": "(grant No.929/17)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Aknowledgments", "sec_num": null }, { "text": "Low resource setting ( \u00a77): In each of the adequacy and fluency experiments, 12 systems are involved: the Transformer SemSplit, LSTM Sem-Split and Moses SemSplit systems and their corresponding baselines, each implemented in both 1MTrain and 100KTrain settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aknowledgments", "sec_num": null }, { "text": "Splitting exploration setting ( \u00a77): In each of the adequacy and fluency experiments, we include 12 systems: 6 Transformer systems, namely Transformer SemSplit 1+2 in both FullTrain and LessTrain settings, the corresponding baselines, as well as the two neural splitting systems, 4 LSTM systems (LSTM SemSplit 1+2 in the two settings and the corresponding baselines) and 2 phrasebased systems (Moses in the LessTrain setting and its corresponding baseline).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aknowledgments", "sec_num": null }, { "text": "Cleaning, tokenization and truecasing as well as detokenization and detruecasing of the outputs are performed using the Moses tools: http://www.statmt.org/moses/.3 https://github.com/danielhers/ucca", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/OpenNMT/OpenNMT-py 5 https://github.com/moses-smt/mgiza 6 https://www.mturk.com/ 7 Significance is computed using the Wilcoxon one-sided rank sum test applied on the standardized scores, followingGraham et al. 
(2016).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.mturk.com/ 9 https://github.com/ygraham/ crowd-alone", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We follow the protocol proposed by Graham et al. (2016) for evaluation via Amazon Mechan-ical Turk 8 and use their pre-processing and postprocessing software 9 . Adequacy (where the output is compared to the reference) and fluency (where only the output appears) are evaluated independently, according to a 100-point slider, in different experiments. Each of the experiments is composed of 10 HITs (Human Intelligence Tasks) where each HIT includes 100 French sentences which are compared to the reference in the case of adequacy and separately evaluated in the case of fluency. These 100 sentences include 70 MT system outputs extracted randomly from the test set, 10 reference translations, corresponding to 10 of the 70 system outputs, 10 bad reference translations, corresponding to a different 10 of the 70 system outputs and 10 repeat MT system outputs, drawn from the remaining 50 of the original 70 system outputs. The role of the references, bad references and repeat outputs is to control the quality of the evaluation and to not consider ratings from annotators who don't pass the threshold, based on these two main assumptions (see (Graham et al., 2016) for more details): A: When a consistent judge is presented with a set of assessments for translations from two systems, one of which is known to produce better translations than the other, the score sample of the better system will be significantly greater than that of the inferior system. B: When a consistent judge is presented with a set of repeat assessments, the score sample across the initial presentations will not be significantly different from the score sample across the second presentations. We here require that each HIT will be answered by 10 different annotators, who are self-assessed native French speakers.Main setting ( \u00a74 and \u00a75): In each of the two crowdsourcing experiments, which correspond respectively to the evaluation of adequacy and fluency in the case where only Rule #1 is applied, we include 10 systems: 4 Transformer systems, namely Transformer SemSplit in both FullTrain and LessTrain settings and the corresponding baselines; 4 LSTM systems (LSTM SemSplit in the two settings and the corresponding baselines) and 2 phrase-based systems (Moses in the LessTrain setting and its corresponding baseline).", "cite_spans": [ { "start": 35, "end": 55, "text": "Graham et al. (2016)", "ref_id": "BIBREF12" }, { "start": 158, "end": 159, "text": "9", "ref_id": null }, { "start": 1144, "end": 1165, "text": "(Graham et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Appendix A: Crowdsourcing Evaluation Protocol", "sec_num": null }, { "text": "We perform a manual result analysis of an extract of the data, using the following protocol. First, we sub-sample 150 sentences from the original test set (3003 sentences), such that it includes the same proportion of sentences that contain 0 to 10 words (18% of sentences), 10 to 20 words (35% of the sentences), 20 to 30 words (28%), 30 to 40 words (14%), 40 to 50 words (4%) and more than 50 words (1%), as in the original corpus. 
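The length-stratified sub-sampling just described can be sketched as follows; the bin boundaries and sample size come from the description above, while the function and variable names are ours and purely illustrative.

```python
import random

# Length bins (in words) and their share in the original test set, as listed above.
BINS = [(0, 10, 0.18), (10, 20, 0.35), (20, 30, 0.28),
        (30, 40, 0.14), (40, 50, 0.04), (50, float("inf"), 0.01)]


def stratified_sample(sentences, size=150, seed=0):
    """Sample `size` sentences while preserving the corpus proportion of each length bin."""
    rng = random.Random(seed)
    sample = []
    for lo, hi, share in BINS:
        bucket = [s for s in sentences if lo <= len(s.split()) < hi]
        k = min(len(bucket), round(share * size))
        sample.extend(rng.sample(bucket, k))
    return sample
```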
Then, to abstract away from possible parsing errors, the new corpus is manually annotated by a single expert UCCA annotator using the UCCAApp annotation tool . For each of the 150 sentences, each scene segmentation (according to the UCCA manual annotation) is compared to the Transformer SemSplit output and the Transformer Baseline output for this sentence by a another annotator with high proficiency in both English and French (one of the authors of the paper) to analyze the relative preservation of the input scenes in the two systems. We use a 3 point Likert scale for the comparison, assessing if the SemSplit scene preservation is worse, similar or better, compared to the baseline. In the same way, the cohesion of the outputs (defined as the links between their different parts) is also compared using a 3 point Likert scale.Appendix C: Rule #2 in Direct Semantic Splitting (Sulem et al., 2018b) Minimal Centers in UCCA (Abend and Rappoport, 2013) : With respect to units which are not scenes, the category Center denotes the semantic head. For example, \"dogs\" is the center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\". There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers. Sulem et al. (2018b) defined the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.Rule #2: Given a sentence S, the second rule extracts Elaborator scenes and corresponding minimal centers. The Elaborator scenes are then concatenated to the original sentence where the embedded scenes, except for the minimal center they elaborate are removed. Pronouns such as \"who\", \"which\" and \"that\" are also removed.Formally, ifare the Elaborator scenes of S and their corresponding minimal centers, the rewrite iswhere S \u2212 A is S without the unit A. For example, in the case of Elaborator scenes, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".After the extraction of Parallel scenes and Elaborator scenes, the resulting simplified Parallel scenes are placed before the Elaborator scenes. ", "cite_spans": [ { "start": 1318, "end": 1339, "text": "(Sulem et al., 2018b)", "ref_id": "BIBREF31" }, { "start": 1364, "end": 1391, "text": "(Abend and Rappoport, 2013)", "ref_id": "BIBREF0" }, { "start": 1717, "end": 1737, "text": "Sulem et al. (2018b)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Appendix B: Manual Analysis Protocol", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Universal Conceptual Cognitive Annotation (UCCA)", "authors": [ { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2013, "venue": "Proc. of ACL-13", "volume": "", "issue": "", "pages": "228--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omri Abend and Ari Rappoport. 2013. Universal Con- ceptual Cognitive Annotation (UCCA). In Proc. 
of ACL-13, pages 228-238.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "UCCAApp: Web-application for syntactic and semantic phrase-based annotation", "authors": [ { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Yerushalmi", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL'17, System Demonstrations", "volume": "", "issue": "", "pages": "109--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omri Abend, Shai Yerushalmi, and Ari Rappoport. 2017. UCCAApp: Web-application for syntactic and semantic phrase-based annotation. In Proc. of ACL'17, System Demonstrations, pages 109-114.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Split and rephrase: Better evaluation and a stronger baseline", "authors": [ { "first": "Roee", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proc. of ACL'18", "volume": "", "issue": "", "pages": "719--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roee Aharoni and Yoav Goldberg. 2018. Split and rephrase: Better evaluation and a stronger baseline. In Proc. of ACL'18 (Short papers), pages 719-728.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Findings of the 2014 workshop on statistical machine translation", "authors": [ { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Leveling", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Herve", "middle": [], "last": "Saint-Amand", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Ale\u0161", "middle": [], "last": "Tamchyna", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "12--58", "other_ids": { "DOI": [ "10.3115/v1/W14-3302" ] }, "num": null, "urls": [], "raw_text": "Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ale\u0161 Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, Baltimore, Maryland, USA. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning to split and rephrase from Wikipedia edit history", "authors": [ { "first": "Jan", "middle": [ "A" ], "last": "Botha", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "John", "middle": [], "last": "Alex", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2018, "venue": "Proc. 
of EMNLP'18", "volume": "", "issue": "", "pages": "732--737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning to split and rephrase from Wikipedia edit history. In Proc. of EMNLP'18 (Short papers), pages 732-737.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Re-evaluating the role of BLEU in machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2006, "venue": "Proc. of EACL'06", "volume": "", "issue": "", "pages": "249--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of BLEU in machine translation. In Proc. of EACL'06, pages 249-256.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Motivations and methods for sentence simplification", "authors": [ { "first": "Raman", "middle": [], "last": "Chandrasekar", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Doran", "suffix": "" }, { "first": "Bangalore", "middle": [], "last": "Srinivas", "suffix": "" } ], "year": 1996, "venue": "Proc. of COLING'96", "volume": "", "issue": "", "pages": "1041--1044", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for sen- tence simplification. In Proc. of COLING'96, pages 1041-1044.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proc. of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proc. of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multisource syntactic neural machine translation", "authors": [ { "first": "Anna", "middle": [], "last": "Currey", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2018, "venue": "Proc. of EMNLP'18", "volume": "", "issue": "", "pages": "2961--2966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Currey and Kenneth Heafield. 2018. Multi- source syntactic neural machine translation. In Proc. of EMNLP'18, pages 2961-2966.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Basic Linguistic Theory: Grammatical Topics", "authors": [ { "first": "M", "middle": [ "W" ], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2010, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M.W. Dixon. 2010a. 
Basic Linguistic Theory: Grammatical Topics, volume 2. Oxford University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Basic Linguistic Theory: Methodology", "authors": [ { "first": "M", "middle": [ "W" ], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2010, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M.W. Dixon. 2010b. Basic Linguistic Theory: Methodology, volume 1. Oxford University Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Splitting long input sentences for phrase-based statistical machine translation", "authors": [ { "first": "Chooi-Ling", "middle": [], "last": "Goh", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2011, "venue": "Proc. of ANLP'11", "volume": "", "issue": "", "pages": "802--805", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chooi-Ling Goh and Eiichiro Sumita. 2011. Splitting long input sentences for phrase-based statistical ma- chine translation. In Proc. of ANLP'11, pages 802- 805.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Can machine translation be evaluated by the crowd alone?", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2016, "venue": "Natural Language Engineering", "volume": "1", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation be eval- uated by the crowd alone? Natural Language Engi- neering, 1(1):1-28.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "KenLM: Faster and smaller language model queries", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2011, "venue": "Proc. of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proc. of the Sixth Work- shop on Statistical Machine Translation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A transition-based directed acyclic graph parser for UCCA", "authors": [ { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL'17", "volume": "", "issue": "", "pages": "1127--1138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proc. of ACL'17, pages 1127- 1138.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Iterative backtranslation for neural machine translation", "authors": [ { "first": "Cong Duy Vu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proc. 
of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cong Duy Vu Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Proc. of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Moses: open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Buch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proc. of ACL'07 on interactive poster and demonstration sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Buch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proc. of ACL'07 on interactive poster and demon- stration sessions, pages 177-180.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic long sentence segmentation for neural machine translation", "authors": [ { "first": "Shaoui", "middle": [], "last": "Kuang", "suffix": "" }, { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2016, "venue": "Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages", "volume": "", "issue": "", "pages": "162--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaoui Kuang and Deyi Xiong. 2016. Automatic long sentence segmentation for neural machine transla- tion. In Natural Language Understanding and In- telligent Applications: 5th CCF Conference on Nat- ural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, IC- CPOL 2016, pages 162-174.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Cognitive Grammar: A Basic Introduction", "authors": [ { "first": "Ronald", "middle": [ "W" ], "last": "Langacker", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald W. Langacker. 2008. Cognitive Grammar: A Basic Introduction. 
Oxford University Press, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Detecting content-heavy sentences: A cross-language case study", "authors": [ { "first": "Jessi", "middle": [], "last": "Junyi", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP'15", "volume": "", "issue": "", "pages": "1271--1281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junyi Jessi Li and Ani Nenkova. 2015. Detecting content-heavy sentences: A cross-language case study. In Proc. of EMNLP'15, pages 1271-1281.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Exploring the effects of sentence simplification on Hindi to English Machine Translation systems", "authors": [ { "first": "Kshitij", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Ankush", "middle": [], "last": "Soni", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Dipti Misra", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2014, "venue": "Proc. of the Workshop on Automatic Text Simplification: Methods and Applications in the Multilingual Society", "volume": "", "issue": "", "pages": "21--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kshitij Mishra, Ankush Soni, Rahul Sharma, and Dipti Misra Sharma. 2014. Exploring the effects of sentence simplification on Hindi to English Ma- chine Translation systems. In Proc. of the Workshop on Automatic Text Simplification: Methods and Ap- plications in the Multilingual Society, pages 21-29.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Split and rephrase", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" } ], "year": 2017, "venue": "Proc. of EMNLP'17", "volume": "", "issue": "", "pages": "617--627", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shashi Narayan, Claire Gardent, Shay B. Cohen, and Anastasia Shimorina. 2017. Split and rephrase. In Proc. of EMNLP'17, pages 617-627.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL'02", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proc. 
of ACL'02, pages 311-318.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Overcoming the curse of sentence length for neural machine translation with automatic segmentation", "authors": [ { "first": "Jean", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proc. of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "78--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Pouget-Abadie, Dzmitry Bahdanau, Bart van Merri\u00ebnboer, Kyunghyun Cho, and Yoshua Bengio. 2014. Overcoming the curse of sentence length for neural machine translation with automatic segmen- tation. In Proc. of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 78-85.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Text simplification using typed dependencies: A comparison of the robustness of different generation strategies", "authors": [ { "first": "Advaith", "middle": [], "last": "Siddhathan", "suffix": "" } ], "year": 2011, "venue": "Proc. of the 13th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "2--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Advaith Siddhathan. 2011. Text simplification using typed dependencies: A comparison of the robustness of different generation strategies. In Proc. of the 13th European Workshop on Natural Language Gen- eration, pages 2-11. Association of Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Can text simplification help machine translation", "authors": [ { "first": "Maja", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" }, { "first": "", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2016, "venue": "Baltic J. Modern Computing", "volume": "4", "issue": "", "pages": "230--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanja\u0160tajner and Maja Popovi\u0107. 2016. Can text simpli- fication help machine translation. Baltic J. Modern Computing, 4:230-242.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A hierarchyto-sequence attentional neural machine translation", "authors": [ { "first": "Jinsong", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jiali", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mingxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xie", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinsong Su, Jiali Zeng, Deyi Xiong, Yang Liu, Mingx- uan Wang, and Jun Xie. 2018. 
A hierarchy- to-sequence attentional neural machine translation.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Speech, and Language Processing", "authors": [], "year": null, "venue": "", "volume": "26", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 26(3).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Divide and translate: Improving long distance reordering in statisticxal machine translation", "authors": [ { "first": "Katsuhito", "middle": [], "last": "Sudoh", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Hajime", "middle": [], "last": "Tsukada", "suffix": "" }, { "first": "Tsutomu", "middle": [], "last": "Hirao", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" } ], "year": 2010, "venue": "Proc. of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR", "volume": "", "issue": "", "pages": "418--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katsuhito Sudoh, Kevin Duh, Hajime Tsukada, Tsu- tomu Hirao, and Masaaki Nagata. 2010. Divide and translate: Improving long distance reordering in statisticxal machine translation. In Proc. of the Joint 5th Workshop on Statistical Machine Transla- tion and MetricsMATR, pages 418-427.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conceptual annotations preserve structure across translations", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2015, "venue": "Proc. of 1st Workshop on Semantics-Driven Statistical Machine Translation", "volume": "", "issue": "", "pages": "11--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2015. Conceptual annotations preserve structure across translations. In Proc. of 1st Workshop on Semantics- Driven Statistical Machine Translation (S2Mt 2015), pages 11-22.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "BLEU is not suitable for the evaluation of text simplification", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2018, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "738--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. BLEU is not suitable for the evaluation of text sim- plification. In Proc. of EMNLP, pages 738-744.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Simple and effective text simplification using semantic and neural methods", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2018, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "162--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Simple and effective text simplification using seman- tic and neural methods. In Proc. 
of ACL, pages 162- 173.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Qu\u00f4c", "middle": [], "last": "L\u00ea", "suffix": "" } ], "year": 2014, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Qu\u00f4c L\u00ea. 2014. Se- quence to sequence learning with neural networks. In Proc. of NIPS.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NIPS.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Improving machine translation of English relative clauses with automatic text simplification", "authors": [ { "first": "Maja", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" }, { "first": "", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2018, "venue": "Proc. of the INLG 2018 First Workshop on Automatic Text Adaptation (ATA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanja\u0160tajner and Maja Popovi\u0107. 2018. Improving ma- chine translation of English relative clauses with au- tomatic text simplification. In Proc. of the INLG 2018 First Workshop on Automatic Text Adaptation (ATA).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Sub-sentence division for tree-based machine translation", "authors": [ { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Wenwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Haitao", "middle": [], "last": "Mi", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proc. od ACL'09", "volume": "", "issue": "", "pages": "137--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Xiong, Wenwen Xu, Haitao Mi, Yang Liu, and Qun Liu. 2009. Sub-sentence division for tree-based machine translation. In Proc. od ACL'09 (Short pa- pers), pages 137-140.", "links": null } }, "ref_entries": { "TABREF2": { "text": "", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "Input and output examples for the Baseline and SemSplit system in the LessTrain setting, together with an English literal translation of the French outputs.", "content": "
", "type_str": "table", "num": null, "html": null } } } }