{
"paper_id": "N18-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:50:07.363182Z"
},
"title": "QuickEdit: Editing Text & Translations by Crossing Words Out",
"authors": [
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a framework for computerassisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation postediting through simulated post-edits. We also evaluate our model for paraphrasing through a user study.",
"pdf_parse": {
"paper_id": "N18-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a framework for computerassisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation postediting through simulated post-edits. We also evaluate our model for paraphrasing through a user study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Computers can help humans edit text more efficiently. In particular, statistical models are used for that purpose, for instance to help correct spelling mistakes (Brill and Moore, 2000) or suggest likely completions of a sentence (Bickel et al., 2005) . In this work, we rely on statistical learning to enable a computer to rephrase a sentence by only pointing at words that should be avoided. Specifically, we consider the task of reformulating either a sentence, i.e. paraphrasing (Quirk et al., 2004) , or a translation, i.e. translation postediting (Koehn, 2009b) . Paraphrasing reformulates a sentence with different words preserving its meaning, while translation post-editing takes a candidate translation along with the corresponding source sentence and improves it.",
"cite_spans": [
{
"start": 162,
"end": 185,
"text": "(Brill and Moore, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 230,
"end": 251,
"text": "(Bickel et al., 2005)",
"ref_id": "BIBREF3"
},
{
"start": 483,
"end": 503,
"text": "(Quirk et al., 2004)",
"ref_id": "BIBREF30"
},
{
"start": 553,
"end": 567,
"text": "(Koehn, 2009b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposal relies on very simple interactions: a human editor modifies a sentence by selecting tokens they would like the system to replace and no other feedback. Our system then generates a new sentence which reformulates the initial sen-tence by avoiding the word types from the selected tokens. Our approach builds upon neural sequence-to-sequence and introduces a neural network which takes as input a sentence along with token markers. We introduce a novel attentionbased architecture suited to this goal and propose a training procedure based on simulated post-edits on translation bitext ( \u00a73). This approach allows to get substantial modifications of the initial sentence -including deletion, reordering and insertion of multiple words -with limited user effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments ( \u00a74) relies on large scale simulated post-edits. They show that our model outperforms our post-editing baseline by up to 5 BLEU points on WMT'14 English-German and WMT'14 German-English translation. The advantage of our method is also highlighted in monolingual settings, where we analyze the quality of the paraphrases generated by our model in a user study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Before introducing our method ( \u00a73) and its empirical evaluation ( \u00a74), we describe related work in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work builds upon previous research on neural machine translation, machine translation postediting, and computer-assisted editing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Statistical machine translation systems models automatically translate text relying on large corpora of bitext, i.e. corresponding pairs of sentences in the source and target language (Koehn, 2009a) . Recently, machine translation systems based on neural networks have emerged as an effective approach to this problem (Sutskever et al., 2014) . Neural networks are a departure from count-based translation systems, e.g. phrase-based systems, which used to dominate the field (Koehn, 2009a) .",
"cite_spans": [
{
"start": 184,
"end": 198,
"text": "(Koehn, 2009a)",
"ref_id": "BIBREF17"
},
{
"start": 318,
"end": 342,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 475,
"end": 489,
"text": "(Koehn, 2009a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "Research in Neural Machine Translation (NMT) focuses notably on identifying appropri-ate neural architecture. and Suskever et al. (2014) proposed encoder/decoder models. These models consist of a Recurrent Neural Network (RNN) mapping the source sentence sentence into a latent vector (encoder). This vector conditions an RNN language model (decoder) which generates the target sentence (Mikolov et al., 2010; Graves, 2013) . adds attention to these models, which leverages that the explanation for a given target word in generally localized around a few source words. Recently, new architectures have proposed to replace recurrent modules with convolutions (Gehring et al., 2017) or self-attention (Vaswani et al., 2017) to further increase accuracy. These architecture also perform attention at more than one decoder layer, allowing for more complex attention patterns. In this work, we build upon the architecture of Gehring et al. (2017) since this model offers a good trade-off between high accuracy and fast decoding.",
"cite_spans": [
{
"start": 114,
"end": 136,
"text": "Suskever et al. (2014)",
"ref_id": null
},
{
"start": 387,
"end": 409,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 410,
"end": 423,
"text": "Graves, 2013)",
"ref_id": "BIBREF12"
},
{
"start": 658,
"end": 680,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 699,
"end": 721,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 920,
"end": 941,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "Post-editing leverages a machine translation system and enable human translators to edit its output with different levels of computer assistance. This enables improving machine translation outputs with lesser effort than purely manual translation. Green et al. (2014) implement such a system relying on a phrase-based translation system. The system presents an initial translation to the user who can accept a prefix and select among the most likely postfix iteratively. Similar ideas relying on decoding with prefix constrains are common in post-translation (Langlais et al., 2000; Koehn, 2009b; Barrachina et al., 2009) . Recently, these approaches based on left-to-right decoding have been extended to neural machine translation (Peris et al., 2017) .",
"cite_spans": [
{
"start": 248,
"end": 267,
"text": "Green et al. (2014)",
"ref_id": "BIBREF13"
},
{
"start": 559,
"end": 582,
"text": "(Langlais et al., 2000;",
"ref_id": "BIBREF20"
},
{
"start": 583,
"end": 596,
"text": "Koehn, 2009b;",
"ref_id": "BIBREF18"
},
{
"start": 597,
"end": 621,
"text": "Barrachina et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 732,
"end": 752,
"text": "(Peris et al., 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Post-Editing",
"sec_num": "2.2"
},
{
"text": "Closer to our work, Marie and Max (2015) propose light-weight interactions based on accepting/rejecting spans from the output of a statistical machine translation system. The user labels each span that should appear in the final translation. Unmarked spans are assumed to be undesirable and the system removes any entries that could generate those spans from the phrase table. The phrase table is modified such that only positively marked target spans are allowed to explain the cor-responding source phrases.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "Marie and Max (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Post-Editing",
"sec_num": "2.2"
},
{
"text": "Compared to their work, we rely on similar interactions but we do not require the user to label every token as either accepted or rejected. The user only needs to mark a few rejections. Also, we build on a more accurate neural translation model which is not amenable to phrase table editing. Finally, our method is equally applicable to the monolingual editing of regular text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Post-Editing",
"sec_num": "2.2"
},
{
"text": "Automatic post-editing (APE) , i.e. a process which automatically modifies an MT output without human guidance , is also an active area of research. Although APE shares similarities to classical postediting, it is beyond the scope of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Post-Editing",
"sec_num": "2.2"
},
{
"text": "Computer assisted text editing has been introduced with interactive computer terminals (Irons and Djorup, 1972) . Its first achievement was to simplify the insertion, deletion, and copy of text compared to typewriters. Computers then enabled the emergence of computerized language assistance tools such as spelling correctors (Brill and Moore, 2000) or next word suggestions (Bickel et al., 2005) .",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Irons and Djorup, 1972)",
"ref_id": "BIBREF16"
},
{
"start": 326,
"end": 349,
"text": "(Brill and Moore, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 375,
"end": 396,
"text": "(Bickel et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computer-Assisted Text Editing",
"sec_num": "2.3"
},
{
"text": "More recently, research has focused on generating paraphrases (Bannard and Callison-Burch, 2005; Mallinson et al., 2017) , compressing sentences (Rush et al., 2015) or simplifying sentences (Nisioi et al., 2017) . This type of work expands the possibilities for interactive text generation tools, like our work.",
"cite_spans": [
{
"start": 62,
"end": 96,
"text": "(Bannard and Callison-Burch, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 97,
"end": 120,
"text": "Mallinson et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 190,
"end": 211,
"text": "(Nisioi et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computer-Assisted Text Editing",
"sec_num": "2.3"
},
{
"text": "Related to our work, Filippova et al. (2015) considers the task of predicting which tokens can be removed from a sentence without modifying its meaning relying on a recurrent neural network. Our work pursues a different goal since our model does not predict which token to remove, as the user provides this information. Our generation is more involved as our model rephrases the sentences, which includes introducing new words, reordering text, inflecting nouns and verbs, etc. Guu et al. (2017) considers generating text with latent edits. Their goal is not to enable users to control which words need to be changed in an initial sentence but to enable sampling valid English sentences with high lexical overlap around a starting sentence. Contrary to paraphrasing, such samples might introduce negations and other changes impacting meaning. 1: QuickEdit architecture for translation post-editing. The decoder attends to both encodings, one for the source and one for the initial translation (guess) with deletion markers (X on the diagram). Our simplified schema shows one convolutional block and single-hop attention for readability.",
"cite_spans": [
{
"start": 478,
"end": 495,
"text": "Guu et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computer-Assisted Text Editing",
"sec_num": "2.3"
},
{
"text": "QuickEdit is our sequence-to-sequence model for post-editing via delete actions. This model takes as input a source sentence and an initial guess target sentence annotated with change markers. It then aims to improve upon the guess by generating a better target sentence which avoids the marked tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QuickEdit",
"sec_num": "3"
},
{
"text": "Our model builds upon the architecture of Gehring et al. (2017) . This model is a sequence to sequence neural model with attention. Both the encoder and decoder are deep convolutional networks with residual connections. The model performs multihop attention, i.e. each layer of the decoder attends to the encoder outputs. Our architecture choice is motivated by the accuracy of this model along with its computational efficiency.",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "QuickEdit adds a second encoder to represent the annotated guess sentence. It also duplicates every attention layer to allow the decoder to attend both to the source and the guess sentences. Dual attention has been introduced recently in the context of automatic post-editing (Novak et al., 2016; Libovick\u1ef3 and Helcl, 2017) . Our work is however the first work to introduce dual attention in a multihop architecture. Figure 1 illustrates our architecture.",
"cite_spans": [
{
"start": 276,
"end": 296,
"text": "(Novak et al., 2016;",
"ref_id": "BIBREF28"
},
{
"start": 297,
"end": 323,
"text": "Libovick\u1ef3 and Helcl, 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "The encoder of the initial guess takes as input a target sentence t annotated with binary change labels c, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "g = {g i } lg i=1 where \u2200i, g i = (t i , c i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "in which l g denotes the length of the guess, t i is an index in the target vocabulary and c i is a binary variable with 1 indicating a request to change the token by the user and 0 indicating no user preference. The first layer of the encoder maps this sequence to two embedding sequences, i.e. a sequence of target word embeddings and a sequence of positional embeddings. Compared to (Gehring et al., 2017) , we extend the positional embedding to contain two types of vectors, positional vectors associated with positions i where c i = 0 and positional vectors associated with positions i where c i = 1. Like all parameters in the system, both sets of embeddings are learned to maximize the log-likelihood of the training reference sentences conditioned on the source, annotated guess pairs.",
"cite_spans": [
{
"start": 386,
"end": 408,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
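{
"text": "A minimal sketch of the input layer described above, assuming a PyTorch-style implementation (the class name GuessEmbedding and its arguments are hypothetical, not the authors' fairseq code): each guess token t_i receives its word embedding plus a positional embedding drawn from one of two tables, selected by its change marker c_i.\n\nimport torch\nimport torch.nn as nn\n\nclass GuessEmbedding(nn.Module):\n    # sketch: word embedding + marker-dependent positional embedding\n    def __init__(self, vocab_size, max_len, dim):\n        super().__init__()\n        self.word = nn.Embedding(vocab_size, dim)\n        # one table per marker value: rows [0, max_len) for c_i = 0, rows [max_len, 2*max_len) for c_i = 1\n        self.pos = nn.Embedding(2 * max_len, dim)\n        self.max_len = max_len\n\n    def forward(self, tokens, markers):\n        # tokens, markers: LongTensors of shape (batch, length); markers hold c_i in {0, 1}\n        positions = torch.arange(tokens.size(1), device=tokens.device).unsqueeze(0)\n        # shift into the second positional table where the user marked the token\n        pos_index = positions + markers * self.max_len\n        return self.word(tokens) + self.pos(pos_index)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},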
{
"text": "The attention over two sentences is simple. Both source and guess encoders produce a sequence of key and value pairs. We denote the output of the source encoder as {(k s i , v s i )} ls i=1 and the output of the guess encoder as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "{(k g i , v g i )} lg i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": ". At each decoder layer k and time step j, the decoder produces a latent state vector h k j , this vector attends to the output of the source encoder,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "a s i = exp h k j \u2022 k s i / l exp h k j \u2022 k s l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "and the guess encoder,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "a g i = exp h k j \u2022 k g i / l exp h k j \u2022 k g l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "This attention weights are used to summarize the values of the source i a s i v s i and the guess i a s i v g i respectively. The attention module then averages these two vectors 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "2 i a s i v s i + 1 2 i a g i v g i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "and uses this average instead of the source attention output in the next layer (Gehring et al., 2017) .",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
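{
"text": "A minimal sketch of the dual attention step defined above, assuming single-hop dot-product attention in PyTorch (the function names are hypothetical, not the authors' implementation): the decoder state attends separately to the source and guess encoder outputs, and the two summaries are averaged with weight 1/2 each before being passed to the next layer.\n\nimport torch\n\ndef attend(h, keys, values):\n    # h: (batch, dim); keys, values: (batch, length, dim)\n    scores = torch.einsum('bd,bld->bl', h, keys)\n    weights = torch.softmax(scores, dim=-1)\n    return torch.einsum('bl,bld->bd', weights, values)\n\ndef dual_attention(h, src_keys, src_values, guess_keys, guess_values):\n    # average of the source summary and the guess summary, as in the equations above\n    src_summary = attend(h, src_keys, src_values)\n    guess_summary = attend(h, guess_keys, guess_values)\n    return 0.5 * src_summary + 0.5 * guess_summary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},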
{
"text": "Our model is trained on translation bitext by simulating post-edits. Given a bitext corpus, we first train an initial translation system and we then rely on this system to translate the training corpus. This strategy results in three sentences for each example: the source, the guess (i.e. the sentence decoded from the initial system) and the reference sentence. Post-edits are simulated by marking guess tokens which do not appear in the corresponding reference sentence. The dual attention model presented in the above section is then trained. We maximize the loglikelihood of the training reference sentences y given each corresponding source sentence x and the annotated guess g, i.e. we maximize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Inference",
"sec_num": "3.2"
},
{
"text": "L Train : \u03b8 \u2192 (x,y,g)\u2208Train log P (y|x, g, \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Inference",
"sec_num": "3.2"
},
{
"text": "where y refers to the reference sentence, x refers to the source sentence and g is the annotated guess sentence as defined above. Training relies on stochastic gradient descent (Bottou, 1991) , using Nesterov's accelerated gradient with momentum (Nesterov, 1983; Sutskever et al., 2013) . At inference time, we decode through standard leftto-right beam search (Sutskever et al., 2014) . Our decoding strategy for QuickEdit also incorporates hard constraints that prevent the decoder from outputting tokens which are marked in the guess.",
"cite_spans": [
{
"start": 177,
"end": 191,
"text": "(Bottou, 1991)",
"ref_id": "BIBREF5"
},
{
"start": 246,
"end": 262,
"text": "(Nesterov, 1983;",
"ref_id": "BIBREF26"
},
{
"start": 263,
"end": 286,
"text": "Sutskever et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 360,
"end": 384,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Inference",
"sec_num": "3.2"
},
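{
"text": "A minimal sketch of the simulated post-edits and of the hard decoding constraint described above (plain Python with hypothetical helper names, not the authors' implementation): guess tokens whose word type is absent from the reference receive a change marker, and the marked word types are excluded from the output vocabulary during beam search.\n\ndef simulate_post_edit(guess_tokens, reference_tokens):\n    # mark every guess token whose word type does not appear in the reference\n    reference_types = set(reference_tokens)\n    return [0 if tok in reference_types else 1 for tok in guess_tokens]\n\ndef forbidden_types(guess_tokens, markers):\n    # word types the constrained beam search must not generate\n    return {tok for tok, c in zip(guess_tokens, markers) if c == 1}\n\n# usage sketch\nguess = ['the', 'cat', 'sat', 'on', 'a', 'rug']\nreference = ['the', 'cat', 'sat', 'on', 'the', 'mat']\nmarkers = simulate_post_edit(guess, reference)   # [0, 0, 0, 0, 1, 1]\nbanned = forbidden_types(guess, markers)         # {'a', 'rug'}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Inference",
"sec_num": "3.2"
},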
{
"text": "The extension of QuickEdit to a monolingual setting is straightforward: we remove the source encoder and the corresponding attention path. This results in a single encoder model which takes only an annotated guess as input. This model can be trained from pairs of sentences consisting of a machine translation output along with the corresponding reference sentence. Although machine translation bitext are used to create this model training data, it operates solely on target language sentences without requiring a source sentence at test time. In our experiments, we train distinct models for the monolingual setting. We do not consider sharing parameters with the translation models at this point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Monolingual Editing",
"sec_num": "3.3"
},
{
"text": "We evaluate on three translation datasets of increasing size and we report results in both language directions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "IWSLT'14 German-English (Cettolo et al., 2014) , WMT'14 German-English (Luong et al., 2015) , and WMT'14 English-French (Bojar et al., 2014) . Our postediting baseline is our initial neural translation system, complemented with decoding constraints to disallow marked guess words to be considered in the beam. For paraphrasing, we compare our model trained on WMT'14 fr-en to the model of (Mallinson et al., 2017) on the MTC dataset (Huang et al., 2002) following their setup. We relied on WMT'14 fr-en training data motivated by its size 1 .",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "(Cettolo et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 71,
"end": 91,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 120,
"end": 140,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 389,
"end": 413,
"text": "(Mallinson et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 433,
"end": 453,
"text": "(Huang et al., 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "For IWSLT'14 we train on 160K sentence pairs and we validate on a random subset of 7,250 sentence-pairs held-out from the original training corpus. We test on the concatenation of tst2010, tst2011, tst2012, tst2013, dev2010 and dev2012 comprising 6,750 sentence pairs. The vocabulary for this dataset is 24k for English and 36k for German. For WMT'14 English to German and German to English, we use the same setup as Luong et al. (2015) which comprises 4.5M sentence pairs for training and we test on newstest2014. 2 We took 45k sentences out of the training set for validation purpose. As vocabulary, we learn a joint source and target byte-pair encoding (BPE) with 44k types from the training set (Sennrich et al., 2016b,a) . Note that even when using BPE, we solely rely on full word markers, i.e. all the BPE tokens of a given word carry the same binary indication (to be changed/no preference). For WMT'14 English to French and French to English (Bojar et al., 2014) , we also rely on BPE with 44k types. This dataset is larger with 35.4M sentences for training and 26k sentences for validation. We rely on newstest2014 for testing 3 .",
"cite_spans": [
{
"start": 515,
"end": 516,
"text": "2",
"ref_id": null
},
{
"start": 699,
"end": 725,
"text": "(Sennrich et al., 2016b,a)",
"ref_id": null
},
{
"start": 951,
"end": 971,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
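{
"text": "A minimal sketch of the full-word marking described above, assuming the usual '@@' continuation suffix of subword BPE tokenization (the helper name is hypothetical): every BPE token of a word inherits that word's single binary marker.\n\ndef expand_word_markers(bpe_tokens, word_markers):\n    # bpe_tokens use the '@@' suffix on non-final subword pieces\n    # word_markers holds one 0/1 entry per full word\n    token_markers, word_idx = [], 0\n    for tok in bpe_tokens:\n        token_markers.append(word_markers[word_idx])\n        if not tok.endswith('@@'):  # last piece of the current word\n            word_idx += 1\n    return token_markers\n\n# usage sketch: 'unhappiness today' split as 'un@@ happi@@ ness today'\nprint(expand_word_markers(['un@@', 'happi@@', 'ness', 'today'], [1, 0]))\n# -> [1, 1, 1, 0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},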
{
"text": "The model architecture settings are borrowed from (Gehring et al., 2017) . For IWSLT'14 deen and IWSLT'14 en-de, we rely on 4-layer encoders and 3-layer decoders, both with 256 hidden units and kernel width 3. The word embedding for source and target as well as the output matrix have 256 dimensions. For WMT'14 en-de and WMT'14 de-en, both encoders and decoders have 15 layers (9 layers with 512 hidden units, 4 layers with 1,024 units followed by 2 layers with 2,048 units). Input embeddings have 768 dimensions, output embedding have 512. For WMT'14 en-fr and WMT'14 fr-en, both encoders and decoders have 15 layers (6 layers with 512 hidden units, 4 layers with 768 units, 3 layers with 1024 units, followed by two larger layers with 2048 and 4096 units). Similar to the German model, input embeddings have 768 dimensions, output embedding have 512 dimensions. For all datasets, we decode using beam search with a beam of size 5.",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "Our study is based on simulated post-edits, i.e. simulated token deletion actions. We start from machine translation outputs from an initial system in which we label tokens to change automatically. For initial translation, we rely on the convolutional translation system from (Gehring et al., 2017) 4 learned from the training portion of the dataset. For each system output, any word which does not belong to the reference translation is marked to be changed. We perform this operation for the train, validation and test portion of each dataset. The training and validation portion can be used for learning and developing our post-editing system. The test portion is used for evaluation. Table 1 reports our result on this task. Our QuickEdit method strongly outperforms the baseline post-editing system. Both systems access the same information, i.e. a list of deleted word types, which constrains the decoding. QuickEdit adds attention over the initial sentence with rejection marks. This has a big impact on BLEU. On the larger WMT'14 en-de benchmark, the advantage is over 5 BLEU point for both directions. We conjecture that the improvement is lower on the smaller IWSLT data due to over-fitting, i.e. the base system is excellent on the training set which reduces the post-editing opportunities on the training data, therefore limiting the amount of supervised data for training our post-editing system. We show examples of post-editing from the test set of WMT-14 de-en in Table 2 . These examples show the ability of the model to rephrase sentences avoiding the marked tokens while preserving the source meaning. Similar to our experiments on WMT'14 en-de, QuickEdit also reports large improvement with respect to the baseline model on WMT'14 en-fr, with +5.6 points (53.4 vs 47.8).",
"cite_spans": [
{
"start": 276,
"end": 300,
"text": "(Gehring et al., 2017) 4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 688,
"end": 695,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1480,
"end": 1487,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "4.1"
},
{
"text": "One should note that the simulated edits rely on gold information, i.e. crossed-out words are always absent from the reference. Our aim is to simulate a post-editor which might have a sentence close to the reference in mind. This evaluation method allows to conduct large scale experiments without labeling burden. Conducting an interactive post-editing study requires trained editors and interface consideration beyond the scope of this initial work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "4.1"
},
{
"text": "So far, our post-editing setting marked all incorrect words in the guess. We now consider a setting where the simulated post-editor performs less work by marking only a subset of these tokens. This is analogous to a hypothetical online translation service which offers a feature enabling the user to mark parts of a translation to be improved. In addition to marking only a subset of the incorrect tokens at inference time, we also train new models for which the training data also only had a subset of incorrect tokens marked. Specifically, we train three models QE25, QE50, QE100 for which either 25%, 50% or 100% of incorrect guess tokens were marked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partial Feedback",
"sec_num": "4.2"
},
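{
"text": "A minimal sketch of how the partial-feedback training data could be simulated (plain Python, hypothetical helper name, not the authors' implementation): each simulated rejection is kept only with probability p, e.g. p = 0.25 for QE25, p = 0.5 for QE50 and p = 1.0 for QE100.\n\nimport random\n\ndef partial_markers(full_markers, p, seed=None):\n    # keep each simulated rejection (marker value 1) with probability p, drop it otherwise\n    rng = random.Random(seed)\n    return [1 if c == 1 and rng.random() < p else 0 for c in full_markers]\n\n# usage sketch: roughly half of the simulated rejections survive for QE50\nprint(partial_markers([1, 0, 1, 1, 0, 1], p=0.5, seed=0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partial Feedback",
"sec_num": "4.2"
},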
{
"text": "In this setting, we also compare with the baseline model, i.e. the initial translation system augmented with decoding constraints to avoid marked words. Figure 2 plots BLEU as a function of the number of marked words on the validation set of WMT'14 German to English. This curve is obtained by marking at most 1, 2, . . . , 8 words to be changed per sentence, taking into account that the actual number of marked word in a sentence cannot be higher than the number of guess words not present in the reference sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Partial Feedback",
"sec_num": "4.2"
},
{
"text": "Compared to the baseline, there is a small advantage for QuickEdit for 1-2 marked words and a larger improvement when more words are marked. Unsurprisingly, the model trained with fewer marked words (QE25, QE50) performs better when tested with fewer marked words, while QE100 gives the largest improvement with 4 or more marked words. source, it manages to generate sentences which are closer to the reference than the initial sentences, as shown by the BLEU improvement. This shows the ability of the model to paraphrase from deletion constraints. Table 3 shows examples of the system in action from the English test set of WMT-14 fr-en. This examples show that the model can provide synonyms, e.g. essential \u2192 vital, or came after \u2192 followed. The model can also replace tenses when appropriate, e.g. have not waited \u2192 did not wait, or wrote \u2192 had written.",
"cite_spans": [],
"ref_spans": [
{
"start": 550,
"end": 557,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Partial Feedback",
"sec_num": "4.2"
},
{
"text": "Although it is not our primary goal, monolingual QuickEdit can also be used for paraphrasing by pairing it with another model to automatically generate change markers. In that case, the generative model of edit markers replaces the human instructions. Basically, given an input sentence x, the edit model generate a sequence c of binary variables, which indicates whether each word x i of x should input And while the members of Congress cannot agree on whether to continue, several States have not waited. output And while there is no way for Congress to agree on whether to go ahead, several states did not wait. input This is truly essential for our nation. output This is really vital for our nation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
{
"text": "input His case came after that of Corporal Glen Kirkland, who told a parliamentary committee last month that he had been pushed out before being ready because he did not meet the universality of service rule. output His case followed that of Corporal Glen Kirkland, who said to a parliamentary panel last month that he had been forced to go before he was ready because he did not meet the rule of universality of service. input Since the beginning of major fighting in Afghanistan, the army has been struggling to determine what latitude it can grant to injured soldiers who want to remain in the ranks, but who are not fit for battle. output Since the start of major battles in Afghanistan, the army has had a hard time to determine what latitude it can give to injured soldiers who want to stay in the army, but who are not capable of battling. input Mr. Snowden wrote in his letter that he had been subjected to a serious and sustained campaign of persecution , which forced him to leave his country. output Mr Snowden had written in his letter that he had suffered a severe and sustained campaign of persecution that forced him out of his homeland. input Spirit Airlines Inc. applied the first hand baggage charges three years ago, and low-cost Allegiant followed a little later. output Spirit Airlines Inc. introduced the first hand-luggage charge three years ago, and the inexpensive Allegiant followed somewhat later. input \"I've never seen such a fluid boarding procedure in my entire career\"; he says. output \"I have not seen this kind of seamless boarding in my career\"; he said.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
{
"text": "input As a result , there will be no more employees in the plant. output This means that there won't be any employees in the factory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
{
"text": "input Pierre Beaudoin , President and CEO , is confident that Bombardier will meet its target of 300 firm orders before the first aircraft enters commercial service. output Chief Executive Officer Pierre Beaudoin is confident Bombardier can meet its 300 firm order target prior to the first airplane entering commercial services. input Another 35 persons involved in trafficking were sentenced to a total of 153 years' imprisonment for drug trafficking. output Thirty-five other people involved in the traffic were punished with a total of 153 years in prison for drug-related offenses. be edited out (c i = 1) or not (c i = 0). QuickEdit then takes (x, c) and generate a sentence y that paraphrases x following the change markers c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
{
"text": "We use the monolingual QuickEdit model for English trained on WMT-14 fr-en for our paraphrase experiments. We rely on the simplest possible model to generate change markers: for each word type w, we estimate its probability to be reference He said that Sino-Kenyan news agencies had long-term cooperative ties and hoped that the ties could further develop in the new century. human He said the two News Agencies of China and Kenya have friendly relationship over a long period of time. He hoped that this relation could further develop in the new century. paranet He said the two news outlets in China and Kenya have amicably similar relationships to a long period of time. QuickEdit He said that the two news agencies of China and Kenya were friends for a long period of time and hoped that the relationship would continue in the new century. reference Annan urged sharon to ensure israeli forces will \"adopt military tactic and weapons that cause a minimum possible threat to safety of palestinian people and personal properties. \" human Annan called on Sharon to ensure that Israeli security forces \" use weapons and fighting methods that will cause minimum threat to the safety and property of the Palestinian civilians. \" paranet Annan called for Sharon to \" ensure that Israeli security forces deploy weapons and combat methods that endanger security and the property of Palestinian civilians. \" QuickEdit Annan calls on Sharon to \" use weapons and combat practices that will pose a minimum threat to the safety and property of Palestinian civilians. \" reference [Shuttleworth']s space travel has drawn great publicity in South Africa and won the honor of being the most important news event since Mandela's release from prison. human Shuttleworth's space journey has received enormous attention in South Africa and is praised as the most important news since the release of Nelson Mandela from prison. paranet Shuttleworth's journey has received enormous attention in South Africa and is considered the most important news since the release of Nelson Mandela. QuickEdit The Shuttleworth space trip attracted considerable attention in South Africa and is lauded as the most important news since Nelson Mandela was released from jail. edited out P (c i = 1|x i = w) on the QuickEdit training data based on relative frequency counts. For inference, we simply threshold this probability P (c i = 1|x i = w) > \u03c4 to assign change markers. \u03c4 is selected to control how bold paraphrasing should be, i.e. large \u03c4 would yield minor changes, while small \u03c4 would edit the input sentence substantially. We compare our paraphrasing approach with ParaNet (Mallinson et al., 2017) , a paraphrasing neural model based on translation pivoting 5 . We conduct our evaluation on the MTC dataset (Huang et al., 2002) following the setup introduced in the ParaNet paper. This setup consists of 75 human paraphrase pairs (excluding duplicate MTC sentences as well as erroneous paraphrases). The evaluation considers each pair of human paraphrases (x, y). Each paraphrasing model (QuickEdit and ParaNet) generates a paraphrase given x. Then human judgments are collected by showing y and three versions of x, i.e. the original version x, its paraphrase from ParaNet x (p) and its paraphrase from QuickEdit x (q) . For each example, the three sentences x, x (p) , x (q) are shuffled and do not carry any information about their origin. 
The assessor should label whether each version of x is a valid paraphrase of y and should rank them by fluency from 1 most fluent to 3 least fluent.",
"cite_spans": [
{
"start": 2647,
"end": 2671,
"text": "(Mallinson et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 2781,
"end": 2801,
"text": "(Huang et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 3250,
"end": 3253,
"text": "(p)",
"ref_id": null
},
{
"start": 3290,
"end": 3293,
"text": "(q)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
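{
"text": "A minimal sketch of the change-marker model described above (plain Python, hypothetical function names): P(c_i = 1 | x_i = w) is estimated by relative frequency counts over the QuickEdit training data and thresholded with \u03c4 at inference time.\n\nfrom collections import Counter\n\ndef estimate_edit_probs(training_pairs):\n    # training_pairs: iterable of (tokens, markers) pairs from the QuickEdit training data\n    marked, total = Counter(), Counter()\n    for tokens, markers in training_pairs:\n        for tok, c in zip(tokens, markers):\n            total[tok] += 1\n            marked[tok] += c\n    return {w: marked[w] / total[w] for w in total}\n\ndef assign_markers(tokens, edit_probs, tau):\n    # large tau keeps the paraphrase conservative, small tau edits the sentence more boldly\n    return [1 if edit_probs.get(tok, 0.0) > tau else 0 for tok in tokens]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},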
{
"text": "We can evaluate paraphrasing performance at various levels of boldness which we control with the parameter \u03c4 . Bold paraphrasing means that the model needs to generate sentences which differ more from the input x than conservative paraphrasing. In this work, our evaluation relies on a level of boldness comparable to ParaNet 279 Average number of marked tokens BLEU Baseline QE25 QE50 QE100 Figure 2 : Post-editing results as a function of the average number of marked tokens per sentence on WMT'14 de-en validation set (45k sentences). QE25, QE50, QE100 refer to QuickEdit models trained with data where respectively 25, 50 or 100% of the guess tokens not present in the reference were marked to be changed.",
"cite_spans": [],
"ref_spans": [
{
"start": 392,
"end": 400,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
{
"text": "from (Mallinson et al., 2017) . Table 4 reports the results of this experiment. Accuracy measures the fraction of sentences considered valid paraphrases. Fluency measures the number of cases the paraphrase was considered more fluent or as fluent as the source sentence. Boldness measure the fraction of paraphrase tokens that were not in the source. The results highlight the advantages of QuickEdit. The paraphrases from QuickEdit are accurate for 72% of the sentences versus 56% for ParaNet. The fluency of the generation from QuickEdit ranks equally or higher than the human source sentence for 53% of the examples, which compares to 37% for ParaNet. Table 5 shows a few paraphrases from both models. These examples highlight that the boldness operating point chosen by the authors of ParaNet is rather conservative, with few edits per sentence. Nevertheless, QuickEdit advantage is clear, showing that ParaNet often forgets part of the source sentence while QuickEdit does not, e.g. could futher develop in the first example is not expressed by ParaNet but QuickEdit proposes would continue. This tendency to shorten the input can yield an opposite meaning, e.g. in the second example, ParaNet rephrases cause minimum threat as endanger while QuickEdit proposes correctly pose a minimum threat. Examples with less conservative paraphrasing are shown in Table 3 .",
"cite_spans": [
{
"start": 5,
"end": 29,
"text": "(Mallinson et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 654,
"end": 661,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 1357,
"end": 1364,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Paraphrasing",
"sec_num": "4.4"
},
{
"text": "This work proposes QuickEdit, a neural sequence to sequence model that allows one to edit text by simply requesting few initial tokens to be changed. From a marked sentence, the model can generate an edited sentence both in the context of machine translation post-editing (a source sentence is also provided), or in a monolingual setting. In both cases, we assess the impact of the change requests. We show that marking words not present in a hidden reference sentence allow the model to generate text closer to this reference. In the context of post-editing, we conduct simulated postedits, i.e. we mark words absent from the reference as rejected. We show that crossing out a few words per sentence can drastically improve BLEU, even on top of a strong MT system, e.g. BLEU on WMT'14-en-fr moves from 40.2 to 53.4 with QuickEdit post-editing as opposed to 47.8 for the post-editing baseline. In the context of monolingual editing, we show that our system both allow text editing and paraphrasing. For paraphrasing, we outperform a strong model (Mallinson et al., 2017) in a human evaluation on the MTC dataset, both in terms of accuracy (72% vs 53%) and fluency of the generation (53% vs 37%).",
"cite_spans": [
{
"start": 1046,
"end": 1070,
"text": "(Mallinson et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Our work opens several future directions of research. First, we want to extend our evaluation from simulated post-edits to a genuine interactive editing scenario. QuickEdit currently allows only to reject word forms for a whole sentence, not reject them in a specific context. We plan to explore this possibility. Also, QuickEdit could be a good basis for an automatic post-editing system (Chatterjee et al., 2015) . QuickEdit can be applied for multi-step editing, letting the user refine their sentence multiple time. In that case, attending to all previous versions of the sentence would be relevant. Finally, we could also consider offering a richer set of simple edit actions. For instance, we could propose span substitutions to the user, which requires a decoding stage proposing a short list of promising spans and candidate replacements.",
"cite_spans": [
{
"start": 389,
"end": 414,
"text": "(Chatterjee et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Posterior to our experiments,(Wieting and Gimpel, 2017) released an even large dataset that might be used in our setting.2 http://nlp.stanford.edu/projects/nmt 3 http://www.statmt.org/wmt14/ translation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/facebookresearch/ fairseq-py.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We are thankful to the authors of ParaNet for sharing their generations for our evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Marc'Aurelio Ranzato, Sumit Chopra, Roman Novak for helpful discussions. We thank Sergey Edunov, Sam Gross, Myle Ott for writing the fairseq-py toolkit used in our experiments. We thank Jonathan Mallinson, Rico Sennrich, Mirella Lapata, for sharing ParaNet data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 .",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Paraphrasing with bilingual parallel corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "597--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Pro- ceedings of the 43rd Annual Meeting on Associa- tion for Computational Linguistics. Association for Computational Linguistics, pages 597-604.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical approaches to computer-assisted translation",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Barrachina",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Cubel",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Lagarda",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "Tom\u00e1s",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
},
{
"first": "Juan-Miguel",
"middle": [],
"last": "Vilar",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "1",
"pages": "3--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Barrachina, Oliver Bender, Francisco Casacu- berta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes\u00fas Tom\u00e1s, En- rique Vidal, and Juan-Miguel Vilar. 2009. Statistical approaches to computer-assisted translation. Com- putational Linguistics 35(1):3-28.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Predicting sentences using n-gram language models",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Haider",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Scheffer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Bickel, Peter Haider, and Tobias Scheffer. 2005. Predicting sentences using n-gram language models. In Proceedings of the conference on Hu- man Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 193-200.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Findings of the 2014 workshop on statistical machine translation",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Ale\u0161",
"middle": [],
"last": "Tamchyna",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ale\u0161 Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Stochastic gradient learning in neural networks",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of Neuro-N\u00eemes 91. EC2",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 1991. Stochastic gradient learning in neural networks. In Proceedings of Neuro-N\u00eemes 91. EC2, Nimes, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An improved error model for noisy channel spelling correction",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "286--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Robert C Moore. 2000. An improved er- ror model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on Associ- ation for Computational Linguistics. Association for Computational Linguistics, pages 286-293.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Report on the 11th IWSLT evaluation campaign",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Jan Niehues, Sebastian St\u00fcker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proceed- ings of the International Workshop on Spoken Lan- guage Translation, Hanoi, Vietnam.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Exploring the planet of the APEs: a comparative study of state-of-the-art methods for MT automatic post-editing",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Marion",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "156--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajen Chatterjee, Marion Weller, Matteo Negri, and Marco Turchi. 2015. Exploring the planet of the APEs: a comparative study of state-of-the-art meth- ods for MT automatic post-editing. In Proceedings of the 43rd Annual Meeting on Association for Com- putational Linguistics. pages 156-161.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 .",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentence compression by deletion with lstms",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Colmenares",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP'15)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Filippova, Enrique Alfonseca, Carlos Col- menares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empir- ical Methods in Natural Language Processing (EMNLP'15).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional se- quence to sequence learning .",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.0850"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 .",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Human effort and machine learnability in computer aided translation",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sida",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Heer",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1225--1236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green, Sida I Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D Man- ning. 2014. Human effort and machine learnabil- ity in computer aided translation. In EMNLP. pages 1225-1236.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generating sentences by editing prototypes",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Tatsunori",
"middle": [
"B"
],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Oren",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.08878"
]
},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2017. Generating sen- tences by editing prototypes. arXiv preprint arXiv:1709.08878 .",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multiple-translation Chinese corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Shudong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shudong Huang, David Graff, and George Doddington. 2002. Multiple-translation Chinese corpus. Lin- guistic Data Consortium, University of Pennsylva- nia.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A crt editing system",
"authors": [
{
"first": "Edgar",
"middle": [
"T"
],
"last": "Irons",
"suffix": ""
},
{
"first": "Frans",
"middle": [
"M"
],
"last": "Djorup",
"suffix": ""
}
],
"year": 1972,
"venue": "Communications of the ACM",
"volume": "15",
"issue": "1",
"pages": "16--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar T. Irons and Frans M. Djorup. 1972. A crt edit- ing system. Communications of the ACM 15(1):16- 20.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2009a. Statistical machine translation. Cambridge University Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A web-based interactive computer aided translation tool",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Software Demonstrations. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2009b. A web-based interactive com- puter aided translation tool. In Proceedings of the ACL-IJCNLP 2009 Software Demonstrations. Asso- ciation for Computational Linguistics, pages 17-20.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Statistical post-editing of a rule-based machine translation system",
"authors": [
{
"first": "A-L",
"middle": [],
"last": "Lagarda",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Alabau",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Diaz-De Liano",
"suffix": ""
}
],
"year": 2009,
"venue": "North American Chapter of the Association for Computational Linguistics (NAACL). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "217--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A-L Lagarda, Vicente Alabau, Francisco Casacu- berta, Roberto Silva, and Enrique Diaz-de Liano. 2009. Statistical post-editing of a rule-based ma- chine translation system. In North American Chap- ter of the Association for Computational Linguistics (NAACL). Association for Computational Linguis- tics, pages 217-220.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Transtype: a computer-aided translation typing system",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 NAACL-ANLP Workshop on",
"volume": "5",
"issue": "",
"pages": "46--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Langlais, George Foster, and Guy Lapalme. 2000. Transtype: a computer-aided translation typ- ing system. In Proceedings of the 2000 NAACL- ANLP Workshop on Embedded machine translation systems-Volume 5. Association for Computational Linguistics, pages 46-51.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention strategies for multi-source sequence-to-sequence learning",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u1ef3",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.06567"
]
},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Libovick\u1ef3 and Jind\u0159ich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. arXiv preprint arXiv:1704.06567 .",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025 .",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Paraphrasing revisited with neural machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "881--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017. Paraphrasing revisited with neural ma- chine translation. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics. volume 1, pages 881- 893.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Touchbased pre-post-editing of machine translation output",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Aur\u00e9lien",
"middle": [],
"last": "Max",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1040--1045",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie and Aur\u00e9lien Max. 2015. Touch- based pre-post-editing of machine translation out- put. In EMNLP. pages 1040-1045.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Interspeech",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Inter- speech. volume 2, page 3.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A method of solving a convex programming problem with convergence rate o (1/k2)",
"authors": [
{
"first": "Yurii",
"middle": [],
"last": "Nesterov",
"suffix": ""
}
],
"year": 1983,
"venue": "Soviet Mathematics Doklady",
"volume": "27",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yurii Nesterov. 1983. A method of solving a con- vex programming problem with convergence rate o (1/k2). Soviet Mathematics Doklady 27(2).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploring neural text simplification models",
"authors": [
{
"first": "Sergiu",
"middle": [],
"last": "Nisioi",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "\u0160tajner",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Liviu P",
"middle": [],
"last": "Dinu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "85--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergiu Nisioi, Sanja \u0160tajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text sim- plification models. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). volume 2, pages 85-91.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Iterative refinement for machine translation",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Novak",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Novak, Michael Auli, and David Grangier. 2016. Iterative refinement for machine translation .",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Interactive neural machine translation",
"authors": [
{
"first": "\u00c1lvaro",
"middle": [],
"last": "Peris",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Domingo",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "45",
"issue": "",
"pages": "201--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c1lvaro Peris, Miguel Domingo, and Francisco Casacu- berta. 2017. Interactive neural machine translation. Computer Speech & Language 45:201-220.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Monolingual machine translation for paraphrase generation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for para- phrase generation. In Proceedings of the 2004 con- ference on empirical methods in natural language processing.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Alexander",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1509.00685"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, Sumit Chopra, and Jason We- ston. 2015. A neural attention model for ab- stractive sentence summarization. arXiv preprint arXiv:1509.00685 .",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Edinburgh Neural Machine Translation Systems for WMT 16",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh Neural Machine Translation Sys- tems for WMT 16. In Proc. of WMT.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proc. of ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "On the importance of initialization and momentum in deep learning",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Dahl",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1139--1147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, James Martens, George Dahl, and Ge- offrey Hinton. 2013. On the importance of initial- ization and momentum in deep learning. In Interna- tional conference on machine learning. pages 1139- 1147.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems. pages 3104-3112.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762 .",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.05732"
]
},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2017. Push- ing the Limits of Paraphrastic Sentence Em- beddings with Millions of Machine Translations. arXiv:1711.05732 .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 1: QuickEdit architecture for translation post-editing. The decoder attends to both encodings, one for the source and one for the initial translation (guess) with deletion markers (X on the diagram). Our simplified schema shows one convolutional block and single-hop attention for readability.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"text": "also reports monolingual results. In that case, the system is not given the source sentence, only a sentence in the target language along with change markers. Even if the model is not given the electronic devices generally emit significantly fewer radio frequencies than previous generations. source Statt sich von der Zahlungsunf\u00e4higkeit der US-Regierung verunsichern zu lassen, konzentrierten sich Investoren auf das, was vermutlich mehr z\u00e4hlt: die Federal Reserve.guess Instead of being obscured by the US government's inability to pay, investors focused on what is probably more important: the Federal Reserve. output Rather than being insane by the United States government's insolvency, investors concentrated on what probably counts more: the Federal Reserve. source Boeing bestreitet die Zahlen von Airbus zu den Sitzma\u00dfen und sagt, es stehe nicht im Ermessen der Hersteller zu entscheiden, wie Fluggesellschaften die Balance zwischen Flugtarifen und Einrichtung gestalten. guessBoeing is denying the figures from Airbus to the seats and says that it is not left to the discretion of the manufacturers to decide how airlines are to balance air fares and set up. output Boeing is contesting Airbus's seating figures and says it is not up to manufacturers to determine how airlines balance fares and equipment.",
"content": "<table><tr><td/><td/><td colspan=\"2\">IWSLT'14</td><td colspan=\"2\">WMT'14 (de)</td><td colspan=\"2\">WMT'14 (fr)</td></tr><tr><td/><td/><td colspan=\"6\">de\u2192en en\u2192de de\u2192en en\u2192de fr\u2192en en\u2192fr</td></tr><tr><td colspan=\"2\">initial translation</td><td>27.4</td><td>24.2</td><td>29.7</td><td>25.2</td><td>37.0</td><td>40.2</td></tr><tr><td colspan=\"2\">post-edit baseline</td><td>33.0</td><td>30.2</td><td>34.6</td><td>30.7</td><td>45.4</td><td>47.8</td></tr><tr><td colspan=\"2\">post-edit QuickEdit</td><td>34.6</td><td>30.8</td><td>41.3</td><td>36.6</td><td>49.7</td><td>53.4</td></tr><tr><td colspan=\"2\">monolingual QuickEdit</td><td>29.3</td><td>26.7</td><td>39.5</td><td>34.2</td><td>47.7</td><td>51.3</td></tr><tr><td/><td colspan=\"7\">Table 1: Editing results (BLEU4) when all incorrect tokens are requested to be changed.</td></tr><tr><td colspan=\"8\">source Schauspieler Orlando Bloom hat sich zur Trennung von seiner Frau, Topmodel Mi-</td></tr><tr><td/><td>randa Kerr, ge\u00e4u\u00dfert.</td><td/><td/><td/><td/><td/></tr><tr><td>guess</td><td colspan=\"7\">Actor Orlando Bloom has spoken of the separation of his wife, Topmodel Miranda</td></tr><tr><td/><td>Kerr.</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"8\">output Actor Orlando Bloom spoke about separation from his wife, Top Model Miranda Kerr.</td></tr><tr><td colspan=\"8\">source Die heutigen elektronischen Ger\u00e4te geben im Allgemeinen wesentlich weniger</td></tr><tr><td/><td colspan=\"3\">Funkstrahlung ab als fr\u00fchere Generationen.</td><td/><td/><td/></tr><tr><td>guess</td><td colspan=\"7\">Today's electronic devices generally give far less radio radiation than previous gen-</td></tr><tr><td/><td>erations.</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">output Today's</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null
},
"TABREF1": {
"num": null,
"text": "Post-editing examples from WMT'14 en-de. Examples originate from news sentences of the newstest2014 dataset. Strike-through text indicates the tokens marked to be changed. Bold text indicates tokens introduced by the model, i.e. tokens not present in the original guess.",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF2": {
"num": null,
"text": "Monolingual editing examples from the WMT'14 fr-en test set. Examples originate from news sentences of the newstest2014 dataset. Strike-through text indicates the tokens marked to be changed. Bold text indicates tokens introduced by the model, i.e. tokens not present in the original guess.",
"content": "<table><tr><td/><td colspan=\"3\">Accuracy Fluency Boldness</td></tr><tr><td>Source</td><td>100%</td><td>100%</td><td>0%</td></tr><tr><td>ParaNet</td><td>56%</td><td>37%</td><td>16%</td></tr><tr><td>QuickEdit</td><td>72%</td><td>53%</td><td>21%</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"text": "Paraphrasing experiments on the MTC dataset.",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "Paraphrasing experiments on news data from the MTC dataset. Bold indicates tokens introduced by the the models, i.e. tokens which are not in the human source given as input.",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}