{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:38:33.771385Z" }, "title": "An Empirical Study on Multi-Task Learning for Text Style Transfer and Paraphrase Generation", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Bujnowski", "suffix": "", "affiliation": { "laboratory": "", "institution": "Samsung R&D Institute", "location": { "settlement": "Warsaw", "country": "Poland" } }, "email": "p.bujnowski@samsung.com" }, { "first": "Kseniia", "middle": [], "last": "Ryzhova", "suffix": "", "affiliation": {}, "email": "ksenija.rijova@gmail.com" }, { "first": "Hyungtak", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Samsung Electronics Co. Ltd", "location": { "settlement": "Seoul", "country": "Korea" } }, "email": "" }, { "first": "Katarzyna", "middle": [], "last": "Witkowska", "suffix": "", "affiliation": { "laboratory": "", "institution": "Polytechnic University of Catalonia", "location": { "settlement": "Barcelona", "country": "Spain" } }, "email": "witek.witkowska@gmail.com" }, { "first": "Jaros\u0142aw", "middle": [], "last": "Piersa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Samsung R&D Institute", "location": { "settlement": "Warsaw", "country": "Poland" } }, "email": "j.piersa@samsung.com" }, { "first": "Tymoteusz", "middle": [], "last": "Krumholc", "suffix": "", "affiliation": { "laboratory": "", "institution": "Samsung R&D Institute", "location": { "settlement": "Warsaw", "country": "Poland" } }, "email": "t.krumholc@samsung.com" }, { "first": "Katarzyna", "middle": [], "last": "Beksa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Samsung R&D Institute", "location": { "settlement": "Warsaw", "country": "Poland" } }, "email": "k.beksa@samsung.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The topic of this paper is neural multi-task training for text style transfer. 
We present an efficient method for neutral-to-style transformation using the transformer framework. We demonstrate how to prepare a robust model utilizing large paraphrase corpora together with a small parallel style transfer corpus. We study how much style transfer data a model needs using two example transformations: neutral-to-cute on an internal corpus and modern-to-antique on publicly available Bible corpora. Additionally, we propose a synthetic measure for the automatic evaluation of style transfer models. We hope our research is a step towards replacing common but limited rule-based style transfer systems with more flexible machine learning models for both public and commercial usage.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The topic of this paper is neural multi-task training for text style transfer. We present an efficient method for neutral-to-style transformation using the transformer framework. We demonstrate how to prepare a robust model utilizing large paraphrase corpora together with a small parallel style transfer corpus. We study how much style transfer data a model needs using two example transformations: neutral-to-cute on an internal corpus and modern-to-antique on publicly available Bible corpora. Additionally, we propose a synthetic measure for the automatic evaluation of style transfer models. 
We hope our research is a step towards replacing common but limited rule-based style transfer systems with more flexible machine learning models for both public and commercial usage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of text style transfer (ST 1 ) is to convert an input sentence into an output that preserves the meaning but modifies the linguistic layer (grammatical or lexical).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Style transfer is extensively studied in academic papers (Li et al., 2018a; Rao and Tetreault, 2018; Carlson et al., 2018; Jhamtani et al., 2017) and is also gaining popularity in commercial chatbots. Amazon Alexa introduced styles mimicking celebrities that replace the original Alexa voice (Amazon, 2020). Samsung offers applications personalized in terms of style, e.g. Celebrity Alarm (Samsung, 2019). Both examples offer a novel user experience, though the number of available system responses seems to be limited. The two main constraints are voice generation and text content limitations. While voice generation can be implemented with voice synthesis systems, e.g. those of Jia et al. (2018) or Prenger et al. (2018), the content limitation might be resolved by flexible machine learning text ST methods: the system could transform a neutral answer into one of the predefined styles using a machine learning model. The major challenges are the limited amount of style data and the lack of convincing automatic evaluation measures. 
In our study we try to mitigate these two issues.", "cite_spans": [ { "start": 57, "end": 75, "text": "(Li et al., 2018a;", "ref_id": "BIBREF24" }, { "start": 76, "end": 100, "text": "Rao and Tetreault, 2018;", "ref_id": "BIBREF35" }, { "start": 101, "end": 122, "text": "Carlson et al., 2018;", "ref_id": "BIBREF7" }, { "start": 123, "end": 145, "text": "Jhamtani et al., 2017)", "ref_id": "BIBREF17" }, { "start": 388, "end": 403, "text": "(Samsung, 2019)", "ref_id": "BIBREF37" }, { "start": 679, "end": 696, "text": "Jia et al. (2018)", "ref_id": "BIBREF18" }, { "start": 700, "end": 721, "text": "Prenger et al. (2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of our paper is to present an efficient method to train domain-unlimited ST models using a small style dataset. Inspired by the successful outcomes of multi-task learning (Caruana, 1997; Collobert and Weston, 2008; Johnson et al., 2017), we propose a transformer model (Vaswani et al., 2017) that jointly solves the paraphrase generation and style transfer tasks. We hypothesize that training in the multi-task mode on a large English parallel corpus for paraphrasing may help preserve the input content while successfully adjusting vocabulary and grammar to the target style, even with a small ST corpus.", "cite_spans": [ { "start": 180, "end": 195, "text": "(Caruana, 1997;", "ref_id": "BIBREF8" }, { "start": 196, "end": 223, "text": "Collobert and Weston, 2008;", "ref_id": "BIBREF10" }, { "start": 224, "end": 245, "text": "Johnson et al., 2017)", "ref_id": "BIBREF19" }, { "start": 279, "end": 301, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To verify this hypothesis and add practical value, we perform detailed tests on various sizes of text ST corpora to examine how their volume affects the results. 
This is a convenient approach, because large parallel corpora for paraphrasing are publicly available, while parallel corpora for the style transformations we want to achieve are rather small. We perform our experiments using openly available paraphrase sources (Rao and Tetreault, 2018; Carlson et al., 2018; Quora, 2017; Wieting and Gimpel, 2018; Williams et al., 2018) and two ST corpora, one for each task: an internal \"cute person\" style corpus and a processed Bible corpus composed of publicly available sources. We examine two style transformations: neutral-to-cute and modern-to-antique (the latter on different Bible translations).", "cite_spans": [ { "start": 415, "end": 440, "text": "(Rao and Tetreault, 2018;", "ref_id": "BIBREF35" }, { "start": 441, "end": 462, "text": "Carlson et al., 2018;", "ref_id": "BIBREF7" }, { "start": 463, "end": 475, "text": "Quora, 2017;", "ref_id": "BIBREF34" }, { "start": 476, "end": 501, "text": "Wieting and Gimpel, 2018;", "ref_id": "BIBREF44" }, { "start": 502, "end": 524, "text": "Williams et al., 2018)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Besides studies with various data volumes, we propose a method of creating an automatic measure. We show that simple common measures do not work in isolation, and instead we propose a fitted compound measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related works 2.1 Style transfer: not too far from paraphrasing Similarly to Xu et al. (2012) we see language ST as a task composed of two linked subtasks: paraphrasing and style adjustment. In our paper we claim that the ST task consists mostly of good paraphrasing and much less of adding the target style. Following this idea, we first cover background methods for paraphrasing, and then for style transfer. 
Traditionally, the paraphrasing task involved rule- and dictionary-based approaches (McKeown, 1983; Bolshakov and Gelbukh, 2004; Kauchak and Barzilay, 2006). Another popular method was statistical paraphrase generation, which recombined words probabilistically in order to create new sentences (Quirk et al., 2004; Wan et al., 2005; Zhao et al., 2009). Currently, DNNs are used for automatic paraphrasing, e.g. by sequence-to-sequence (seq2seq) models (Sutskever et al., 2014).", "cite_spans": [ { "start": 79, "end": 95, "text": "Xu et al. (2012)", "ref_id": "BIBREF47" }, { "start": 509, "end": 524, "text": "(McKeown, 1983;", "ref_id": "BIBREF28" }, { "start": 525, "end": 553, "text": "Bolshakov and Gelbukh, 2004;", "ref_id": "BIBREF4" }, { "start": 554, "end": 581, "text": "Kauchak and Barzilay, 2006)", "ref_id": "BIBREF20" }, { "start": 722, "end": 742, "text": "(Quirk et al., 2004;", "ref_id": "BIBREF33" }, { "start": 743, "end": 760, "text": "Wan et al., 2005;", "ref_id": "BIBREF41" }, { "start": 761, "end": 779, "text": "Zhao et al., 2009)", "ref_id": "BIBREF50" }, { "start": 881, "end": 905, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Presumably, the first paper presenting a deep learning model for the paraphrase task is the one by Prakash et al. (2016). The authors successfully compared their residual LSTM to previous LSTM-derived seq2seq models (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997; Bahdanau et al., 2014; Vaswani et al., 2017). In parallel, building on research on variational autoencoders (VAEs), e.g. Chung et al. (2015), other approaches to paraphrasing emerged (Bowman et al., 2016; Gupta et al., 2018).", "cite_spans": [ { "start": 99, "end": 120, "text": "Prakash et al. 
(2016)", "ref_id": "BIBREF31" }, { "start": 217, "end": 251, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF16" }, { "start": 252, "end": 279, "text": "Schuster and Paliwal, 1997;", "ref_id": "BIBREF38" }, { "start": 280, "end": 302, "text": "Bahdanau et al., 2014;", "ref_id": "BIBREF3" }, { "start": 303, "end": 324, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF40" }, { "start": 395, "end": 414, "text": "Chung et al. (2015)", "ref_id": "BIBREF9" }, { "start": 458, "end": 479, "text": "(Bowman et al., 2016;", "ref_id": "BIBREF5" }, { "start": 480, "end": 499, "text": "Gupta et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More recently, Li et al. (2018b) implemented paraphrase generation with deep reinforcement learning, which achieved better performance than the previous seq2seq results on the Twitter (Lan et al., 2017) and Quora (Quora, 2017) datasets. The lack of sufficient corpora for paraphrase generation motivated new practical studies on unsupervised approaches (Artetxe et al., 2018; Conneau and Lample, 2019). Recently, Roy and Grangier (2019) proposed a monolingual system for paraphrasing (without translation) and compared it to the unsupervised and translation methods, presenting various linguistic characteristics for each of them.", "cite_spans": [ { "start": 14, "end": 31, "text": "Li et al. (2018b)", "ref_id": "BIBREF25" }, { "start": 175, "end": 193, "text": "(Lan et al., 2017)", "ref_id": "BIBREF23" }, { "start": 204, "end": 217, "text": "(Quora, 2017)", "ref_id": "BIBREF34" }, { "start": 341, "end": 363, "text": "(Artetxe et al., 2018;", "ref_id": "BIBREF2" }, { "start": 364, "end": 389, "text": "Conneau and Lample, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We name here only some existing solutions. Xu et al. (2012) used a phrase-based MT method on the Shakespeare M2A corpus. 
This research was followed by Jhamtani et al. (2017), who improved the results with a DNN seq2seq approach using a copy mechanism.", "cite_spans": [ { "start": 43, "end": 59, "text": "Xu et al. (2012)", "ref_id": "BIBREF47" }, { "start": 147, "end": 169, "text": "Jhamtani et al. (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Style transfer methods", "sec_num": "2.2" }, { "text": "Rao and Tetreault (2018) adapted phrase-based and neural MT models for formality transfer using the GYAFC corpus. Later, Niu et al. (2018) improved results on GYAFC by creating a multi-task system for both formality transfer and English-French MT.", "cite_spans": [ { "start": 117, "end": 134, "text": "Niu et al. (2018)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Style transfer methods", "sec_num": "2.2" }, { "text": "It is worth mentioning Dryja\u0144ski et al. (2018), who used DNNs for both elements: the generation of ST phrases and their positions relative to the input sentence.", "cite_spans": [ { "start": 23, "end": 46, "text": "Dryja\u0144ski et al. (2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Style transfer methods", "sec_num": "2.2" }, { "text": "Another distinctive study was conducted for the text simplification (TS) task with matched Wikipedia-Simple Wikipedia parallel data, e.g. Wubben et al. (2012); Wang et al. (2016). Particularly significant for TS are the results on automatic measures in Xu et al. (2016), followed by Alva-Manchego et al. (2019). We draw our inspiration from these authors, along with the results of Xu et al. (2012), in our measure proposals.", "cite_spans": [ { "start": 138, "end": 158, "text": "Wubben et al. (2012;", "ref_id": "BIBREF46" }, { "start": 159, "end": 177, "text": "Wang et al. (2016)", "ref_id": "BIBREF42" }, { "start": 249, "end": 265, "text": "Xu et al. (2016)", "ref_id": "BIBREF48" }, { "start": 278, "end": 305, "text": "Alva-Manchego et al. 
(2019)", "ref_id": "BIBREF0" }, { "start": 376, "end": 392, "text": "Xu et al. (2012)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Style transfer methods", "sec_num": "2.2" }, { "text": "Our solution has common features with Wieting and Gimpel (2018). The authors demonstrated that using pretrained embeddings from a large parallel paraphrase corpus (\u223c50 million) and out-of-the-box models, it was possible to reach state-of-the-art results on several SemEval semantic textual similarity competitions. In our work, instead of using pretrained embeddings, we follow Johnson et al. (2017) and train the multi-task model on a single language, but with paraphrases and a small neutral-to-style dataset. Compared to the previous solutions, our approach can be seen as a universal method to tackle text ST (e.g. formality transfer, simplification and more).", "cite_spans": [ { "start": 38, "end": 63, "text": "Wieting and Gimpel (2018)", "ref_id": "BIBREF44" }, { "start": 379, "end": 400, "text": "Johnson et al. (2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Style transfer methods", "sec_num": "2.2" }, { "text": "We used the Multilingual Transformer (Vaswani et al., 2017) model from the Fairseq package (Ott et al., 2019). Using this model we approach the problem similarly to multilingual translation, treating each style as a new language. An overview of the system is presented in Figure 1.", "cite_spans": [ { "start": 37, "end": 59, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF40" }, { "start": 91, "end": 109, "text": "(Ott et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 273, "end": 281, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Multilingual model", "sec_num": "3.1" }, { "text": "In this architecture, all the language pairs share a single Transformer neural network. 
We fed the model with two paired datasets, {English sentences vs English paraphrase references} and {English sentences vs English sentences in the target style}, and trained it on both sets. The parallel training on both datasets following this multilingual (multi-task) approach produces robust results, but requires retraining the model from scratch after any modification of the style corpus. For the Multilingual Transformer we preprocessed the sentences with the SentencePiece toolkit (Kudo and Richardson, 2018) without any pretokenization. The vocabulary size was set to 16k. We used a shared English dictionary for the binarization of both the paraphrase pairs and the target style corpus. We also removed lines with more than 250 tokens.", "cite_spans": [ { "start": 571, "end": 598, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual model", "sec_num": "3.1" }, { "text": "For each training set the target style sample constituted only up to 0.6% of the whole set. It appears that the Multilingual Transformer can be trained effectively even with a huge disproportion between the paraphrase and style corpus sizes. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual model", "sec_num": "3.1" }, { "text": "For training the models we used multiple GPUs (GeForce GTX 1080 Ti, 11GB), which contributed to more representative updates due to the different average sentence lengths among mini-batches distributed between workers. The training on 8 GPUs lasted about 27 hours. We used Multilingual Transformer models with shared decoders from the Fairseq package. The architecture consisted of 6 encoder and 6 decoder fully connected layers of dimension 1054 with 4 attention heads. The embeddings were of dimension 512. We set the dropout to 0.3, weight decay to 0.0001 and the optimizer to Adam (\u03b21 = 0.9, \u03b22 = 0.98) with the learning rate equal to 0.0005. 
We used a label-smoothed cross-entropy loss with label smoothing set to 0.1. For generation we used a beam size of 5. We stopped the training after 40 epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training parameters details", "sec_num": "3.2" }, { "text": "Using our ST methods we performed two tasks: N2C (neutral-to-cute) and M2A (modern-to-antique). For each task we prepared a number of target (style) parallel corpora differing only in volume. For the N2C transformation the volumes were 1k, 3k, 5k, 7k, 10k, 13k and 17k. For the M2A transformation we prepared the same corpus volumes plus an additional 30k. The training/validation split was the same for all the target datasets, with a ratio of 80%:20%. The style corpora are described in subsections 4.1.2 and 4.1.3. As supplementary data, we used much larger parallel paraphrase corpora described in subsection 4.1.1. Firstly, paraphrase data is required for producing high-quality sentences. Secondly, the generated utterances must be properly tuned to the target style. In our experiments we searched for answers to the following questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "1. How much style data is needed in multilingual training for high-quality style transformations? 2. How can the major aspects of ST models be evaluated automatically, and how can a synthetic measure be created? 3. What elements are successful and what challenges remain when using transformer models for ST? 4.1 Data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For each task we built a large parallel corpus, used as supplementary data in model training. We treat these corpora as domain-unspecific and stylistically neutral. 
As sources, we used the paraphrase corpus presented in Wieting and Gimpel (2018), the MultiNLI corpus (Williams et al., 2018), the Quora Kaggle dataset (Quora, 2017) and the Bible corpus (Carlson et al., 2018). Table 1 : Parallel \"paraphrases\" corpora used in Neutral-to-Cute and Modern-to-Antique tasks.", "cite_spans": [ { "start": 221, "end": 246, "text": "Wieting and Gimpel (2018)", "ref_id": "BIBREF44" }, { "start": 265, "end": 288, "text": "(Williams et al., 2018)", "ref_id": "BIBREF45" }, { "start": 312, "end": 325, "text": "(Quora, 2017)", "ref_id": "BIBREF34" }, { "start": 343, "end": 365, "text": "(Carlson et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 368, "end": 375, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Paraphrases data", "sec_num": "4.1.1" }, { "text": "In the M2A task we removed the Bible subcorpus from the dataset of paraphrases to perform the multi-task training with a small amount of style data (as in the N2C task).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paraphrases data", "sec_num": "4.1.1" }, { "text": "The N2C training corpus contains 17k lines, some longer than one sentence. Our search for existing cute person style corpora turned out to be unsuccessful. Thus, the set was created with the use of an internal crowdsourcing platform by linguists with an academic background and experience in style transfer projects. In a series of tasks the linguists rewrote input (\"neutral\") sentences in the \"cute person\" style unless they were already \"cute\" enough (pairs with no change constituted 15% of the total set). The \"cute style\" was described as \"informal\", \"positive\", \"superlative\", \"excited\" and \"slangy\". It was usually created by inserting an adequate \"cute style\" phrase, or by paraphrasing a fragment or the whole sentence. The generated corpus covered numerous genres, styles and topics, e.g. 
self-presentation, jokes, facts, small talk and anecdotes. We also created a 300-line test corpus with four \"cute style\" candidate answers for each input sentence. Linguists were asked to follow the same guidelines as in the training corpus creation task, and additionally to keep a degree of variation between the candidate answers. The same set of neutral sentences is used for both human and automatic evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral-to-Cute data", "sec_num": "4.1.2" }, { "text": "The idea of using the Bible as a parallel corpus suitable for ST tasks was presented in Carlson et al. (2018). The authors claim that the traditionally used sentence and verse demarcation makes for easy sentence alignment. The dataset consists of 8 public-domain Bible versions. 2 For the purpose of the \"antiquification\" task, we chose the World English Bible (WEB, released in 2000) as the input, since it is the most stylistically modern version. As the target style, we chose the King James Version (KJV, released in 1611), one of the most influential English versions of the Bible, which perfectly shows the \"majesty of style\".", "cite_spans": [ { "start": 88, "end": 109, "text": "Carlson et al. (2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Modern-to-Antique Bible data", "sec_num": "4.1.3" }, { "text": "Using these sources, we built a 30-thousand-verse parallel WEB-KJV Bible corpus. We additionally sampled 300 verse pairs for human and automatic evaluation. The automatic test set was built in the same way as the one used in the N2C task. Each input (WEB-style) was paired with a target-style (KJV) sentence and three candidate answers from other Bible translations: Darby's, Young's Literal Translation and the American Standard Version. For human evaluation we selected 150 Bible verses (half of the automatic sample). 
Additionally, we tested the out-of-domain ST capability of the model by adding 150 \"neutral style\" small talk sentences, previously used in the N2C task, to the M2A test corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modern-to-Antique Bible data", "sec_num": "4.1.3" }, { "text": "We also tested the lexical diversity of the corpora used in both tasks. For this purpose we used the MTLD (measure of textual lexical diversity) index (McCarthy, 2005; McCarthy and Jarvis, 2010) for our two tasks (see Table 2 ). We assume that higher MTLD scores are associated with the higher topical and lexical diversity of the \"neutral\" and \"cute\" corpora. This may indicate the higher difficulty of the N2C task.", "cite_spans": [ { "start": 152, "end": 168, "text": "(McCarthy, 2005;", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 220, "end": 227, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Lexical diversity", "sec_num": "4.1.4" }, { "text": "We also measured the number of tokens, the number of types (unique tokens), and the mean and standard deviation of the number of sentences and tokens per line. The study revealed a difference between the M2A and N2C tasks. In the M2A task, the output (KJV style) has half as many sentences as the input (WEB style) while maintaining a similar (10% lower) number of tokens. In contrast, in the second task, \"cute\" style outputs tend to have 50% more sentences and 33% more tokens than the \"neutral\" style input sentences. Table 2 : MTLD score for N2C and M2A corpora. A higher score indicates higher lexical diversity.", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 493, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Lexical diversity", "sec_num": "4.1.4" }, { "text": "For human evaluation, a panel of three language experts was employed for each style. 
Every judge evaluated the same 300 transformations on four criteria, using a Likert scale: 1 - very bad, 2 - unacceptable, 3 - flawed, but acceptable, 4 - good, with minor errors, 5 - very good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "The criteria covered four aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "\u2022 Language: the correctness of grammar, spelling, vocabulary usage, lack of unnecessary repetitions or loops, etc. \u2022 Quality: semantics, fluency, comprehensibility, logic and the general \"feel\" of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "\u2022 Content: the degree of semantic similarity to the input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "\u2022 Style: the appropriateness of style in the output sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "Our Content is similar to the Meaning Preservation used in Callison-Burch (2008) and Rao and Tetreault (2018), while their Fluency is split into our Language and Quality.", "cite_spans": [ { "start": 55, "end": 76, "text": "Callison-Burch (2008)", "ref_id": "BIBREF6" }, { "start": 81, "end": 105, "text": "Rao and Tetreault (2018)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "The judges were instructed on how to understand the cute person and antique styles. All the sets were evaluated by the same panel of language experts, in order to preserve a common understanding of the criteria and keep the results coherent. 
The judges were supported by a panel of two supervisors who verified their understanding of the criteria. All the evaluators had an academic background in linguistics and at least a Bachelor's degree in this or a related field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "The evaluated element was the transformation in relation to the input sentence. The examples below show that some transformation elements may produce parallel score values (compare Quality and Content). However, the opposite effect occurs as well: raising one score may lower another (compare Content and Style).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation method", "sec_num": "4.2" }, { "text": "Neutral-to-Cute Modern-to-Antique input Neutral: I don't want to scare you, but right now there is a skeleton inside you.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "Modern: Let it be, when these signs have come to you, that you do what is appropriate for the occasion; for God is with you.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "styled output Cute: Wow, don't want to scare you, but right now there's a skeleton inside you. So cool, so I'm sure we can try again! Antique: Let it be, when these signs be come unto thee, that thou doest ought for the occasion; for God is with thee. Language 5 - The sentence is linguistically correct. 
3 - The output contains language mistakes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "Quality 4 - The sentence is logical and semantically correct, but the phrase added at the end is somewhat semantically separated from the input meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "3 - Small semantic distortion (ought used incorrectly as a synonym of appropriate).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "Content 3 - The score was lowered for the additional sentence, which introduces a little new meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "3 - Small error: incorrect introduction of the word ought as a synonym of appropriate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "Style 5 - The added phrase wow and the contraction there's are typical of the excited person style. Although unnecessarily introduced, the sentence that lowers the score for Content raises it for Style.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "5 - The changes (thee, thou, doest) are compliant with the antique style. The two 300-transformation sets (one for neutral-to-cute and one for modern-to-antique) were evaluated separately for all the trainings with various corpus sizes (1k, 3k, 5k, 7k, 10k, 13k and 17k for both transformations, plus 30k for M2A only) by three language experts on four criteria (Language, Quality, Content and Style). This gives a total of 54,000 individual assessments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "We also measured the inter-annotator agreement using Krippendorff's alpha (Krippendorff, 2004) for each criterion and each task (see Table 4 ). 
The values of Krippendorff's alpha range from \u03b1 = 0 (no agreement beyond chance) to \u03b1 = 1 (perfect agreement). Customarily, \u03b1 \u2265 0.667 is considered the minimum threshold for reliable annotation and \u03b1 \u2265 0.8 is the optimal threshold (Krippendorff, 2004).", "cite_spans": [ { "start": 74, "end": 94, "text": "(Krippendorff, 2004)", "ref_id": "BIBREF21" }, { "start": 365, "end": 385, "text": "(Krippendorff, 2004)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "For three out of four criteria, inter-annotator agreement was higher in the M2A task. The agreement in the Style assessment was significantly higher in the N2C task. We assume that this discrepancy is caused by the limited proficiency of the judges in the biblical style. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score", "sec_num": null }, { "text": "In this section we present the outcomes of our generative models for each dataset volume, focusing on the impact of the style data volume on style transformation quality. To illustrate the complexity of the problem, we adopted the proposed human evaluation measures (Language, Quality, Content and Style). As expected, the dependencies between the target data volume used in model training and the human measure statistics are not strictly linear. We can, however, gain some insight into this process. First, we focus on N2C transformations and then we move to the Bible ST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation results", "sec_num": "4.3" }, { "text": "In Table 5 and in Figure 2 we present the study results for \"cute person\" data. We counted means and standard deviations over combined datasets of size N = 900, obtained by putting together the same three datasets of 300 sentences evaluated by independent linguists. 
First of all, we notice that the scores for all datasets are quite high -above 3 for each human measure -including the smallest 1k style dataset. The lowest score, for Style (3.27) in the 1k style dataset, indicates that some transformations are stylistically flawed. The 3.81 score for Content shows that most of the generated sentences semantically reflect the input. The highest scores are those for Quality and Language -both over 4.30, i.e. better than good. Growth in the style dataset volume is also reflected in the evaluation scores. The Style factor increases together with the data volume (except for the 5k set). The maximum Style score is 4.04 for the biggest 17k dataset, which demonstrates the high quality of style conversion. As opposed to the Style score, Language, Quality and Content do not improve with larger dataset sizes (between 7k and 17k). This may be due to the negative correlation between Style and the remaining human measures, which we analyze in section 4.4.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 18, "end": 26, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Neutral-to-Cute transformation", "sec_num": "4.3.1" }, { "text": "In Table 5 and in Figure 2 we added the arithmetic mean of Language, Content and Style (i.e. the \"Average human score\"). Quality was omitted because its strong correlation with Language makes it largely redundant. The biggest difference in average human scores is between the 1k and 3k datasets. The values for Content and Quality are lower for the largest volumes -13k and 17k. The reason may be the specific N2C transformation, where more elaborate cute-person phrases contrast with context-suitable ones. To close the analysis, we also report the ratio of outputs that merely repeat the input (the lower, the better). 
The best results are for the largest datasets of 13k and 17k (13.67% in both cases -a bit below the mean of 15% in the training sets).", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 18, "end": 26, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Neutral-to-Cute transformation", "sec_num": "4.3.1" }, { "text": "The following conclusions can be drawn from the N2C data transformation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral-to-Cute transformation", "sec_num": "4.3.1" }, { "text": "1. Style transformation is acceptable even with the 1k style dataset used for the multi-task training model. 2. The biggest growth in transformation quality occurs at the 3k dataset, especially for vocabulary, semantics and logic (with a smaller gain in the Style score). 3. The more style data we use, the better style transformation we obtain. Models trained with datasets of 13k or larger provide good quality with a low unchanged-sentence ratio.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral-to-Cute transformation", "sec_num": "4.3.1" }, { "text": "In this part of the experiment we focus on the \"antique\" Bible style data as the model's output. We divided our evaluation into two tasks. In the first and main one, modern Bible data is used as the input. This is consistent with our multi-task model training, where besides the large corpus of paraphrases we use the style corpus of M2A Bible data. For this evaluation we use 150 sentences. As a supplementary test we employ a neutral dataset of 150 lines, similar to the N2C data input. Table 6 and Figure 3 present the two evaluation studies. 
", "cite_spans": [], "ref_spans": [ { "start": 517, "end": 524, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 529, "end": 537, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Modern-to-Antique transformation", "sec_num": "4.3.2" }, { "text": "Comparing curves of Bible and Cute style transformations we see significant differences, but also some similarities. Like for N2C models, in both Bible tests the biggest increase in quality is noticeable between the trainings with 1k and 3k style datasets. The modern Bible input data results are better than good for all human criteria for the model using the 3k style dataset. The Style rank reaches a maximum of 4.50 for the model with 7k Bible data volume. However, the most robust model, referring to the human average score, was trained using the biggest 30k style dataset, with the result of 4.52. The M2A transformation models bring a very low unchanged sentence ratio (0-2%), reflecting the training data proportions. We can notice the nonlinear dependency between the target dataset size and Style scores. However, the average of human scores behaves as expected -it grows with the size of style data used for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modern Bible input", "sec_num": null }, { "text": "To some extent, we can treat our test with neutral input for the Bible models as another transfer learning experiment. Surprisingly, the results of this study are quite satisfactory. Although in many examples of the same model, human Style scores are even 1 point lower compared to the dedicated modern Bible input, they are still above 3. Like for modern Bible data, the highest Style mark (3.59) for neutral input was reached with the 7k dataset. Considering the human average score, the largest quality growth is between the models trained with 1k and 3k target datasets -3.34 and 3.82 respectively. 
Evaluations for all the test samples show that the ratio of unchanged sentences, although higher than for modern Bible inputs, is still low or medium -between 2.67% and 10.67%. Comparing the human measure curves on the left side of Figure 3 (for modern Bible input) with those on the right side (for neutral input) reveals an interesting pattern: Language and Quality scores have roughly similar values on both plots, while Style means are much lower in the neutral input tests (by 0.61 to 1.12 points). Content for neutral input ranges from roughly the same to up to 0.5 points lower than for modern Bible data.", "cite_spans": [], "ref_spans": [ { "start": 864, "end": 872, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Neutral input", "sec_num": null }, { "text": "From this analysis, we draw the following conclusions for Bible data and hypotheses for other styles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral input", "sec_num": null }, { "text": "1. Multi-task models built using paraphrases and a small or medium amount of style data can readily generate lexicon and logic distinctive of the target style, even for unexpected input data. 2. Style and semantics are harder for multi-task models to generate when new input data differ significantly from the training input style data; even so, the results are still acceptable. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral input", "sec_num": null }, { "text": "Automated evaluation in text ST is a challenging task. Firstly, the full solution space cannot be clearly predefined (there are many ways of transferring style into a sentence). Secondly, the model should not only add style, but also maintain language correctness and preserve content. Additionally, text stylization may manifest itself in various language components (such as syntax or lexis). 
It is difficult to reflect such multidimensional evaluation in any single synthetic score, especially when those aspects of assessment are poorly correlated or orthogonal. We examined a few well-known automatic metrics from the NLP field. For the computations we used the VizSeq toolkit (Wang et al., 2019) : BLEU, iBLEU, ROUGE-L, LASER; the EASSE package (Alva-Manchego et al., 2019): SARI; the python-Levenshtein package: Levenshtein ratio; and BERTScore (Zhang et al., 2020) . These measures correlate to some extent with human scores in various tasks: MT (ST might be seen as translation within one language), summarization, or text simplification (a special case of ST). In Figure 4 we present Spearman correlations between the NLP measures and the human scores for the N2C transformation.", "cite_spans": [ { "start": 667, "end": 686, "text": "(Wang et al., 2019)", "ref_id": "BIBREF43" }, { "start": 825, "end": 845, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF49" } ], "ref_spans": [ { "start": 1045, "end": 1053, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Automated evaluation", "sec_num": "4.4" }, { "text": "The analysis reveals a limited correlation of the human Style score with most of the examined measures (the largest positive value is for SARI: 0.41). In our opinion, the tested metrics are not sufficient to capture all the factors of style transfer. Thus, in this section, we propose a simple method for composing a synthetic style transfer measure that exhibits better correlation with human judgment than the already known measures, even at the cost of its universality. Due to limited space, we focus on the N2C transformation only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automated evaluation", "sec_num": "4.4" }, { "text": "We decided to assemble our score from two factors: one capturing the stylistic aspect and the other covering paraphrase quality. As the measure of paraphrase quality we chose BertScore (Zhang et al., 2020) . 
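Of the metrics listed above, the Levenshtein ratio is simple enough to sketch without external packages. The toy version below normalizes a plain edit distance by the longer string's length, which approximates (but does not exactly match) the python-Levenshtein ratio, whose cost model differs slightly.

```python
def levenshtein_distance(a, b):
    # classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_ratio(a, b):
    """Similarity in [0, 1]; 1.0 for identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein_distance(a, b) / max(len(a), len(b))

print(levenshtein_distance("kitten", "sitting"))  # 3
```

In our setting, a ratio near 1.0 flags outputs that merely repeat the input, which connects this metric to the unchanged-sentence ratio reported earlier.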
It is a metric built on the BERT model's contextual embeddings (Devlin et al., 2019) , well suited to identifying semantic similarity between sentences. Moreover, it handles synonyms and contextual lexical dependencies. For those reasons, it has the strongest correlation with the linguistic Content score.", "cite_spans": [ { "start": 190, "end": 210, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF49" }, { "start": 277, "end": 298, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Automated evaluation", "sec_num": "4.4" }, { "text": "To facilitate the evaluation of the stylistic factor, we built a small classifier tool using BERT. We decided to use out-of-the-box torch-transformers 3 and only apply a fine-tuning step on top. From the validation sets we selected 6k-verse subsets (not used for training in the style transfer tasks) and built 12k datasets of balanced positive (target) and negative (neutral) styles. The classifier reached an accuracy of 0.77. We call its softmax output StyleBertScore. We assumed that the average human score (the arithmetic mean of Style, Content and Language) can be approximated using BertScore and StyleBertScore. To combine the scores we built a few regression models, from a simple mean, through linear regression, to models capable of capturing complex relations: Random Forest and Support Vector Machine Regression (SVR) (Drucker et al., 1997) . Figure 5 depicts the estimation results as a linear chart and a heatmap of correlations with human scores. In our study, Random Forest shows the highest correlation (0.67) and the smallest mean square error (see Table 7 ). Our proposal for creating the automatic measure involves two machine learning models and a limited amount of human evaluation. Although annotation is needed to calibrate the synthetic measure well, it has to be performed only once for the whole process. 
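The matching idea behind BertScore can be illustrated with toy vectors: each reference token is greedily matched to its most similar candidate token (and vice versa) by cosine similarity, and the matches are averaged into recall, precision, and F1. This is only a sketch of the mechanism; the real metric uses BERT's contextual token embeddings and optional IDF weighting, and the vectors below are made up.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def bertscore_f1(cand_emb, ref_emb):
    """Toy BERTScore: greedy cosine matching over token embeddings."""
    recall = sum(max(cosine(r, c) for c in cand_emb)
                 for r in ref_emb) / len(ref_emb)
    precision = sum(max(cosine(c, r) for r in ref_emb)
                    for c in cand_emb) / len(cand_emb)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# identical toy "embeddings" for candidate and reference -> score close to 1.0
emb = [[0.2, 0.9, 0.1], [0.8, 0.1, 0.3]]
print(bertscore_f1(emb, emb))
```

Because the matching is token-level rather than n-gram-level, a paraphrase that replaces a word with a synonym still scores highly, which is exactly the property that makes BertScore a good Content proxy here.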
Afterwards, the fitted automatic measure can be used to test many style transfer models without repeating the human evaluation.", "cite_spans": [ { "start": 847, "end": 869, "text": "(Drucker et al., 1997)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 872, "end": 880, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 1081, "end": 1088, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Automated evaluation", "sec_num": "4.4" }, { "text": "Mean: 11.732; Linear Regression: 0.401; Random Forest: 0.278; SVR: 0.401. Table 7 : Average mean square error (MSE) for each regression model.", "cite_spans": [ { "start": 54, "end": 59, "text": "(MSE)", "ref_id": null } ], "ref_spans": [ { "start": 85, "end": 92, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Mean", "sec_num": null }, { "text": "We showed that a non-linear combination of the selected metrics (BertScore and StyleBertScore) can approximate the arithmetic mean of human scores and be a reliable method for assembling a style-specific automatic measure. The novelty of our metric is that it is partly model dependent, though this results from the nature of the ST task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean", "sec_num": null }, { "text": "We discussed a method of text style conversion with a multilingual transformer trained for two tasks: paraphrasing and style changing. We showed a successful approach using a large parallel paraphrase corpus together with far fewer neutral-target style pairs. In our numerical experiments, models trained with varying sizes of style samples were evaluated with four human scores (and their average), revealing nonlinear dependencies. For both studies, Neutral-to-Cute and Modern-to-Antique, we identified the data volumes at which models achieve acceptable and good results. In particular, we indicated the significant performance growth between the 1k and 3k target data sizes. 
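The score-combination step can be illustrated with a minimal regression sketch. For brevity it uses plain ordinary least squares rather than the Random Forest or SVR models from Table 7, and the BertScore/StyleBertScore values and human targets below are synthetic stand-ins, not real evaluation data.

```python
# Fit human_avg ~ w0 + w1*bert_score + w2*style_score by ordinary least
# squares: solve the normal equations (X^T X) w = X^T y directly.
def fit_ols(features, targets):
    rows = [[1.0] + list(f) for f in features]   # prepend intercept column
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    w = [0.0] * k
    for r in reversed(range(k)):
        w[r] = (xty[r] - sum(xtx[r][c] * w[c] for c in range(r + 1, k))) / xtx[r][r]
    return w

def predict(w, f):
    return w[0] + sum(wi * fi for wi, fi in zip(w[1:], f))

# synthetic calibration data: (bert_score, style_score) -> average human score
feats = [(0.9, 0.8), (0.5, 0.2), (0.7, 0.9), (0.3, 0.4), (0.8, 0.3)]
human = [1 + 2 * a + 2 * b for a, b in feats]
w = fit_ols(feats, human)
print(round(predict(w, (0.6, 0.6)), 3))  # 3.4
```

As in the paper's pipeline, the calibration (here, the fit) is done once against human annotations; afterwards `predict` scores new model outputs for free.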
Moreover, the transfer learning ability of our multilingual generator was tested with a satisfactory outcome for the Neutral-to-Bible transformation (not seen during model training). Finally, we proposed a simple method for automatically measuring style transfer results and approximating the average human score. The new measure can be used during model training to estimate style transformation quality, in addition to monitoring the usual cross-entropy loss minimum. In our opinion, this opportunity is worth further research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Every judge evaluated the same 300 transformations on four criteria (\"Language\" -\"L\", \"Quality\" -\"Q\", \"Content\" -\"C\" and \"Style\" -\"S\") using a Likert scale where 1 -very bad, 2 -unacceptable, 3 -flawed, but acceptable, 4 -good, with minor errors, 5 -very good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix: Style Transfer Results", "sec_num": null }, { "text": "\"Bible\" stands for the Modern-to-Antique dataset. \"Cute\" refers to the Neutral-to-Cute set. For more information refer to Section 4 of the article. The fool's talk bringeth a rod to his back; but the lips of the wise defendeth them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix: Style Transfer Results", "sec_num": null }, { "text": "5 5 5 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix: Style Transfer Results", "sec_num": null }, { "text": "The fool's speak bringeth a rod to his back, but the lips of the wise man keep them. Cute 1k I don't know much about the president. What's your opinion? 5 5 5 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bible 1k", "sec_num": null }, { "text": "The name's Bond, James Bond. 
Just kidding, it's me.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cute 17k", "sec_num": null }, { "text": "haha, the name's Bond, James Bond. Just kidding, its me! 4 5 5 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cute 17k", "sec_num": null }, { "text": "No prob darling, just pick the name is Bond, James Bond, just kidding, it's me.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cute 1k", "sec_num": null }, { "text": "Cute 17k Am I intelligent? Seriously, I'm not that intelligent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": "2" }, { "text": "Amazing that, sweetie? 2 2 1 5 Cute 17k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2 3 Cute 1k", "sec_num": "5" }, { "text": "Carpe diem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2 3 Cute 1k", "sec_num": "5" }, { "text": "Carpe totally diem 4 4 5 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2 3 Cute 1k", "sec_num": "5" }, { "text": "Good Morning babe' u have been kinda quite on da' boat! 
3 4 2 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cute 1k", "sec_num": null }, { "text": "https://github.com/keithecarlson/StyleTransferBibleData", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/huggingface/transformers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "EASSE: Easier automatic sentence simplification evaluation", "authors": [ { "first": "Fernando", "middle": [], "last": "Alva-Manchego", "suffix": "" }, { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations", "volume": "", "issue": "", "pages": "49--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP- IJCNLP): System Demonstrations, pages 49-54, Hong Kong, China, November. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "-celebrity voice skill for Alexa", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amazon. 2020. Samuel L. Jackson -celebrity voice skill for Alexa. 
www.amazon.com/gp/product/B07WS3HN5Q.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised statistical machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3632--3642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Pro- ceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium, Oct -Nov. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Synonymous paraphrasing using wordnet and internet", "authors": [ { "first": "I", "middle": [ "A" ], "last": "Bolshakov", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" } ], "year": 2004, "venue": "Natural Language Processing and Information Systems. NLDB 2004", "volume": "3136", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. A. Bolshakov and Alexander Gelbukh. 2004. 
Synonymous paraphrasing using wordnet and internet. In Meziane F., M\u00e9tais E. (eds) Natural Language Processing and Information Systems. NLDB 2004. Lecture Notes in Computer Science, vol 3136, pages 312-323.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Generating sentences from a continuous space", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "10--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21, Berlin, Germany, August. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Syntactic constraints on paraphrases extracted from parallel corpora", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "196--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. 
In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 196-205, Honolulu, Hawaii, October. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Evaluating prose style transfer with the Bible", "authors": [ { "first": "Keith", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Allen", "middle": [], "last": "Riddell", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Rockmore", "suffix": "" } ], "year": 2018, "venue": "Royal Society Open Science", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Carlson, Allen Riddell, and Daniel Rockmore. 2018. Evaluating prose style transfer with the Bible. Royal Society Open Science, 5:171920, 10.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multitask learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1997, "venue": "Mach. Learn", "volume": "28", "issue": "1", "pages": "41--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1997. Multitask learning. Mach. 
Learn., 28(1):41-75, July.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A recurrent latent variable model for sequential data", "authors": [ { "first": "Junyoung", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Kastner", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Dinh", "suffix": "" }, { "first": "Kratarth", "middle": [], "last": "Goel", "suffix": "" }, { "first": "C", "middle": [], "last": "Aaron", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "2980--2988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2980-2988. Curran Associates, Inc.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, page 160-167, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Cross-lingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "33rd Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In 33rd Conference on Neural Information Processing Systems.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Support vector regression machines", "authors": [ { "first": "Harris", "middle": [], "last": "Drucker", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Christopher", "suffix": "" }, { "first": "Linda", "middle": [], "last": "Burges", "suffix": "" }, { "first": "Alex", "middle": [ "J" ], "last": "Kaufman", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Smola", "suffix": "" }, { "first": "", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1997, "venue": "Advances in Neural Information Processing Systems 9", "volume": "", "issue": "", "pages": "155--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harris Drucker, Christopher J. C. Burges, Linda Kaufman, Alex J. Smola, and Vladimir Vapnik. 1997. Sup- port vector regression machines. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 155-161. MIT Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Affective natural language generation by phrase insertion", "authors": [ { "first": "T", "middle": [], "last": "Dryja\u0144ski", "suffix": "" }, { "first": "P", "middle": [], "last": "Bujnowski", "suffix": "" }, { "first": "H", "middle": [], "last": "Choi", "suffix": "" }, { "first": "K", "middle": [], "last": "Podlaska", "suffix": "" }, { "first": "K", "middle": [], "last": "Michalski", "suffix": "" }, { "first": "K", "middle": [], "last": "Beksa", "suffix": "" }, { "first": "P", "middle": [], "last": "Kubik", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Big Data (Big Data)", "volume": "", "issue": "", "pages": "4876--4882", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Dryja\u0144ski, P. Bujnowski, H. Choi, K. Podlaska, K. Michalski, K. Beksa, and P. Kubik. 2018. Affective natural language generation by phrase insertion. 
In 2018 IEEE International Conference on Big Data (Big Data), pages 4876-4882, Dec.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A deep generative framework for paraphrase generation", "authors": [ { "first": "Ankush", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Prawaan", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Rai", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "18", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for para- phrase generation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI18. AAAI Publications.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Shakespearizing modern language using copy-enriched sequence-to-sequence models", "authors": [ { "first": "Harsh", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harsh Jhamtani, Varun Gangal, Eduard H. Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence-to-sequence models. ArXiv, abs/1707.01161.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Transfer learning from speaker verification to multispeaker text-to-speech synthesis", "authors": [ { "first": "Ye", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ron", "middle": [ "J" ], "last": "Weiss", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Ignacio", "middle": [ "Lopez" ], "last": "Moreno", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, and Yonghui Wu. 2018. 
Transfer learning from speaker verification to multi-speaker text-to-speech synthesis.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. 
Transactions of the Association for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Paraphrasing for automatic evaluation", "authors": [ { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "455--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 455-462, New York City, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Content analysis: An introduction to its methodology thousand oaks. Calif", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Krippendorff. 2004. Content analysis: An introduction to its methodology thousand oaks. Calif.: Sage.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "J", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Kudo and J. Richardson. 2018. 
Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A continuously growing dataset of sentential paraphrases", "authors": [ { "first": "Wuwei", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Siyu", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1224--1234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224-1234, Copenhagen, Denmark, September. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1865--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018a. Delete, retrieve, generate: a simple approach to sentiment and style transfer. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874, New Orleans, Louisiana, June. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Paraphrase generation with deep reinforcement learning", "authors": [ { "first": "Zichao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3865--3878", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018b. Paraphrase generation with deep reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3865-3878, Brussels, Belgium, October-November. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment", "authors": [ { "first": "M", "middle": [], "last": "Philip", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "", "middle": [], "last": "Jarvis", "suffix": "" } ], "year": 2010, "venue": "Behavior research methods", "volume": "42", "issue": "", "pages": "381--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip M. McCarthy and Scott Jarvis. 2010. Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment. 
Behavior research methods, 42(2):381-392.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An assessment of the range and usefulness of lexical diversity measures and the potential of the measure of textual, lexical diversity (MTLD)", "authors": [ { "first": "Philip", "middle": [ "M" ], "last": "Mccarthy", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip M. McCarthy. 2005. An assessment of the range and usefulness of lexical diversity measures and the potential of the measure of textual, lexical diversity (MTLD). Ph.D. thesis, The University of Memphis.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Paraphrasing questions using given and new information", "authors": [ { "first": "R", "middle": [], "last": "Kathleen", "suffix": "" }, { "first": "", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1983, "venue": "American Journal of Computational Linguistics", "volume": "9", "issue": "1", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen R. McKeown. 1983. Paraphrasing questions using given and new information. American Journal of Computational Linguistics, 9(1):1-10.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multi-task neural models for translating between styles within and across languages", "authors": [ { "first": "Xing", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1008--1021", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xing Niu, Sudha Rao, and Marine Carpuat. 2018. Multi-task neural models for translating between styles within and across languages. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 1008-1021, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. 
In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Neural paraphrase generation with stacked residual LSTM networks", "authors": [ { "first": "Aaditya", "middle": [], "last": "Prakash", "suffix": "" }, { "first": "A", "middle": [], "last": "Sadid", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Datla", "suffix": "" }, { "first": "Joey", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Oladimeji", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Farri", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2923--2934", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923-2934, Osaka, Japan, December. The COLING 2016 Organizing Committee.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Waveglow: A flow-based generative network for speech synthesis", "authors": [ { "first": "Ryan", "middle": [], "last": "Prenger", "suffix": "" }, { "first": "Rafael", "middle": [], "last": "Valle", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2018. 
Waveglow: A flow-based generative network for speech synthesis.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Monolingual machine translation for paraphrase generation", "authors": [ { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "William", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "142--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 142-149, Barcelona, Spain, July. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Quora question pairs", "authors": [ { "first": "", "middle": [], "last": "Quora", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quora. 2017. Quora question pairs.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "129--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 129-140, New Orleans, Louisiana, June. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Unsupervised paraphrasing without translation", "authors": [ { "first": "Aurko", "middle": [], "last": "Roy", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Galaxy's Celebrity Alarm lets you personalize notification alerts with Celebrity Voices", "authors": [ { "first": "", "middle": [], "last": "Samsung", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samsung. 2019. Galaxy's Celebrity Alarm lets you personalize notification alerts with Celebrity Voices. https://news.samsung.com/global/galaxys-celebrity-alarm-lets-you-personalize-notification-alerts-with-celebrity-voices.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "M", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "K", "middle": [ "K" ], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "Trans. Sig. Proc", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Schuster and K. K. Paliwal. 1997. Bidirectional recurrent neural networks. Trans. Sig. 
Proc., 45(11):2673-2681, November.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems -Volume 2, NIPS'14, page 3104-3112, Cambridge, MA, USA. MIT Press.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Towards statistical paraphrase generation: Preliminary evaluations of grammaticality", "authors": [ { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dras", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Wan, Mark Dras, Robert Dale, and C\u00e9cile Paris. 2005. Towards statistical paraphrase generation: Preliminary evaluations of grammaticality. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Text simplification using neural machine translation", "authors": [ { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Chen", "suffix": "" }, { "first": "John", "middle": [], "last": "Rochford", "suffix": "" }, { "first": "Jipeng", "middle": [], "last": "Qiang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16", "volume": "", "issue": "", "pages": "4270--7271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tong Wang, Ping Chen, John Rochford, and Jipeng Qiang. 2016. Text simplification using neural machine translation. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, page 4270-7271. 
AAAI Press.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Vizseq: A visual analysis toolkit for text generation tasks", "authors": [ { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Danlu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changhan Wang, Anirudh Jain, Danlu Chen, and Jiatao Gu. 2019. Vizseq: A visual analysis toolkit for text generation tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana, June. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Sentence simplification by monolingual machine translation", "authors": [ { "first": "", "middle": [], "last": "Sander Wubben", "suffix": "" }, { "first": "", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Bosch", "suffix": "" }, { "first": "", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1015--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015-1024, Jeju Island, Korea, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Paraphrasing for style", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "2899--2914", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Proceedings of COLING 2012, pages 2899-2914. The COLING 2012 Organizing Committee, December.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Quanze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. 
Transactions of the Association for Computational Linguistics, 4:401-415.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Application-driven statistical paraphrase generation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "834--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiqi Zhao, Xiang Lan, Ting Liu, and Sheng Li. 2009. Application-driven statistical paraphrase generation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 834-842, Suntec, Singapore, August. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Model overview: Multilingual transformer trained with paraphrases and styled data." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Human evaluation of Neutral-to-Cute transformation (legend: average human score is an arithmetic mean of Language, Content and Style)." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Human evaluation of models for Bible Modern-to-Antique transformation (legend: average human score is an arithmetic mean of Language, Content and Style)." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Spearman correlation between various NLP metrics and human judgment for N2C data." }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "(a) Spearman correlation between proposed fusion of metrics and human scores (for Neutral-to-Cute evaluation data). (b) Regression models estimation of average human rank." }, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "Regression methods for approximating the model score." }, "TABREF2": { "content": "
", "num": null, "text": "", "html": null, "type_str": "table" }, "TABREF6": { "content": "
", "num": null, "text": "", "html": null, "type_str": "table" }, "TABREF9": { "content": "
[Plot residue removed: two line charts of human score (y-axis, 2.5 to 5.0) against volume of parallel style transfer corpus in thousands (x-axis: 1, 3, 5, 7, 10, 13, 17, 30), with series for Language Quality, Content, Style, and Average human score; panel (a) Modern Bible input, panel (b) Neutral input.]
", "num": null, "text": "Human measures statistics of Modern-to-Antique transformation across sizes of target corpora.", "html": null, "type_str": "table" } } } }