{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:49.316536Z" }, "title": "Exploiting Social Media Content for Self-Supervised Style Transfer", "authors": [ { "first": "Dana", "middle": [], "last": "Ruiter", "suffix": "", "affiliation": { "laboratory": "Spoken Language Systems Group", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "druiter@lsv.uni-saarland.de" }, { "first": "Thomas", "middle": [], "last": "Kleinbauer", "suffix": "", "affiliation": { "laboratory": "Spoken Language Systems Group", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Cristina", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "", "affiliation": { "laboratory": "Spoken Language Systems Group", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent research on style transfer takes inspiration from unsupervised neural machine translation (UNMT), learning from large amounts of non-parallel data by exploiting cycle consistency loss, back-translation, and denoising autoencoders. By contrast, the use of selfsupervised NMT (SSNMT), which leverages (near) parallel instances hidden in non-parallel data more efficiently than UNMT, has not yet been explored for style transfer. In this paper we present a novel Self-Supervised Style Transfer (3ST) model, which augments SS-NMT with UNMT methods in order to identify and efficiently exploit supervisory signals in non-parallel social media posts. We compare 3ST with state-of-the-art (SOTA) style transfer models across civil rephrasing, formality and polarity tasks. We show that 3ST is able to balance the three major objectives (fluency, content preservation, attribute transfer accuracy) the best, outperforming SOTA models on averaged performance across their tested tasks in automatic and human evaluation.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Recent research on style transfer takes inspiration from unsupervised neural machine translation (UNMT), learning from large amounts of non-parallel data by exploiting cycle consistency loss, back-translation, and denoising autoencoders. By contrast, the use of selfsupervised NMT (SSNMT), which leverages (near) parallel instances hidden in non-parallel data more efficiently than UNMT, has not yet been explored for style transfer. In this paper we present a novel Self-Supervised Style Transfer (3ST) model, which augments SS-NMT with UNMT methods in order to identify and efficiently exploit supervisory signals in non-parallel social media posts. We compare 3ST with state-of-the-art (SOTA) style transfer models across civil rephrasing, formality and polarity tasks. 
We show that 3ST is able to balance the three major objectives (fluency, content preservation, attribute transfer accuracy) the best, outperforming SOTA models on averaged performance across their tested tasks in automatic and human evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Style transfer is a highly versatile task in natural language processing, where the goal is to modify the stylistic attributes of a text while maintaining its original meaning. A broad variety of stylistic attributes has been considered, including formality (Rao and Tetreault, 2018) , gender (Prabhumoye et al., 2018) , polarity (Shen et al., 2017) and civility (Laugier et al., 2021) . Potential industrial applications are manifold and range from simplifying professional language to be intelligible to laypersons (Cao et al., 2020) , the generation of more compelling news headlines (Jin et al., 2020) , to related tasks such as text simplification for children and people with disabilities (Martin et al., 2020) .", "cite_spans": [ { "start": 258, "end": 283, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF32" }, { "start": 293, "end": 318, "text": "(Prabhumoye et al., 2018)", "ref_id": "BIBREF31" }, { "start": 330, "end": 349, "text": "(Shen et al., 2017)", "ref_id": "BIBREF38" }, { "start": 363, "end": 385, "text": "(Laugier et al., 2021)", "ref_id": "BIBREF23" }, { "start": 517, "end": 535, "text": "(Cao et al., 2020)", "ref_id": "BIBREF3" }, { "start": 587, "end": 605, "text": "(Jin et al., 2020)", "ref_id": "BIBREF12" }, { "start": 695, "end": 716, "text": "(Martin et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Data-driven style transfer methods can be classified according to the kind of data they use: parallel or non-parallel corpora in the two styles (Jin et al., 2021) . To learn style transfer on non-parallel monostylistic corpora, current approaches take inspiration from unsupervised neural machine translation (UNMT) (Lample et al., 2018) , by exploiting cycle consistency loss (Lample et al., 2019) , iterative back-translation (Jin et al., 2019) and denoising autoencoders (DAE) (Laugier et al., 2021) . As these approaches are similar to UNMT they suffer from the same limitations, i.e. poor performance relative to supervised neural machine translation (NMT) systems when the amount of UNMT training data is small and/or exhibits domain mismatch (Kim et al., 2020) . Unfortunately, this is precisely the case for most existing style transfer corpora.", "cite_spans": [ { "start": 144, "end": 162, "text": "(Jin et al., 2021)", "ref_id": "BIBREF11" }, { "start": 316, "end": 337, "text": "(Lample et al., 2018)", "ref_id": "BIBREF21" }, { "start": 377, "end": 398, "text": "(Lample et al., 2019)", "ref_id": "BIBREF22" }, { "start": 428, "end": 446, "text": "(Jin et al., 2019)", "ref_id": "BIBREF13" }, { "start": 480, "end": 502, "text": "(Laugier et al., 2021)", "ref_id": "BIBREF23" }, { "start": 749, "end": 767, "text": "(Kim et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we follow an alternative approach inspired by self-supervised NMT (Ruiter et al., 2021) that jointly learns online (near) parallel sentence pair extraction (SPE), back-translation (BT) and style transfer in a loop. 
The goal is to identify and exploit supervisory signals present in limited amounts of (possibly domain-mismatched) nonparallel data ignored by UNMT. The architecture of our system-called Self-Supervised Style Transfer (3ST)-implements an online self-supervisory cycle, where learning SPE enables us to learn style transfer on extracted parallel data, which iteratively improves SPE and BT quality, and thereby style transfer learning, in a virtuous circle.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Ruiter et al., 2021)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate and compare 3ST to current state-ofthe-art (SOTA) style transfer models on two established tasks: formality and polarity style transfer, where 3ST is the most balanced model and reaches top overall performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To gain insights into the performance of 3ST on an under-explored task, we also focus on the civil rephrasing task, which is interesting as i) it has been explored only twice before (Nogueira dos Santos et al., 2018; Laugier et al., 2021) and ii) it makes an important societal contribution in order to tackle hateful content online. We focus on performance and qualitative analysis of 3ST predictions on this task's test set and identify shortcomings of the currently available data setup for civil rephrasing. On civil rephrasing, 3ST generates more neutral sentences than the current SOTA model while being on par in overall performance.", "cite_spans": [ { "start": 182, "end": 216, "text": "(Nogueira dos Santos et al., 2018;", "ref_id": "BIBREF30" }, { "start": 217, "end": 238, "text": "Laugier et al., 2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contribution is threefold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Efficient detection and exploitation of the supervisory signals in non-parallel social media content via jointly-learning online SPE and BT, outperforming SOTA models on averaged performance across civility, formality and polarity tasks in automatic and human evaluation (\u2206 in Tables 2 and 3 ).", "cite_spans": [], "ref_spans": [ { "start": 279, "end": 293, "text": "Tables 2 and 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Simple end-to-end training of a single online model without the need for additional external style-classifiers or external SPE, enabling the initialization of the 3ST network on a DAE task, which leads to SOTA-matching fluency scores during human evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 A qualitative analysis that identifies flaws in the current data, emphasizing the need for a high quality civil rephrasing corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Style transfer can be treated as a supervised translation task between two styles (Jhamtani et al., 2017) . However, for most style transfer tasks, parallel data is scarcely available. To learn style transfer without parallel data, prior research has focused on exploiting larger amounts of monostylistic data in combination with a smaller amount of style-labeled data. 
One such approach is using variational autoencoders and disentangled latent spaces (Fu et al., 2018) , which can be further incentivized towards generating fluent or style-relevant content by fusing them with adversarial (Shen et al., 2017) or style-enforcing (Hu et al., 2017) discriminators. Chawla and Yang (2020) use a language model as the discriminator, leading to a more informative signal to the generator during training and thus more fluent and stable results. Li et al. (2018) argue that adversarially learned outputs tend to be of low quality, and that most sentiment modification is based on simple deletion and replacement of relevant words. The above approaches focus on separating content and style, either in latent space or in surface form; however, this separation is difficult to achieve (Gonen and Goldberg, 2019). Dai et al. (2019) instead train a transformer together with a discriminator, without disentangling the style features before decoding. Current approaches treat style transfer similarly to an unsupervised neural machine translation task. Jin et al. (2019) create pseudo-parallel corpora by extracting similar sentences offline from two monostylistic corpora to train an initial NMT model, which is then iteratively improved using back-translation. Luo et al. (2019) use a reinforcement learning approach to further improve sentence fluency. Laugier et al. (2021) improve fluency without the need for any style-specific classifiers, giving their model a head start by initializing it on a pre-trained transformer model. Further work argues that standard NMT training cannot account for the small differences between informal and formal style, and applies style-specific decoder heads to enforce style differences.", "cite_spans": [ { "start": 82, "end": 105, "text": "(Jhamtani et al., 2017)", "ref_id": "BIBREF10" }, { "start": 453, "end": 470, "text": "(Fu et al., 2018)", "ref_id": "BIBREF6" }, { "start": 591, "end": 610, "text": "(Shen et al., 2017)", "ref_id": "BIBREF38" }, { "start": 629, "end": 646, "text": "(Hu et al., 2017)", "ref_id": "BIBREF9" }, { "start": 840, "end": 856, "text": "Li et al. (2018)", "ref_id": "BIBREF25" }, { "start": 1415, "end": 1432, "text": "Jin et al. (2019)", "ref_id": "BIBREF13" }, { "start": 1624, "end": 1641, "text": "Luo et al. (2019)", "ref_id": "BIBREF27" }, { "start": 1708, "end": 1729, "text": "Laugier et al. (2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach differs from the two-step approach of Jin et al. (2019) , who first extract similar sentences from style corpora offline and then initialize their system by training on them. Ruiter et al. (2020) show that joint online learning to extract and translate in self-supervised NMT (SSNMT) leads to higher recall and precision of the extracted data. Following this observation, our 3ST approach performs similar sentence extraction and style transfer learning online with a single model in a loop. We further extend the SSNMT-based approach by combining it with UNMT methods, namely by generating additional training data via online back-translation, and by initializing our models with a DAE trained in an unsupervised manner.", "cite_spans": [ { "start": 51, "end": 68, "text": "Jin et al. (2019)", "ref_id": "BIBREF13" }, { "start": 188, "end": 208, "text": "Ruiter et al.
(2020)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Self-Supervised Style Transfer (3ST) Figure 1 shows the 3ST architecture, which uses the encoder outputs at training time as sentence representations to perform online (near) parallel sentence pair extraction (SPE) together with online back-translation (BT) and style transfer.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Self-Supervised NMT (SSNMT): SSNMT (Ruiter et al., 2019) is an encoder-decoder architecture that jointly learns to identify parallel data in non-parallel data and bidirectional NMT. Instead of using SSNMT on different language corpora to learn machine translation, we show how ideas from SSNMT can be used to learn a self-supervised style transfer system from non-parallel social media content. A single bidirectional encoder simultaneously encodes both styles and maps the internal representations of the two styles into the same space. This way, they can be used to compute similarities between sentence pairs in order to identify similar and discard non-similar ones for training. Formally, given two monostylistic corpora S1 and S2 of opposing styles, e.g. toxic and neutral, sentence pairs (s S1 \u2208 S1, s S2 \u2208 S2) are input to an encoder-decoder system, a transformer in our experiments. From the internal representations for the input sentences s S1 and s S2 , SSNMT uses the sum of the word embeddings w(s) and the sum of the encoder outputs e(s) for filtering. The embedded pairs {w(s S1 ), w(s S2 )} are scored using the margin-based measure (Artetxe and Schwenk, 2019) . The same is done with pairs {e(s S1 ), e(s S2 )}. If a sentence pair is the most similar pair for both style directions and for both sentence representations, it is accepted for training, otherwise it is discarded. This sequence of scoring and filtering is denoted as sentence pair extraction (SPE) in 3ST. SPE improves style transfer and style transfer improves SPE online in a virtuous loop, resulting in a single system that jointly learns to identify its supervision signals in the data and to perform style transfer.", "cite_spans": [ { "start": 35, "end": 56, "text": "(Ruiter et al., 2019)", "ref_id": "BIBREF34" }, { "start": 1150, "end": 1177, "text": "(Artetxe and Schwenk, 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To address the characteristics of the monostylistic corpora we extend basic SSNMT in two ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Large-Scale Extraction: SSNMT extracts parallel data from comparable corpora, which contain smaller topic-aligned documents {d S1 , d S2 } of similar content, thus reducing the search space during SPE from |S1|\u00d7|S2| to |d S1 |\u00d7|d S2 |. However, style transfer corpora usually consist of large collections of (unaligned) sequences of a specific style, which forces the exploration of the full space. Improving over the one-by-one comparison of vector representations, we index 1 our data using FAISS (Johnson et al., 2019) . UNMT-Style Data Augmentation: We follow Ruiter et al. (2021) and use the current models' state to generate back-translations online from sentences rejected during SPE in order to increase the amount of supervisory signals to train on. 
Further, we initialize our style transfer models using denoising autoencoding using BART-style 2 noise (Lewis et al., 2020) . After pre-training a DAE on the stylistic corpora, our models will generate fluent English sentences from the beginning and only need to learn to separate the two styles S1 and S2 during style transfer learning.", "cite_spans": [ { "start": 499, "end": 521, "text": "(Johnson et al., 2019)", "ref_id": "BIBREF14" }, { "start": 564, "end": 584, "text": "Ruiter et al. (2021)", "ref_id": "BIBREF35" }, { "start": 862, "end": 882, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "4 Experimental Setup 4.1 Data Formality For the formality task, we use the test and development (dev) splits of the GYAFC corpus (Rao and Tetreault, 2018) , which is based on the Yahoo Answers L6 3 corpus. However, as GYAFC is a parallel corpus and we want to evaluate our models in a setup where only monostylistic data is available, we follow Rao and Tetreault (2018) and re-create the training split without downsampling and without creating parallel reference sentences. For this, we extract all answers from the Entertainment & Music and Family & Relationships domains in the Yahoo Answers L6 corpus. We use a BERT classifier fine-tuned on the GYAFC training split to classify sentences as either informal or formal. This leaves us with a much (46\u00d7) larger training split than the parallel GYAFC corpus, although consisting of non-parallel data where a single instance is less informative than a parallel one. We remove sentences from our training data that are matched with a sentence in the official test-dev splits. We deduplicate the test-dev splits to match those used by Jin et al. (2019) . For DAE pre-training, we sample sentences from Yahoo Answers L6.", "cite_spans": [ { "start": 129, "end": 154, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF32" }, { "start": 345, "end": 369, "text": "Rao and Tetreault (2018)", "ref_id": "BIBREF32" }, { "start": 1082, "end": 1099, "text": "Jin et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Polarity We use the standard train-dev-test splits 4 of the Yelp sentiment transfer task (Shen et al., 2017) . This dataset is already tokenized and lower-cased. Therefore, as opposed to the civility and formality tasks, we do not perform any additional pre-processing on this corpus. For DAE pretraining, we sample sentences from a generic Yelp corpus 5 and process them to fit the preprocessing of the Yelp sentiment transfer task, i.e. we lowercase and perform sentence and word tokenization using NLTK (Bird and Loper, 2004) .", "cite_spans": [ { "start": 89, "end": 108, "text": "(Shen et al., 2017)", "ref_id": "BIBREF38" }, { "start": 506, "end": 528, "text": "(Bird and Loper, 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Civility The civil rephrasing task is rooted in the broader domain of hate speech research, which commonly focuses on the detection of hateful, offensive, or profane contents . Besides deletion, moderation, and generating counterspeech (Tekiroglu et al., 2020), which are reactive measures after the abuse has already happened, there is a need for proactive ways of dealing with hateful contents to prevent harm (Jurgens et al., 2019) . 
Civil rephrasing is a novel approach to fight abusive or profane contents by suggesting civil rephrasings to authors before their comments are published. So far, civil rephrasing has been explored twice before (Nogueira dos Santos et al., 2018; Laugier et al., 2021) . However, their datasets are not publicly available. In order to compare the works, we reproduce the data sets used in Laugier et al. (2021) . We follow their approach and create our own train and dev splits on the Civil Comments 6 (CivCo) dataset. Style transfer learning requires distinct distributions in the two opposing style corpora. To increase the distinction in our toxic and neutral datasets, we filter them using a list of slurs 7 such that the toxic portion contains only sentences with at least one slur, and the neutral portion does not contain any slurs in the list. Laugier et al. (2021) kindly provided us with the original test set used in their study. We removed sentences contained in the test set from our corpus and split the remaining sentences into train and dev. To initialize 3ST on DAE with data related to the civility task domain, i.e. user comments, we sample sentences from generic Reddit comments crawled with PRAW 8 .", "cite_spans": [ { "start": 412, "end": 434, "text": "(Jurgens et al., 2019)", "ref_id": "BIBREF15" }, { "start": 647, "end": 681, "text": "(Nogueira dos Santos et al., 2018;", "ref_id": "BIBREF30" }, { "start": 682, "end": 703, "text": "Laugier et al., 2021)", "ref_id": "BIBREF23" }, { "start": 824, "end": 845, "text": "Laugier et al. (2021)", "ref_id": "BIBREF23" }, { "start": 1287, "end": 1308, "text": "Laugier et al. (2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Preprocessing On all datasets, excluding the polarity task data which is already preprocessed, we performed sentence tokenization using NLTK as well as punctuation normalization, tokenization and truecasing using standard Moses scripts (Koehn et al., 2007) . Following Rao and Tetreault (2018), we remove sentences containing URLs as well as those containing less than 5 or more than 25 words. For the civility task only, we allow longer sequences of up to 30 words due to the higher average sequence length in this task (Laugier et al., 2021) . We perform deduplication and language identification using polyglot 9 . We apply a bytepair encoding (Sennrich et al., 2016) of 8k mergeoperations. We add target style labels (e.g. ) to the beginning of each sequence. Table 1 summarizes all train, dev and test splits.", "cite_spans": [ { "start": 236, "end": 256, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF18" }, { "start": 521, "end": 543, "text": "(Laugier et al., 2021)", "ref_id": "BIBREF23" }, { "start": 647, "end": 670, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 769, "end": 776, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We base our 3ST code on OpenNMT (Klein et al., 2017) , using a transformer-base with standard parameters, a batch size of 50 sentences and a maximum sequence length of 100 sub-word units. All models are trained until the attribute transfer accuracy on the development set has converged. Each model is trained on a single Titan X GPU, which takes around 2-5 days for a 3ST model. For DAE pre-training, we use the task-specific DAE data split into 20M train sentences and 5k dev and test sentences each. 
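As a concrete illustration of the sequence preparation described in Section 4.1, the following is a minimal sketch that applies the URL and length filters and prepends a target style label; the tag strings and helper names are illustrative and not taken from the released code (Moses tokenization, truecasing and BPE are omitted).

```python
# Illustrative sketch of the final data preparation step from Section 4.1:
# keep sentences within the length window and prepend a target-style tag.
# The tag format is an assumption; the paper only states that target style
# labels are added to the beginning of each sequence.
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")

def prepare(sentences, target_style, min_len=5, max_len=25):
    """Length filter (in words; the upper bound is 30 for the civility task)
    plus target style tag."""
    prepared = []
    for sent in sentences:
        if URL_RE.search(sent):
            continue                      # sentences containing URLs are removed
        n_words = len(sent.split())
        if not min_len <= n_words <= max_len:
            continue
        prepared.append(f"<{target_style}> {sent}")
    return prepared

# e.g. prepare(informal_sentences, target_style="formal")
```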
To create the noisy source-side data, we apply BART-style noise with \u03bb = 3.5 and p = 0.35 for word sequence masking. We also add one random mask insertion per sequence and perform a sequence permutation.", "cite_spans": [ { "start": 32, "end": 52, "text": "(Klein et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Model Specifications", "sec_num": "4.2" }, { "text": "For BERT classifiers, which we use to automatically evaluate the attribute transfer accuracy, we fine-tune a bert-base-cased model on the relevant classification task using early stopping with \u03b4 = 0.01 and patience 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Specifications", "sec_num": "4.2" }, { "text": "While 3ST can perform style transfer bidirectionally, we only evaluate on the toxic\u2192neutral direction of the civility task, as the other direction, i.e. generation of toxic content, would pose a harmful application of our system. Similarly, the formality task is only evaluated for the informal\u2192formal direction, as this is the most common use-case (Rao and Tetreault, 2018) . The polarity task is evaluated in both directions. We compare our model against current SOTA models, among them the multi-class (MUL) and conditional (CON) style transformers of Dai et al. (2019) (model outputs provided by He et al., 2020). Content Preservation (CP) In style transfer, the aim is to change the style of a source sentence into a target style without changing the underlying meaning of the sentence. To evaluate CP, BLEU is a common choice, despite its inability to account for paraphrases (Wieting et al., 2019) , which are at the core of style transfer. Instead, we use Siamese Sentence Transformers 11 12 to embed the source and prediction and then calculate the cosine similarity.", "cite_spans": [ { "start": 349, "end": 374, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF32" }, { "start": 804, "end": 826, "text": "(Wieting et al., 2019)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "4.3" }, { "text": "Attribute Transfer Accuracy (ATA) We want to transfer the style of the source sentence to the target style or attributes. Whether this transfer was successful is calculated using a BERT classification model. We train and evaluate our classifiers on the same data splits as the style-transfer models. This yields classifiers with Macro-F1 scores of 93.2 (formality), 87.4 (civility) and 97.1 (polarity) on the task-specific development sets. ATA is the percentage of generated target sentences that were labeled as belonging to the target style by the task-specific classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "4.3" },
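Both metrics are straightforward to compute from model predictions; the following is a minimal sketch, assuming a generic Sentence-Transformers checkpoint for CP and a task-specific fine-tuned BERT classifier for ATA (checkpoint names and paths are placeholders, not the exact models used in the paper).

```python
# Sketch of the CP and ATA metrics described above. The checkpoint name and
# classifier path are placeholders; the paper uses Siamese Sentence
# Transformers for CP and a per-task fine-tuned bert-base-cased for ATA.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

sbert = SentenceTransformer("all-MiniLM-L6-v2")                 # placeholder
style_clf = pipeline("text-classification",
                     model="path/to/task-specific-bert")        # placeholder

def content_preservation(sources, predictions):
    """CP: average cosine similarity between source and prediction embeddings."""
    src = sbert.encode(sources, convert_to_tensor=True)
    hyp = sbert.encode(predictions, convert_to_tensor=True)
    return util.cos_sim(src, hyp).diagonal().mean().item()

def attribute_transfer_accuracy(predictions, target_label):
    """ATA: share of predictions labelled with the target style."""
    labels = [r["label"] for r in style_clf(predictions)]
    return sum(l == target_label for l in labels) / len(labels)
```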
{ "text": "As generated sentences should be intelligible and natural-sounding to a reader, we take their fluency into consideration during evaluation. The perplexity of a language model is often used to evaluate this (Krishna et al., 2020) . However, perplexity is unbounded and therefore difficult to interpret, and has the limitation of favoring potentially unnatural sentences containing frequent words (Mir et al., 2019) . We therefore use a RoBERTa (Liu et al., 2019) model 13 trained on CoLA (Warstadt et al., 2019) to label model predictions as either grammatical or ungrammatical.", "cite_spans": [ { "start": 206, "end": 228, "text": "(Krishna et al., 2020)", "ref_id": "BIBREF20" }, { "start": 395, "end": 413, "text": "(Mir et al., 2019)", "ref_id": "BIBREF29" }, { "start": 443, "end": 460, "text": "(Liu et al., 2019", "ref_id": null }, { "start": 533, "end": 556, "text": "(Warstadt et al., 2019)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Fluency (FLU)", "sec_num": null }, { "text": "Aggregation (AGG) CP, ATA and FLU are important dimensions of style-transfer evaluation. A good style transfer model should be able to perform well across all three metrics. To compare overall style-transfer performance, it is possible to aggregate these metrics into a single value (Li et al., 2018) . Krishna et al. (2020) show that corpus-level aggregation is less indicative of the overall performance of a system; we thus apply their sentence-level aggregation score, which ensures that each predicted sentence performs well across all measures, while penalizing predictions which are poor in at least one of the metrics. We also report the average AGG difference of a model m to 3ST across all tasks that m was tested on (\u2206).", "cite_spans": [ { "start": 283, "end": 300, "text": "(Li et al., 2018)", "ref_id": "BIBREF25" }, { "start": 303, "end": 324, "text": "Krishna et al. (2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Fluency (FLU)", "sec_num": null }, { "text": "The automatic evaluation relies on external models, which are sensitive to hyperparameter choices during training. However, we use the same evaluation models across all style transfer model predictions and supplement the automatic evaluation with a human evaluation. As we observe consistency between the automatic and human evaluation, the underlying models used for the automatic evaluation can be considered sufficiently reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency (FLU)", "sec_num": null }, { "text": "We compare the performance of 3ST with each of the two strongest baseline systems per task, chosen based on their aggregated scores achieved in the automatic evaluation. These are: CAE and IMT for comparison in the polarity task, DAR and IMT for the formality task and CAE for the civility task. Due to the large number of models in the polarity task, we also include CON and MUL in the human evaluation, as they are strongest on ATA and CP respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.4" }, { "text": "For each task, we sample 100 data points from the original test set and the corresponding predictions of the different models. We randomly duplicate 5 of the data points to calculate intra-rater agreement, resulting in a total of 105 evaluation sentences per system. Three fluent English speakers were asked to rate the content preservation, fluency and attribute transfer accuracy of the predictions on a 5-point Likert scale. In order to aggregate the different values, analogous to the automatic evaluation, we consider the transfer to be successful when a prediction was rated with a 4 or 5 across all three metrics (Li et al., 2018). The success rate (SR) is then defined as the ratio of successfully transferred instances over all instances.
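This all-or-nothing aggregation is easy to state in code; a minimal sketch of the success rate follows, with the 4-or-5 threshold of the human protocol (the automatic AGG score of Krishna et al. (2020) applies analogous per-sentence pass/fail decisions derived from the automatic metrics).

```python
# Minimal sketch of the sentence-level success rate (SR): a prediction only
# counts if it is rated 4 or 5 on all of CP, FLU and ATA.
def success_rate(ratings, threshold=4):
    """ratings: list of dicts with 1-5 scores under 'cp', 'flu', 'ata'."""
    hits = sum(
        all(r[m] >= threshold for m in ("cp", "flu", "ata")) for r in ratings
    )
    return hits / len(ratings)

# success_rate([{"cp": 5, "flu": 4, "ata": 4}, {"cp": 3, "flu": 5, "ata": 5}])
# -> 0.5
```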
We also report the cross-task average SR difference of a model to 3ST (\u2206). All inter-rater agreements, calculated using Krippendorff-\u03b1, lie above 0.7, except for cases where most samples were annotated repeatedly with the same justified rating (e.g. a continuous FLU rating of 4) due to the underlying data distribution, which is sanctioned by the Krippendorff measure. Intra-rater agreement is at an average of 0.928 across all raters. A more detailed description of the evaluation task and a listing of the taskand rater-specific \u03b1-values is given in the appendix. For the ratings themselves, we calculate pair-wise statistical significance between SOTA models and 3ST using the Wilcoxon T test (p < 0.05). Table 2 provides an overview of the CP, FLU, ATA and AGG results of all compared models across the three tasks.", "cite_spans": [ { "start": 620, "end": 636, "text": "(Li et al., 2018", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 1438, "end": 1445, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.4" }, { "text": "Civility On attribute transfer accuracy, 3ST improves by +7.8 points over CAE, while CAE is stronger in content preservation (+3.7) and fluency (+5.3). There is, however, no statistically significant difference in the overall aggregated per-formance of the models, indicating that they are equivalent in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Formality 3ST substantially outperforms SOTA models in all four categories, with an overall performance (AGG) that surpasses the top-scoring SOTA model (IMT) by +9.5 points. This is indicative, as IMT was trained on a shuffled version of the parallel GYAFC corpus, which contains highly informative human written paraphrases, while 3ST was trained on a truly non-parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Polarity The polarity task has more recent SO-TAs to compare to, and the results show that no single model is best in all three categories. While MUL is strongest in content preservation (62.6), its fluency is low and outperformed by 3ST by +38.7 points, leading to a much lower overall performance (AGG) in comparison to 3ST (+14.9). Similarly, CON is strongest in attribute transfer accuracy (91.3) but has a low fluency (32.5), leading to a lower aggregated score than 3ST (+18). IMT is the strongest SOTA model with an overall performance (AGG) of 29.6 and the highest fluency score (84.4). Nevertheless, it is outperformed by 3ST by +5.7 points on overall performance (AGG), which is due to the comparatively better performance in content preservation (+13.2) of 3ST. Interestingly, unsupervised NMT (UMT) performs equally well on attribute transfer accuracy, while being slightly outperformed by 3ST in content preservation (+0.9). This may be due to the information-rich parallel instances automatically found in training by the SPE module. Further, 3ST has a much higher fluency than UMT (+25.3), which is due to its DAE pre-training. While 3ST is not top-performing in any of the three metrics CP, FLU and ATA, its top-scoring overall performance (AGG) shows that it is the most balanced model. Table 2 shows that 3ST outperforms each of the SOTA models fielded in a single task (CON, DLS, MUL, UMT) by the respective AGG \u2206, and all other models (CAE, DAR, IMT, SCA) on average AGG \u2206 14 . 
(For example, \u2206(DAR, 3ST) = 14.) 3ST achieves high levels of FLU, with ATA in the mid-to-high 80s, which is clear testimony to successful style transfer.", "cite_spans": [], "ref_spans": [ { "start": 1304, "end": 1311, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Human evaluation shows that 3ST has a high level of fluency, as it either outperforms or is on par with current SOTA models across all three tasks (Table 3) , with ratings between 4.05 (civility) and 4.58 (polarity), and gains of up to +1.42 (DAR, formality) points. According to the annotation protocol, ratings of 4 and 5 describe content written by native speakers; thus, annotators deemed most generated sentences to have been written by a native speaker of English.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 183, "text": "(Table 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "5.2" }, { "text": "For content preservation and attribute transfer, there seems to be a trade-off. In the formality task, 3ST outperforms or is on par with current SOTAs on CP, with gains between +0.26 (IMT) and +1.0 (DAR) points, and ATA is on par with the SOTA (\u22120.01, IMT). Note that for all models tested on the formality task, the success rate is low. This is due to the nature of the training data, where many sentences in the formal portion of the dataset tend to be rather neutral, i.e. neither formal nor informal, rather than truly formal sentences. For the civility task, on the other hand, 3ST outperforms the current SOTA on ATA with gains of +0.53 (CAE) while being on par on CP (\u22120.17). For the polarity task, the CP is slightly below the best model (\u22120.35, MUL).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "5.2" }, { "text": "While some models are strong on single values, 3ST has the highest success rate (SR) across all tasks. 3ST outperforms each of the single-task models (DAR, CON, MUL) on SR by \u2206 and each of the multitask models (CAE, IMT) by average cross-task SR \u2206, again highlighting that it balances best between the three capabilities CP, FLU and ATA, which leads to best-performing style transfer predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "5.2" }, { "text": "Table 4 (excerpt, Ex-7): SRC: There was no consensus, 1 idiot and everyone else in the situation let him know he was in the wrong. CAE: there was no consensus, no one in the room and everyone in the room knew he was in the wrong place. 3ST: No, there was no consensus in the past, and everyone else knew he was in the wrong place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "5.2" }, { "text": "For our qualitative analysis, we focus on the civility task as this is a challenging, novel task and we want to understand its limitations. We analyze the same subset of the test set used for human evaluation and annotate common mistakes. Common errors in the neutral counterparts generated by 3ST can be classified into four classes. We observe fluency or structural errors (11% of sentences), e.g. a subject becoming a direct form of address (Table 4 , Ex-1). Attribute errors (14%) (Ex-2), where toxic content was not successfully removed, are another common source of error. Similarly to Laugier et al. (2021) , we observe stance reversal (14%), i.e.
where a usually negative opinion in the original source sentence is reversed to a positive polarity (Ex-3). This is due to a negativity bias on the toxic side of the CivCo corpus, while the neutral side contains more positive sentences, thus introducing an incentive to translate negative sentiment to positive sentiment. Unlike Laugier et al. 2021, we do not observe that hallucinations are most frequent at the end of a sequence (supererogation). Rather, related hallucinations, where unnecessary content is mixed with words from the original source sentence, are found at arbitrary positions (23%, Ex-4, CAE). We observe few hallucinations where a prediction has no relation with the source (4%, Ex-5). Phenomena such as hallucinations can become amplified through back-translation (Raunak et al., 2021) . However, as they are most prevalent in the civility task, hallucinations in this case are likely originally triggered by long source sentences that i) overwhelm the current models' capacity, and ii) add additional noise to the training. It is less likely that a complex sentence has a perfect rephrasing to match with and therefore instead it will match with a similar rephrasing that introduces additional content, i.e. noise. For reference, the average length of source sentences that triggered hallucinations was 21.9 words, while for adequate re-writings (39%), it was 8 words. Note that we capped sentence lengths to 30 words in the training data while the test data contained sentences with up to 85 words.", "cite_spans": [ { "start": 592, "end": 613, "text": "Laugier et al. (2021)", "ref_id": "BIBREF23" }, { "start": 1440, "end": 1461, "text": "(Raunak et al., 2021)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 444, "end": 452, "text": "(Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "5.3" }, { "text": "Successful rephrasings are usually due to one of two factors. 3ST either replaces profane words by their neutral counterparts (Ex-{4,6}) or removes them (Ex-7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "5.3" }, { "text": "To analyze the contribution of the three main components (SPE, BT and DAE) of 3ST, we remove them individually from the original architecture and observe the performance of the resulting models on the three different tasks (Table 5) . Without SPE, the model merely copies source sentences without performing style transfer, resulting in a large drop in overall performance (AGG). This shows in the low ATA scores (1.9-14.8), which are in direct correlation with the extremely high scores in CP (89.5-100.0) achieved by this model. This underlines that SPE is vital to the style-transfer capabilities of 3ST, as it retrieves similar paraphrases from the style corpora and lets 3ST train on these. This pushes the system to generate back-translations which themselves are paraphrases that fulfill the style-transfer task. BT and DAE are integral parts of 3ST, too, that improve over the underlying self-supervised neural machine translation (-BT-DAE) approach. This can be seen in the drastic drops of CP and FLU scores when BT and DAE techniques are removed. Especially DAE is important for the fluency of the model. 
The gains in CP and FLU through BT and DAE come at a minor drop in ATA.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 232, "text": "(Table 5)", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "5.4" }, { "text": "3ST is a style transfer architecture that efficiently uses the supervisory signals present in non-parallel social media content by i) jointly learning style transfer and similar sentence extraction during training, ii) using online back-translation and iii) DAE-based initialization. 3ST achieves strong results on all three metrics FLU, ATA and CP, outperforming SOTA models on averaged performance (\u2206) across their tested tasks in automatic (AGG) and human (SR) evaluation. We present one of the first studies on automatic civil rephrasing and, importantly, identify current weaknesses in the data, which lead to limitations in 3ST and other SOTA models on the civil rephrasing task. Our code and model predictions are publicly available at https://github.com/uds-lsv/3ST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We perform a human evaluation to assess the quality of the top-performing models according to automatic metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Human Evaluation Task", "sec_num": null }, { "text": "We select 3 systems for Formality, 5 systems for Polarity and the only 2 systems available for the Civility task. For each of these tasks, we sample 100 data points from the original test set and the corresponding predictions of the different models. We randomly duplicate 5 of the points for quality control, resulting in evaluation tests with 105 sentences per system. Three fluent English speakers (raters) were shown source-prediction pairs and were asked to rate the content preservation, fluency and attribute transfer accuracy of the predictions on a 5-point Likert scale. Raters were paid around 10 Euros per hour of work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Human Evaluation Task", "sec_num": null }, { "text": "We calculate the reliability of the ratings using Krippendorff-\u03b1 (Krippendorff, 2004) . Table 6 shows the inter-rater agreement measured by \u03b1 for content preservation (CP), fluency (FLU) and attribute transfer accuracy (ATA). (Figure 2 caption: FLU, CP and ATA of generated back-translations (BTs) during training of 3ST on the three transfer tasks.)", "cite_spans": [ { "start": 50, "end": 85, "text": "Krippendorff-\u03b1 (Krippendorff, 2004)", "ref_id": null } ], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 6", "ref_id": "TABREF10" }, { "start": 191, "end": 199, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A Human Evaluation Task", "sec_num": null }, { "text": "Notice that \u03b1 differs considerably between tasks. The lower \u03b1 on polarity CP and formality ATA is due to repetitive ratings of the same kind, i.e. 4 or 5 on polarity CP and 3 on formality ATA, which is penalized by the Krippendorff measure. For the intra-rater agreement estimated from 40 duplicated sentences per rater, we obtain values of 0.988 (Rater-1), 0.869 (Rater-2) and 0.927 (Rater-3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Human Evaluation Task", "sec_num": null }, { "text": "The back-translations that 3ST generates during training give us a direct insight into the changing state of the model throughout the training process.
We thus automatically evaluate ATA, FLU and CP on the back-translations over time. BT fluency (Figure 2 , top) on all three tasks is strong already at the beginning of training, due to the DAE pre-training. For the formality and polarity task, the high level of FLU remains stable (\u223c 80) throughout training, while for Civility it slightly drops. This underlines the observation that the Civility task is prone to hallucinations due to the sparse amount of parallel supervisory signals in the dataset, which then leads to lower FLU scores.", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 255, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "B Performance Evolution", "sec_num": null }, { "text": "For all tasks, content preservation between the generated BTs and the source sentences is already high at the beginning of training. This is due to the DAE pre-training which taught the models to copy and denoise inputs. All of the models decay in CP over time, showing that they are slowly diverging from merely copying inputs. CP scores of the formality and the polarity tasks are close to convergence at around 1M train steps, while the scores of the civility task keep on decaying. This may again be due to the complexity of the data of the toxicity task, which contains longer sequences than the other two. This can lead to hallucinations when supervisory signals are lacking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Performance Evolution", "sec_num": null }, { "text": "As back-translation CP decays, attribute transfer accuracy increases dramatically. Especially on the civility task, where the initial accuracy is low (8.2%) but grows to ATA \u223c82%. For the other two tasks, the curves are less steep, and most of the transfer is learned at the beginning, within the first 300k generated BTs, after which they converge with ATA \u223c95% (formality) and \u223c88% (polarity). This shows the trade-off between attribute accuracy and content preservation: the higher the ATA, the lower the CP score. Nevertheless, as ATA converges earlier than CP (for formality and polarity tasks), an earlier training stop can easily benefit content preservation while having little impact on the already converged ATA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Performance Evolution", "sec_num": null }, { "text": "For each of the three tasks, Civility, Formality and Polarity, we randomly sample 5 source sentences from the respective test sets. In Table 7 we present these source sentences together with the corresponding prediction of 3ST and the two bestscoring SOTA models with respect to the AGG score per task, namely CAE for Civility, DAR and IMT for Formality and CAE and IMT for Polarity.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "C Sample Predictions", "sec_num": null }, { "text": "As our internal representations change during the course of training, we re-index at each iteration over the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is algorithmically equivalent to using a common pretrained BART model for initialization, with the benefit that we have full control on the vocabulary size and data it is pretrained on. 
We use this benefit by focusing the pre-training on in-domain data instead of generic out-of-domain data.3 www.webscope.sandbox.yahoo.com/ catalog.php?datatype=l", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.github.com/shentianxiao/ language-style-transfer 5 www.yelp.com/dataset 6 www.tensorflow.org/datasets/catalog/ civil_comments 7 www.cs.cmu.edu/~biglou/resources/ bad-words.txt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.praw.readthedocs.io/en/latest/ 9 www.github.com/aboSamoor/polyglot", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We want to thank the annotators for their keen work. Partially funded by the DFG (WI 4204/3-1), German Federal Ministry of Education and Research (01IW20010) and the EU Horizon 2020 project COMPRISE (3081705). The author is responsible for the content of this publication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An effective approach to unsupervised machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "194--203", "other_ids": { "DOI": [ "10.18653/v1/P19-1019" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine trans- lation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 194-203, Florence, Italy. ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Marginbased parallel corpus mining with multilingual sentence embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3197--3203", "other_ids": { "DOI": [ "10.18653/v1/P19-1309" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Margin- based parallel corpus mining with multilingual sen- tence embeddings. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3197-3203, Florence, Italy. ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "NLTK: The natural language toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "214--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The natu- ral language toolkit. In Proceedings of the ACL In- teractive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Expertise style transfer: A new task towards better communication between experts and laymen", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ruihao", "middle": [], "last": "Shui", "suffix": "" }, { "first": "Liangming", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1061--1071", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.100" ] }, "num": null, "urls": [], "raw_text": "Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise style transfer: A new task towards better communi- cation between experts and laymen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1061-1071, On- line. ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semi-supervised formality style transfer using language model discriminator and mutual information maximization", "authors": [ { "first": "Kunal", "middle": [], "last": "Chawla", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2340--2354", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.212" ] }, "num": null, "urls": [], "raw_text": "Kunal Chawla and Diyi Yang. 2020. Semi-supervised formality style transfer using language model dis- criminator and mutual information maximization. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 2340-2354, Online. ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Style transformer: Unpaired text style transfer without disentangled latent representation", "authors": [ { "first": "Ning", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Jianze", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5997--6007", "other_ids": { "DOI": [ "10.18653/v1/P19-1601" ] }, "num": null, "urls": [], "raw_text": "Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 5997- 6007, Florence, Italy. 
ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Style transfer in text: Exploration and evaluation", "authors": [ { "first": "Zhenxin", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Xiaoye", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "volume": "32", "issue": "", "pages": "663--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), volume 32, pages 663-670, New Orleans, LA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", "authors": [ { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "609--614", "other_ids": { "DOI": [ "10.18653/v1/N19-1061" ] }, "num": null, "urls": [], "raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609-614, Minneapolis, MN. ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A probabilistic formulation of unsupervised text style transfer", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. In Proceedings of ICLR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Toward controlled generation of text", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1587--1596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward con- trolled generation of text. 
In Proceedings of the 34th International Conference on Machine Learning, vol- ume 70 of Proceedings of Machine Learning Re- search, pages 1587-1596, International Convention Centre, Sydney, Australia. PMLR.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Shakespearizing modern language using copy-enriched sequence to sequence models", "authors": [ { "first": "Harsh", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Workshop on Stylistic Variation", "volume": "", "issue": "", "pages": "10--19", "other_ids": { "DOI": [ "10.18653/v1/W17-4902" ] }, "num": null, "urls": [], "raw_text": "Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence to sequence models. In Proceedings of the Workshop on Stylistic Variation, pages 10-19, Copenhagen, DK. ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Deep learning for text style transfer: A survey", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2021. Deep learning for text style transfer: A survey. CoRR, abs/2011.00416.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Hooks in the headline: Learning to generate headlines with controlled styles", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Orii", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5082--5093", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.456" ] }, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, Lisa Orii, and Peter Szolovits. 2020. Hooks in the headline: Learn- ing to generate headlines with controlled styles. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5082- 5093, Online. 
ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "IMaT: Unsupervised text attribute transfer via iterative matching and translation", "authors": [ { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Santus", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3097--3109", "other_ids": { "DOI": [ "10.18653/v1/D19-1306" ] }, "num": null, "urls": [], "raw_text": "Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, and Enrico Santus. 2019. IMaT: Unsupervised text attribute transfer via iterative matching and trans- lation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3097-3109, Hong Kong, China. ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Billion-scale similarity search with gpus", "authors": [ { "first": "Jeff", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Matthijs", "middle": [], "last": "Douze", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2019, "venue": "IEEE Transactions on Big Data", "volume": "7", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A just and comprehensive strategy for using NLP to address online abuse", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Libby", "middle": [], "last": "Hemphill", "suffix": "" }, { "first": "Eshwar", "middle": [], "last": "Chandrasekharan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3658--3666", "other_ids": { "DOI": [ "10.18653/v1/P19-1357" ] }, "num": null, "urls": [], "raw_text": "David Jurgens, Libby Hemphill, and Eshwar Chan- drasekharan. 2019. A just and comprehensive strat- egy for using NLP to address online abuse. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3658- 3666, Florence, Italy. ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "When and why is unsupervised neural machine translation useless?", "authors": [ { "first": "Yunsu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "35--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yunsu Kim, Miguel Gra\u00e7a, and Hermann Ney. 2020. When and why is unsupervised neural machine trans- lation useless? 
In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 35-44, Lisboa, Portugal. Euro- pean Association for Machine Translation.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "OpenNMT: Opensource toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Pro- ceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Content Analysis, an Introduction to Its Methodology", "authors": [ { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Krippendorff. 2004. Content Analysis, an In- troduction to Its Methodology. 
Sage Publications, Thousand Oaks, Calif.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Reformulating unsupervised style transfer as paraphrase generation", "authors": [ { "first": "Kalpesh", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "737--762", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.55" ] }, "num": null, "urls": [], "raw_text": "Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as para- phrase generation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 737-762, Online. ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Unsupervised machine translation using monolingual corpora only", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised ma- chine translation using monolingual corpora only. In 6th International Conference on Learning Represen- tations, ICLR, Vancouver, BC, Canada.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multipleattribute text rewriting", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Eric", "middle": [ "Michael" ], "last": "Smith", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations, ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Sandeep Subramanian, Eric Michael Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple- attribute text rewriting. In 7th International Conference on Learning Representations, ICLR, New Orleans, LA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Civil rephrases of toxic texts with self-supervised transformers", "authors": [ { "first": "L\u00e9o", "middle": [], "last": "Laugier", "suffix": "" }, { "first": "John", "middle": [], "last": "Pavlopoulos", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1442--1461", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00e9o Laugier, John Pavlopoulos, Jeffrey Sorensen, and Lucas Dixon. 2021. 
Civil rephrases of toxic texts with self-supervised transformers. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1442-1461, Online. ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1865--1874", "other_ids": { "DOI": [ "10.18653/v1/N18-1169" ] }, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to senti- ment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874, New Orleans, Louisiana. 
ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A dual reinforcement learning framework for unsupervised text style transfer", "authors": [ { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19", "volume": "", "issue": "", "pages": "5116--5122", "other_ids": { "DOI": [ "10.24963/ijcai.2019/711" ] }, "num": null, "urls": [], "raw_text": "Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Xu Sun, and Zhifang Sui. 2019. A dual re- inforcement learning framework for unsupervised text style transfer. In Proceedings of the Twenty- Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5116-5122. Interna- tional Joint Conferences on Artificial Intelligence Organization.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "\u00c9ric de la Clergerie, Antoine Bordes, and Beno\u00eet Sagot. 2020. Multilingual unsupervised sentence simplification", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Louis Martin, Angela Fan, \u00c9ric de la Clergerie, An- toine Bordes, and Beno\u00eet Sagot. 2020. Multilin- gual unsupervised sentence simplification. CoRR, abs/2005.00352.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Evaluating style transfer for text", "authors": [ { "first": "Remi", "middle": [], "last": "Mir", "suffix": "" }, { "first": "Bjarke", "middle": [], "last": "Felbo", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Obradovich", "suffix": "" }, { "first": "Iyad", "middle": [], "last": "Rahwan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "495--504", "other_ids": { "DOI": [ "10.18653/v1/N19-1049" ] }, "num": null, "urls": [], "raw_text": "Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 495-504, Minneapolis, MN. 
ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Fighting offensive language on social media with unsupervised text style transfer", "authors": [ { "first": "Cicero", "middle": [], "last": "Nogueira Dos Santos", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Melnyk", "suffix": "" }, { "first": "Inkit", "middle": [], "last": "Padhi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "189--194", "other_ids": { "DOI": [ "10.18653/v1/P18-2031" ] }, "num": null, "urls": [], "raw_text": "Cicero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 189-194, Melbourne, Australia. ACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Style transfer through back-translation", "authors": [ { "first": "Yulia", "middle": [], "last": "Shrimai Prabhumoye", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "866--876", "other_ids": { "DOI": [ "10.18653/v1/P18-1080" ] }, "num": null, "urls": [], "raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhut- dinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866-876, Melbourne, Australia. ACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "129--140", "other_ids": { "DOI": [ "10.18653/v1/N18-1012" ] }, "num": null, "urls": [], "raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, bench- marks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129-140, New Or- leans, LA. 
ACL.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The curious case of hallucinations in neural machine translation", "authors": [ { "first": "Vikas", "middle": [], "last": "Raunak", "suffix": "" }, { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "Marcin Junczys-Dowmunt", "middle": [], "last": "", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1172--1183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikas Raunak, Arul Menezes, and Marcin Junczys- Dowmunt. 2021. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1172-1183, Online. ACL.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Self-supervised neural machine translation", "authors": [ { "first": "Dana", "middle": [], "last": "Ruiter", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1828--1834", "other_ids": { "DOI": [ "10.18653/v1/P19-1178" ] }, "num": null, "urls": [], "raw_text": "Dana Ruiter, Cristina Espa\u00f1a-Bonet, and Josef van Gen- abith. 2019. Self-supervised neural machine transla- tion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1828-1834, Florence, Italy. ACL.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Integrating unsupervised data generation into self-supervised neural machine translation for low-resource languages", "authors": [ { "first": "Dana", "middle": [], "last": "Ruiter", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "" } ], "year": 2021, "venue": "Proceedings of Machine Translation Summit XVIII: Research Track", "volume": "", "issue": "", "pages": "76--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dana Ruiter, Dietrich Klakow, Josef van Genabith, and Cristina Espa\u00f1a-Bonet. 2021. Integrating unsuper- vised data generation into self-supervised neural ma- chine translation for low-resource languages. In Pro- ceedings of Machine Translation Summit XVIII: Re- search Track, pages 76-91, Virtual. Association for Machine Translation in the Americas.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Self-induced curriculum learning in self-supervised neural machine translation", "authors": [ { "first": "Dana", "middle": [], "last": "Ruiter", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2560--2571", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.202" ] }, "num": null, "urls": [], "raw_text": "Dana Ruiter, Josef van Genabith, and Cristina Espa\u00f1a- Bonet. 2020. 
Self-induced curriculum learning in self-supervised neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2560-2571, Online. ACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. ACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Style transfer from non-parallel text by cross-alignment", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "6830--6841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Infor- mation Processing Systems, volume 30, pages 6830- 6841. Curran Associates, Inc.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Generating counter narratives against online hate speech: Data and strategies", "authors": [ { "first": "Yi-Ling", "middle": [], "last": "Serra Sinem Tekiroglu", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Chung", "suffix": "" }, { "first": "", "middle": [], "last": "Guerini", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1177--1190", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.110" ] }, "num": null, "urls": [], "raw_text": "Serra Sinem Tekiroglu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1177-1190, On- line. 
ACL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Formality style transfer with shared latent space", "authors": [ { "first": "Yunli", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wen-Han", "middle": [], "last": "Chao", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2236--2249", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.203" ] }, "num": null, "urls": [], "raw_text": "Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, and Wen- Han Chao. 2020. Formality style transfer with shared latent space. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2236-2249, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Neural network acceptability judgments", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "625--641", "other_ids": { "DOI": [ "10.1162/tacl_a_00290" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R. Bow- man. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Beyond BLEU:training neural machine translation with semantic similarity", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4344--4355", "other_ids": { "DOI": [ "10.18653/v1/P19-1427" ] }, "num": null, "urls": [], "raw_text": "John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019. Beyond BLEU:training neural machine translation with semantic similarity. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 4344- 4355, Florence, Italy. 
ACL.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5753--5763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for lan- guage understanding. In Advances in neural informa- tion processing systems, pages 5753-5763.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Civility SRC It is time to impeach this idiot judge. CAE it is time to impeach this judge", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Civility SRC It is time to impeach this idiot judge. CAE it is time to impeach this judge. 3ST It is time to impeach this judge.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "CAE this is classic case of corporate welfare and collective bargaining. 3ST This is classic example of collective corporate greed and individual managerial malice. SRC You silly goose! CAE you mean the goose, right? 3ST You forgot the goose! SRC Afraid of how idiotic social engineering makes people look? CAE imagine how socially acceptable some of the people make?", "authors": [], "year": null, "venue": "SRC This is classic example of collective corporate stupidity and individual managerial malice", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SRC This is classic example of collective corporate stupidity and individual managerial malice. CAE this is classic case of corporate welfare and collective bargaining. 3ST This is classic example of collective corporate greed and individual managerial malice. SRC You silly goose! CAE you mean the goose, right? 3ST You forgot the goose! SRC Afraid of how idiotic social engineering makes people look? CAE imagine how socially acceptable some of the people make? 3ST Afraid of how social engineering works.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "3ST Not a good idea", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "3ST Not a good idea.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Formality SRC haha julesac is funny, but mean. DAR is funny , but I understand what you mean . IMT That is funny . Those silly people annoy me ! 3ST Julesac is very funny", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Formality SRC haha julesac is funny, but mean. DAR is funny , but I understand what you mean . IMT That is funny . Those silly people annoy me ! 3ST Julesac is very funny.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "SHE WILL LEARN TO DEAL WITH IT . 
IMT TELL HER YOUR TRUE FEELINGS , IT MAY SHOCK HER BUT WILL WORK . 3ST Do NOT LET HER RUN WITH YOU, SHE WILL NEVER HAVE TO WORK", "authors": [ { "first": "", "middle": [], "last": "Src Don't Let Her Rule Your", "suffix": "" }, { "first": "She", "middle": [], "last": "Life", "suffix": "" }, { "first": "", "middle": [], "last": "Will", "suffix": "" }, { "first": "", "middle": [], "last": "Have", "suffix": "" }, { "first": "", "middle": [], "last": "Learn", "suffix": "" }, { "first": "", "middle": [], "last": "Deal", "suffix": "" }, { "first": "", "middle": [], "last": "It", "suffix": "" }, { "first": "", "middle": [], "last": "Dar", "suffix": "" }, { "first": "", "middle": [], "last": "Her", "suffix": "" }, { "first": "", "middle": [], "last": "Be", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SRC DON'T LET HER RULE YOUR LIFE, SHE WILL JUST HAVE TO LEARN TO DEAL WITH IT. DAR LET HER BE , SHE WILL LEARN TO DEAL WITH IT . IMT TELL HER YOUR TRUE FEELINGS , IT MAY SHOCK HER BUT WILL WORK . 3ST Do NOT LET HER RUN WITH YOU, SHE WILL NEVER HAVE TO WORK. SRC cause it's buy one take one.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "DAR I can not wait to buy one take one . IMT Because it is buy one take one", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DAR I can not wait to buy one take one . IMT Because it is buy one take one . 3ST You can buy one.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "SRC All my votes are going to Taylor Hicks though", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SRC All my votes are going to Taylor Hicks though...", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "DAR All my votes are , and I am going to Hicks IMT All my votes are going to Taylor . 3ST All my votes are going to be Taylor Hicks", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DAR All my votes are , and I am going to Hicks IMT All my votes are going to Taylor . 3ST All my votes are going to be Taylor Hicks. SRC but paris hilton isn't far behind.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "DAR I do not know but is n't far behind", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DAR I do not know but is n't far behind .", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "IMT I ca n't read the stars , just find another way to say it", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "IMT I ca n't read the stars , just find another way to say it . 3ST Paris hilton is far behind.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Polarity SRC even if i was insanely drunk , i could n't force this pizza down . CAE even if i was n't in the mood , i loved this place . IMT honestly , i could n't stop eating it because it was so good ! 3ST even if i was drunk , i could still force myself", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Polarity SRC even if i was insanely drunk , i could n't force this pizza down . 
CAE even if i was n't in the mood , i loved this place . IMT honestly , i could n't stop eating it because it was so good ! 3ST even if i was drunk , i could still force myself .", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "CAE great massage with great pedicure and manicure . IMT awesome relaxation and massage with my pedicure . 3ST great massage with my manicure and pedicure . SRC excellent knowledgeable dentist and staff ! CAE excellent dentist and dental hygienist ! ! ! ! IMT not very knowledgeable staff ! 3ST horrible dentist and staff ! SRC do not go here if you are interested in eating good food . CAE definitely recommend this place if you are looking for good food at a good price . IMT if you are looking for consistent delicious food go here", "authors": [], "year": null, "venue": "SRC i will definitely return often ! CAE i will not return often ! ! ! ! IMT i will definitely not return ! 3ST i will not return often ! SRC no massage with my manicure or pedicure", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SRC i will definitely return often ! CAE i will not return often ! ! ! ! IMT i will definitely not return ! 3ST i will not return often ! SRC no massage with my manicure or pedicure . CAE great massage with great pedicure and manicure . IMT awesome relaxation and massage with my pedicure . 3ST great massage with my manicure and pedicure . SRC excellent knowledgeable dentist and staff ! CAE excellent dentist and dental hygienist ! ! ! ! IMT not very knowledgeable staff ! 3ST horrible dentist and staff ! SRC do not go here if you are interested in eating good food . CAE definitely recommend this place if you are looking for good food at a good price . IMT if you are looking for consistent delicious food go here . 3ST if you are looking for good food , this is the place to go . Table 7: Examples of 3ST and SOTA model predictions.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "3ST: joint learning of style transfer, SPE, and BT.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": ", unsupervised machine translation (UMT) (Lample et al., 2019) 10 as well as models by Li et al. (2018) (DAR), Jin et al. (2019) (IMT), Laugier et al. (2021) (CAE), He et al. (2020) (DLA) and Shen et al. (2017) (SCA). Our automatic evaluation focuses on four main aspects:", "uris": null, "num": null }, "TABREF0": { "num": null, "text": "Number of sentences of the different tasks train, dev and test splits, as well as average number of tokens per sequence (\u2205) of the tokenized test sets. Splits with target references available are underlined.", "type_str": "table", "content": "
Corpus | Train | Dev | Test | \u2205
CivCo-Neutral | 3,148,351 | 5,665 | 2,741 | 12.4
CivCo-Toxic | 136,618 | 500 | - | -
Yahoo-Formal | 399,691 | 500 | 4,878 | 14.9
Yahoo-Informal | 1,737,043 | 4,603 | 2,100 | 12.7
Yelp-Pos | 266,041 | 2,000 | 500 | 9.9
Yelp-Neg | 177,218 | 2,000 | 500 | 10.7
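The \u2205 column (average number of tokens per sequence of the tokenized test sets) amounts to a simple per-line token count. A minimal sketch, assuming the test files are already tokenized with one sequence per line; the file name used below is hypothetical:

```python
# Minimal sketch: average tokens per sequence (the table's ∅ column),
# assuming a pre-tokenized test file with one sequence per line.
def avg_tokens_per_sequence(path: str) -> float:
    lengths = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.split()  # already-tokenized text, whitespace-separated
            if tokens:
                lengths.append(len(tokens))
    return sum(lengths) / len(lengths) if lengths else 0.0

# Hypothetical file name; the table reports ∅ = 10.7 for Yelp-Neg.
print(round(avg_tokens_per_sequence("yelp-neg.test.tok"), 1))
```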
", "html": null }, "TABREF2": { "num": null, "text": "). The success rate", "type_str": "table", "content": "
Task | Model | CP | FLU | ATA | AGG | \u2206
Civ. | CAE | *64.2 | *80.6 | *81.9 | 39.8 | -2.9
Civ. | 3ST | 60.5 | 75.3 | 89.7 | 39.0 | 0.0
For. | DAR | *64.5 | *27.9 | *66.0 | *14.2 | -30.0
For. | IMT | *71.5 | *73.1 | *79.2 | *45.2 | -7.6
For. | SCA | *54.4 | *14.7 | *27.4 | *4.0 | -40.3
For. | 3ST | 75.6 | 83.1 | 84.9 | 54.7 | 0.0
Pol. | CAE | *48.3 | *76.4 | *84.3 | *28.7 | -2.9
Pol. | CON | *57.5 | *32.5 | *91.3 | *17.3 | -18.0
Pol. | DAR | *50.4 | *32.7 | *87.8 | *15.8 | -30.0
Pol. | DLS | *50.9 | *50.4 | 85.3 | *20.1 | -15.2
Pol. | IMT | *42.5 | *84.4 | *84.6 | *29.6 | -7.6
Pol. | MUL | *62.6 | *42.3 | *82.5 | *20.4 | -14.9
Pol. | SCA | *36.7 | *19.5 | *73.2 | *5.5 | -40.3
Pol. | UMT | *54.8 | *55.7 | 85.4 | *24.2 | -11.1
Pol. | 3ST | 55.7 | 81.0 | 85.4 | 35.3 | 0.0
", "html": null }, "TABREF3": { "num": null, "text": "", "type_str": "table", "content": "
: Automatic scores for CP, FLU, ATA and their aggregated score (AGG) of SOTA models and our approach (3ST) across the Civ(ility), For(mality) and Pol(arity) tasks. Cross-task average AGG difference to 3ST under \u2206. Best values per task in bold and models selected for human evaluation underlined. Values statistically significantly different (p < 0.05) from 3ST are marked with *.
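Read literally, the \u2206 column is a model's AGG averaged over the tasks it was evaluated on, minus the same average for 3ST. A worked example under that reading, using the DAR and 3ST values from the table above (DAR AGG: 14.2 on For., 15.8 on Pol.; 3ST AGG: 54.7 and 35.3), which reproduces the reported -30.0:

```latex
% DAR AGG: 14.2 (For.), 15.8 (Pol.); 3ST AGG: 54.7 (For.), 35.3 (Pol.)
\Delta_{\mathrm{DAR}}
  = \frac{14.2 + 15.8}{2} - \frac{54.7 + 35.3}{2}
  = 15.0 - 45.0
  = -30.0
```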
", "html": null }, "TABREF5": { "num": null, "text": "", "type_str": "table", "content": "
: Average human ratings of CP, FLU, ATA and success rate (SR) on the three transfer tasks Civ(ility), For(mality) and Pol(arity). Cross-task average SR difference to 3ST (\u2206). Best values per task in bold. Values statistically significantly different (p < 0.05) from 3ST are marked with *.
", "html": null }, "TABREF6": { "num": null, "text": "What our ignorant PM, Mad McCallum and stupid Liberal politicians going to say? (1) CAE what our pm, trudeau and his liberals are going to do about this?... ... .. .. .. .. .. .. .. .. . 3ST Mad McCallum, what are our politicians going to say?SRC Dear Hipster Jackass-Go to Bend. (2) CAE dear hippie -go to hawaiian to get around........3ST Dear Hipster Jackass-Go to Bend.", "type_str": "table", "content": "
SRC Trump's a liar.
(3) CAE trump's a liar. 3ST Trump's a \u2190\u2212\u2212\u2212\u2212\u2192 good man.
SRC Says the idiot on perpetual welfare. (4) CAE says the author on the daily basis, on the basis of perpetual welfare. 3ST Says the guy on perpetual welfare.
SRC A muslim racist. (5) CAE a muslim \u2190 \u2212\u2212\u2212 \u2192 minority. 3ST Not a democrat.
SRC Quit trying to justify what this jackass did.
(6) CAE quit trying to justify what this jackass did.
", "html": null }, "TABREF7": { "num": null, "text": "", "type_str": "table", "content": "
: 3ST and SOTA model (CAE) predictions on the CivCo test set, with adequate predictions, error in structure, target attribute, \u2190\u2212\u2192 stance reversal, and hallucinations marked.
", "html": null }, "TABREF9": { "num": null, "text": "3ST Ablation. CP, FLU and ATA with SPE, BT, DAE removed. Best values per task in bold.", "type_str": "table", "content": "", "html": null }, "TABREF10": { "num": null, "text": "Inter-rater agreement calculated using Krippendorff-\u03b1 across the different tasks and metrics.", "type_str": "table", "content": "
[Plot: FLU, CP and ATA (y-axes) vs. #BT Steps (k) (x-axis, 0-1050) for the Formality, Civility and Polarity tasks]
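The caption of this entry refers to inter-rater agreement computed with Krippendorff-\u03b1. A minimal sketch of how such a score is typically computed, assuming the third-party krippendorff package and an invented ratings matrix (one row per annotator, one column per rated item; the interval level of measurement is chosen purely for illustration):

```python
# Illustrative only: Krippendorff's alpha over a small, invented rating matrix.
import numpy as np
import krippendorff  # pip install krippendorff

ratings = np.array([
    [3, 4, 2, 5, np.nan],  # annotator 1 (np.nan = missing rating)
    [3, 4, 3, 5, 4],       # annotator 2
    [2, 4, 2, 4, 4],       # annotator 3
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha: {alpha:.3f}")
```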
", "html": null } } } }