{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:29:23.849122Z" }, "title": "Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer", "authors": [ { "first": "Huiyuan", "middle": [], "last": "Lai", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Groningen", "location": { "country": "The Netherlands" } }, "email": "h.lai@rug.nl" }, { "first": "Jiali", "middle": [], "last": "Mao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Groningen", "location": { "country": "The Netherlands" } }, "email": "jiali.mao@rug.nl" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Groningen", "location": { "country": "The Netherlands" } }, "email": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Groningen", "location": { "country": "The Netherlands" } }, "email": "m.nissim@rug.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for evaluation, which is performed using several automatic metrics, lacking the possibility of always resorting to human judgement. We focus on the task of formality transfer, and on the three aspects that are usually evaluated: style strength, content preservation, and fluency. To cast light on how such aspects are assessed by common and new metrics, we run a human-based evaluation and perform a rich correlation analysis. We are then able to offer some recommendations on the use of such metrics in formality transfer, also with an eye to their generalisability (or not) to related tasks. 1", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for evaluation, which is performed using several automatic metrics, lacking the possibility of always resorting to human judgement. We focus on the task of formality transfer, and on the three aspects that are usually evaluated: style strength, content preservation, and fluency. To cast light on how such aspects are assessed by common and new metrics, we run a human-based evaluation and perform a rich correlation analysis. We are then able to offer some recommendations on the use of such metrics in formality transfer, also with an eye to their generalisability (or not) to related tasks. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text style transfer (TST) is the task of automatically changing the style of a given text while preserving its style-independent content, or theme. Quite different tasks, and thus quite different types of transformations, traditionally fall under the TST label. 
For example, given the sentence \"i like this screen, it's just the right size...\", we may produce its negative counterpart \"i hate this screen, it is not the right size\" for the task defined as polarity swap (Shen et al., 2017; Li et al., 2018a) , or turn it into the formal \"I like this screen, it is just the right size.\" for the task called formality transfer (Rao and Tetreault, 2018) .", "cite_spans": [ { "start": 470, "end": 489, "text": "(Shen et al., 2017;", "ref_id": "BIBREF37" }, { "start": 490, "end": 507, "text": "Li et al., 2018a)", "ref_id": "BIBREF18" }, { "start": 625, "end": 650, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the transfer to be considered successful, the output must be written (i) in the appropriate target style; (ii) in a way such that the original content, or theme, is preserved; and (iii) in proper language, hence fluent and grammatical (relative to the desired style). These aspects to be evaluated are usually defined as (i) style strength, (ii) content preservation, and (iii) fluency, and automatic 1 Our analysis code, literature list for Figure 1 , and all data are available at https://github.com/laihuiyuan/ eval-formality-transfer.", "cite_spans": [ { "start": 405, "end": 406, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 446, "end": 454, "text": "Figure 1", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Automatic evaluation metrics in 45 ACL Anthology papers focusing on style transfer and its evaluation in terms of (i) style strength: regressor and classifier; (ii) content preservation: COMET, BLEURT, BERTScore, METEOR, WMD, ROUGE, chrF, Self-BLEU (source-based BLEU) and Ref-BLEU (reference-based BLEU); (iii) fluency: PPL (perplexity); and (iv) overall score: HM (harmonic mean) and GM (geometric mean).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "evaluation metrics are used accordingly, lacking the possibility of using human judgement for any given experiment. Figure 1 shows a survey of such metrics (organised by aspect) as used in 45 papers published over the last three years in the ACL Anthology, which focus on TST in general. A classifier or a regressor is used to assess style strength, a variety of content-based metrics target content preservation, perplexity is used to measure fluency, and some overall metrics combining content and style are often reported.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 124, "text": "Figure 1", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "In spite of the attempts to perform careful automatic evaluation, and of some works studying specific aspects of it, such as traditional metrics for polarity swap (Tikhonov et al., 2019; Mir et al., 2019) , content preservation for formality transfer (Yamshchikov et al., 2021) , and a recent attempt at correlating automatic metrics and human judgment for some aspects of multilingual formality transfer (Briakou et al., 2021a) , the community has not yet reached fully shared standards in evaluation practices. 
We believe this is due to a concurrence of factors.", "cite_spans": [ { "start": 163, "end": 186, "text": "(Tikhonov et al., 2019;", "ref_id": "BIBREF39" }, { "start": 187, "end": 204, "text": "Mir et al., 2019)", "ref_id": "BIBREF25" }, { "start": 251, "end": 277, "text": "(Yamshchikov et al., 2021)", "ref_id": "BIBREF44" }, { "start": 405, "end": 428, "text": "(Briakou et al., 2021a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "First, tasks that are not exactly the same are conflated under the TST label, which makes evaluation a serious issue. Lai et al. (2021a) have shown that polarity swap and formality transfer cannot be considered alike, especially in terms of content preservation, as in the former the meaning of the output is expected to be the opposite of the input rather than approximately the same. Hence, it is difficult to imagine that the same metric would capture the content aspect well in both tasks.", "cite_spans": [ { "start": 129, "end": 147, "text": "Lai et al. (2021a)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "Second, the evaluation setting is not necessarily straightforward: if the content of the input has to be preserved in the output, the quality of the generated text can be assessed either against the input itself or against a human-produced reference, specifically crafted for evaluation. However, not all metrics are equally suitable for both assessments. For instance, BLEU (Papineni et al., 2002) is the metric most commonly used for evaluating content preservation (Fig. 1). Intuitively, this n-gram based metric should be appropriate for comparing the output and the human reference, but it is much less suitable for comparing the model output and the source sentence, since the whole task is concerned precisely with changing the surface realisation towards a more appropriate target style. On the contrary, neural network-based metrics should also work between the model output and the source sentence. This raises the question of how these metrics are best used, and possibly combined, and under which settings. Closely related to this point, it is not fully clear what the used metrics actually measure and what desirable scores are. For example, comparing source and reference with metrics that measure content similarity should yield high scores, but we will see in our experiments that this is not the case. Recent research has only compared using the reference and the source sentence for one metric, BLEU (Briakou et al., 2021a), and has introduced some embedding-based metrics only to compare the output to the source. A comprehensive picture of a large set of metrics in the two different evaluation conditions (output to source and output to reference) is still missing, and is provided in this contribution.", "cite_spans": [ { "start": 375, "end": 398, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF27" }, { "start": 1408, "end": 1431, "text": "(Briakou et al., 2021a)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 468, "end": 476, "text": "(Fig. 1)", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "Lastly, and related to the previous point, it is yet unclear whether and how the used metrics correlate with human judgements under different conditions (e.g. 
not only the given source/reference used for evaluation but also different transfer directions, as previous work has assessed human judgement over the informal-to-formal direction (Briakou et al., 2021a) only), and how they differ from one another. This affects not only content preservation, as discussed above, but also style strength and fluency.", "cite_spans": [ { "start": 337, "end": 360, "text": "(Briakou et al., 2021a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "Focusing on formality transfer, where the aspect of content preservation is clear, we specifically pose the following research questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "\u2022 RQ1 What is the difference in using a classifier or a regressor to assess style strength, and how do they correlate with human judgement?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "\u2022 RQ2 How do different content preservation metrics fare in comparison to human judgement, and how do they behave when used to compare TST outputs to source or reference sentences?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "\u2022 RQ3 Is fluency well captured by perplexity, and what if the target style is informal?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "To address these questions we conduct a human evaluation for a set of system outputs, collecting judgments over the three evaluation aspects, and unpack each of them by means of a thorough correlation analysis with automatic metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "Contributions Focusing on formality transfer, we offer a comprehensive analysis of this task and the nature of each aspect of its evaluation. Thanks to the analysis of correlations with human judgements, we uncover which automatic metrics are more reliable for evaluating TST systems and which metrics might not be suitable for this task under specific conditions. Since it is not feasible to always have access to human evaluation, having a clearer picture of which metrics better correlate with human evaluation is an important step towards a better systematisation of the task's evaluation. Polarity swap (Shen et al., 2017; Li et al., 2018b) is the task of transforming sentences, swapping their polarity while preserving their theme. Political slant transfer preserves the intent of the commenter but modifies their observable political affiliation (Prabhumoye et al., 2018). Formality transfer is the task of reformulating an informal sentence as a formal one (or vice versa) (Rao and Tetreault, 2018; Briakou et al., 2021b). Cao et al. (2020) propose expertise style transfer, which aims to simplify the professional language of medicine to the level of laypeople's descriptions using simple words. Jin et al. 
(2021) provide an overview of different TST tasks.", "cite_spans": [ { "start": 594, "end": 613, "text": "(Shen et al., 2017;", "ref_id": "BIBREF37" }, { "start": 614, "end": 630, "text": "Li et al., 2018b", "ref_id": "BIBREF19" }, { "start": 847, "end": 872, "text": "(Prabhumoye et al., 2018)", "ref_id": "BIBREF30" }, { "start": 971, "end": 996, "text": "(Rao and Tetreault, 2018;", "ref_id": "BIBREF32" }, { "start": 997, "end": 1019, "text": "Briakou et al., 2021b)", "ref_id": "BIBREF3" }, { "start": 1022, "end": 1039, "text": "Cao et al. (2020)", "ref_id": "BIBREF4" }, { "start": 1195, "end": 1212, "text": "Jin et al. (2021)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "Automatic Evaluation In Figure 1 we see that more than 80% of papers employ a style classifier to assess the style strength of the transferred text. For content preservation, BLEU is by far the most popular automatic metric, but recent work has also employed other metrics, including string-based ones (e.g. METEOR (Mir et al., 2019; Lyu et al., 2021; Briakou et al., 2021a)) and neural-based ones (e.g. BERTScore (Reid and Zhong, 2021; Lee et al., 2021; Briakou et al., 2021a)). To better capture semantic information beyond the lexical level, Lai et al. (2021b,a) recently also employed BLEURT (Sellam et al., 2020) and COMET (Rei et al., 2020) to evaluate their systems. These learnable metrics attempt to directly optimize the correlation with human judgments, and have shown promising results in machine translation evaluation. For fluency, a language model (LM) trained on the training data is used to calculate the perplexity of the transferred text (John et al., 2019; Sudhakar et al., 2019; Huang et al., 2020). The geometric mean and harmonic mean of style accuracy and BLEU are often used to report overall performance (Xu et al., 2018; Luo et al., 2019; Krishna et al., 2020; Lai et al., 2021a,b).", "cite_spans": [ { "start": 335, "end": 353, "text": "(Mir et al., 2019;", "ref_id": "BIBREF25" }, { "start": 354, "end": 371, "text": "Lyu et al., 2021;", "ref_id": "BIBREF22" }, { "start": 372, "end": 394, "text": "Briakou et al., 2021a)", "ref_id": "BIBREF2" }, { "start": 430, "end": 452, "text": "(Reid and Zhong, 2021;", "ref_id": "BIBREF34" }, { "start": 453, "end": 470, "text": "Lee et al., 2021;", "ref_id": "BIBREF17" }, { "start": 471, "end": 493, "text": "Briakou et al., 2021a)", "ref_id": "BIBREF2" }, { "start": 590, "end": 610, "text": "Lai et al. 
(2021b,a)", "ref_id": null }, { "start": 673, "end": 691, "text": "(Rei et al., 2020)", "ref_id": "BIBREF33" }, { "start": 1002, "end": 1021, "text": "(John et al., 2019;", "ref_id": "BIBREF11" }, { "start": 1022, "end": 1044, "text": "Sudhakar et al., 2019;", "ref_id": "BIBREF38" }, { "start": 1045, "end": 1064, "text": "Huang et al., 2020)", "ref_id": "BIBREF9" }, { "start": 1166, "end": 1183, "text": "(Xu et al., 2018;", "ref_id": "BIBREF42" }, { "start": 1184, "end": 1201, "text": "Luo et al., 2019;", "ref_id": "BIBREF21" }, { "start": 1202, "end": 1223, "text": "Krishna et al., 2020;", "ref_id": "BIBREF13" }, { "start": 1224, "end": 1244, "text": "Lai et al., 2021a,b)", "ref_id": null } ], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "Evaluation Practices Although some previous work has run correlations of human judgements and automatic metrics (Rao and Tetreault, 2018; Luo et al., 2019) , this was not the focus of the contribution and no deeper analysis or comparison was run. On the other hand, Yamshchikov et al. (2021) examined 13 content-related metrics in the context of formality transfer and paraphrasing, and show that none of the metrics is close enough to the human judgment. Briakou et al. (2021a) have recently evaluated automatic metrics on the task of multilingual formality transfer against human judgement. We also examine automatic metrics in terms of correlation with human judgement, but there are some core differences between our contribution and their work. First, for style strength, they focus on comparing two different architectures in a cross-lingual setting using the correlation on human judgement for regression, and they do not provide this analysis for style classification, rather an evaluation against the gold label. In contrast, we adopt an architecture that provides regression and classification comparisons in fitting human judgments. Second, regarding content, Briakou et al. (2021a) focus on similarity (and therefore metrics) to the source sentence, while we stress the importance of triangulation also with the reference 2 . Also, we introduce two learnable metrics in the evaluation setup, which correlation with human judgement shows to be the most informative. Third, they compare perplexity, likelihood, and pseudolikelihood scores for fluency evaluation, while we provide a deeper evaluation of just perplexity considering though the two directions (Briakou et al. (2021a) evaluate only the informal-to-formal direction) and highlight differences that point to a potential benefit in using different approaches or evaluation strategies for the two directions. In addition, we (i) use a continuous scale setting for human judgement which, unlike a discrete Likert scale, allows to normalize judgments (Graham et al., 2013) , hence increasing homogeneity of the assessments; (ii) evaluate eight existing, published systems of different sorts (including state-of-theart models) for both transfer directions, thereby potentially enabling a reconsideration of results as reported in previous work; (iii) study the nature of each evaluation aspect and the corresponding automatic metrics, analyzing the differences in the correlation between metric and human judgements that might arise under different conditions (e.g. 
looking at high-quality systems).", "cite_spans": [ { "start": 112, "end": 137, "text": "(Rao and Tetreault, 2018;", "ref_id": "BIBREF32" }, { "start": 138, "end": 155, "text": "Luo et al., 2019)", "ref_id": "BIBREF21" }, { "start": 456, "end": 478, "text": "Briakou et al. (2021a)", "ref_id": "BIBREF2" }, { "start": 1667, "end": 1690, "text": "(Briakou et al. (2021a)", "ref_id": "BIBREF2" }, { "start": 2018, "end": 2039, "text": "(Graham et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Ratio of Papers", "sec_num": null }, { "text": "We use GYAFC (Rao and Tetreault, 2018), a formality transfer dataset for English that contains aligned formal and informal sentences from two domains: Entertainment & Music and Family & Relationships. Figure 2 shows an example of the alignment, transformation, and evaluation relations between input, output, and reference.", "cite_spans": [ { "start": 13, "end": 38, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 202, "end": 210, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Figure 2 (sentence-pair example). Source: but now we do all these things together and i dont know what to do.. | Output: Now we do all these things together and I do not know what to do. | Reference: Now we do so many things together and I do not know what to do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We run a human evaluation and a battery of automatic metrics on a selection of human- and machine-produced texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Source and Reference Texts The source and reference texts we use are from the Family & Relationships domain. The test set contains 1,332 and 1,019 sentences in the \"informal to formal\" and \"formal to informal\" directions, respectively. There are four human references for each test sentence. We randomly select 80 source sentences (40 for each transfer direction) from the test set, as well as their corresponding human references. For each source sentence, we obtain the corresponding transformations as produced by eight different systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-pair", "sec_num": null }, { "text": "System Outputs Evaluation results are often affected by the systems' outputs: systems of different types may exhibit different error patterns, and automatic evaluation metrics can be differently sensitive to these patterns (Ma et al., 2019; Mathur et al., 2020). To fully examine the evaluation methods, the systems we use are all from previous work, and include both supervised and unsupervised approaches. 
3 Overall, the eight systems yield a total of 640 output sentences (80 per system, 40 in each direction).", "cite_spans": [ { "start": 271, "end": 288, "text": "(Ma et al., 2019;", "ref_id": "BIBREF23" }, { "start": 289, "end": 309, "text": "Mathur et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence-pair", "sec_num": null }, { "text": "To facilitate the annotation and obtain a manageable size for each annotator, we split the 80 source sentences (Section 3) into four different surveys with 20 sentences each (10 for each transfer direction), together with their corresponding system outputs plus one reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.1" }, { "text": "We recruited eight highly proficient English speakers for this task, i.e. two per survey, so that two annotations for each target sentence can be collected; from these we can use the average score assigned, and also calculate inter-annotator agreement. The task is to rate the transferred sentence on a continuous scale (0-100), inspired by Direct Assessment (Graham et al., 2013, 2015), in terms of three evaluation aspects: (i) style strength (does the transformed sentence fit the target style?); (ii) content preservation (is the content of the transformed sentence the same as the original sentence?), and (iii) fluency (considering the target style, could the transformed sentence have been written by a native speaker?).", "cite_spans": [ { "start": 359, "end": 379, "text": "(Graham et al., 2013", "ref_id": "BIBREF7" }, { "start": 380, "end": 402, "text": ", 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.1" }, { "text": "Before starting the rating task, we provided annotators with detailed guidelines and examples of transformed sentences along with plausible assessments for each aspect. 4 We also reminded the annotators that such examples are only indicative of what we believe to be plausible judgements, but that there are of course many possible correct answers.", "cite_spans": [ { "start": 169, "end": 170, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.1" }, { "text": "We test a wide range of commonly used as well as new automatic metrics on the three aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "4.2" }, { "text": "The most commonly used method for assessing style strength is a style classifier, with the problem cast as a binary classification task (formal vs informal in formality transfer). Briakou et al. (2021a) have recently shown that a style regressor fine-tuned with English rating data correlates better with human judgments in other languages (Italian, French, and Portuguese). To run a proper comparison, we use BERT (Devlin et al., 2019) as our base model, and fine-tune it with style-labelled data (GYAFC) and the rating data of PT16 (Pavlick and Tetreault, 2016) to obtain a style classifier (C-GYAFC) and a regressor (R-PT16), respectively. Following Rao and Tetreault (2018), we collect sentences from PT16 with human ratings from -3 to +1 as informal and the rest as formal, and train a style classifier on them (C-PT16). C-GYAFC and C-PT16 achieve an accuracy of 94.4% and 58.6% on the test sets, respectively.", "cite_spans": [ { "start": 180, "end": 202, "text": "Briakou et al. 
(2021a)", "ref_id": "BIBREF2" }, { "start": 415, "end": 435, "text": "(Devlin et al., 2019", "ref_id": "BIBREF5" }, { "start": 535, "end": 564, "text": "(Pavlick and Tetreault, 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Style Strength", "sec_num": null }, { "text": "We consider the following metrics, including both surface-based and embedding-based approaches: 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 BLEU (Papineni et al., 2002) It compares a given text to others (reference) by using a precisionoriented approach based on n-gram overlap;", "cite_spans": [ { "start": 7, "end": 30, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 chrF (Popovi\u0107, 2015) It measures the similarity of sentences using the character n-gram F-score;", "cite_spans": [ { "start": 7, "end": 22, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 ROUGE (Lin, 2004) It compares a given text to others (human reference) by using n-gram/the longest co-occurring in sequence overlap and a recall-oriented approach;", "cite_spans": [ { "start": 8, "end": 19, "text": "(Lin, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 WMD (Kusner et al., 2015) It measures the dissimilarity between two texts as an optimal transport problem which is based on word embedding.", "cite_spans": [ { "start": 6, "end": 27, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 METEOR (Banerjee and Lavie, 2005) It computes the similarity score of two texts by using a combination of unigram-precision, unigramrecall, and some additional measures like stemming and synonymy matching.", "cite_spans": [ { "start": 9, "end": 35, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 BERTScore It computes a similarity score for each token in the candidate sentence with each token in the reference sentence. Instead of exact matches, it computes token similarity using contextual embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 BLEURT (Sellam et al., 2020) It is a learned evaluation metric based on BERT (Devlin et al., 2019) , trained on human judgements. 
It is trained with a pre-training scheme that uses millions of synthetic examples to help the model generalize.", "cite_spans": [ { "start": 9, "end": 30, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF35" }, { "start": 79, "end": 100, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "\u2022 COMET (Rei et al., 2020) It is a learnable metric which leverages cross-lingual pretrained language modelling, resulting in multilingual machine translation evaluation models that exploit both source and reference sentences.", "cite_spans": [ { "start": 8, "end": 26, "text": "(Rei et al., 2020)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "For assessing content preservation in the output, we can exploit both the source and the reference (see Fig. 2). When comparing our output to the source, we want to answer the following question: (a) how close in content is the generated text to the original text? This naturally addresses the content preservation aspect of the task. When comparing our output to the human-produced reference, we want to answer a different question: (b) how similar is the automatically generated text to the human-written one? Both are valid strategies, but by answering different questions they are likely to react differently to, and require, different metrics. The advantages of the (a) approach are that evaluation is possible even without a human reference, it is the most natural way of assessing the task, and it does not incur reference bias (Fomicheva and Specia, 2016). The core problem lies in the use and interpretation of metrics: surface-based metrics (like BLEU) would score highest if nothing has changed from input to output (if the model does not perform the task, basically), so aiming for a high score is pointless. A very low score is undesirable, too, however. For more sophisticated metrics, the problem is similar in the sense that the highest score would be achieved if the two texts are identical, but since it is not fully clear what exactly they measure in terms of similarity, what to aim for is not straightforward (an indication is provided by using metrics to compare source and reference).", "cite_spans": [ { "start": 838, "end": 866, "text": "(Fomicheva and Specia, 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 104, "end": 110, "text": "Fig. 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "The main advantage of the (b) approach is that metrics can be used in a more standard way: aiming for the highest possible score is good for any of them, since getting close to the human solution is desirable. 
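To make the two strategies concrete, here is a minimal sketch that scores the Figure 2 output against both the source and a reference with one surface-based and one embedding-based metric, using the sacrebleu and bert-score packages listed in Appendix A.2 (an illustration under those assumptions, not our exact evaluation script):

```python
import sacrebleu
from bert_score import score as bert_score

source = 'but now we do all these things together and i dont know what to do..'
output = 'Now we do all these things together and I do not know what to do.'
reference = 'Now we do so many things together and I do not know what to do.'

# Strategy (a) compares the output to the source; strategy (b) to the reference.
# A very high n-gram overlap with the source may simply mean the input was copied.
for name, target in [('source', source), ('reference', reference)]:
    bleu = sacrebleu.sentence_bleu(output, [target]).score
    P, R, F1 = bert_score([output], [target], lang='en', rescale_with_baseline=True)
    print(f'vs {name}: BLEU={bleu:.1f} BERTScore-F1={F1.item():.3f}')
```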
However, the gold reference is only one of many possible realisations, and while high scores are good, low scores can be somewhat meaningless, as proper meaning-preserving outputs may be very different from the human-produced ones, especially at surface level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "While we have as yet no specific solution to this, this study contributes substantially to a better understanding of automatic metrics, especially for content preservation, possibly leading to a combined metric which considers mainly the source, and possibly the reference(s), in a learning phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "Fluency In formality transfer, both informal and formal outputs must be evaluated. Intuitively, the latter should be more fluent and grammatical than the former, so that evaluating the fluency of informal sentences might be more challenging, both for humans and for automatic metrics. We use the perplexity of the language model GPT-2 (Radford et al., 2019) fine-tuned with style-labelled texts. Specifically, we fine-tune two GPT-2 models, on informal sentences and formal sentences respectively, and then use the target-style model to calculate the perplexity of the generated sentence. Finally, we provide a separate correlation analysis between automatic metrics and human judgements for the two transfer directions.", "cite_spans": [ { "start": 330, "end": 352, "text": "(Radford et al., 2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Content Preservation", "sec_num": null }, { "text": "We employ Pearson correlation (r) as our main evaluation measure for system-/segment-level metrics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r = \\frac{\\sum_{i=1}^{n}(H_i - \\bar{H})(M_i - \\bar{M})}{\\sqrt{\\sum_{i=1}^{n}(H_i - \\bar{H})^2} \\sqrt{\\sum_{i=1}^{n}(M_i - \\bar{M})^2}}", "eq_num": "(1)" } ], "section": "Pearson Correlation", "sec_num": null }, { "text": "where H i is the human assessment score, M i is the corresponding score as predicted by a given metric, and H\u0304 and M\u0304 are their means, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "Kendall's Tau-like formulation We follow the WMT17 Metrics Shared Task (Bojar et al., 2017) and take the official Kendall's Tau-like formulation, \u03c4 , as our main evaluation measure for segment-level metrics. A true pairwise comparison is likely to lead to more stable results for segment-level evaluation (Vazquez-Alvarez and Huckvale, 2002). 
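Both measures are straightforward to compute. The sketch below implements the Pearson correlation of Eq. 1 (via scipy) and the pairwise Tau-like formulation of Eq. 2 given next; it is a minimal illustration assuming human and metric scores are stored in parallel lists, with tied pairs simply skipped:

```python
from itertools import combinations
from scipy.stats import pearsonr

def kendall_tau_like(human, metric):
    # Eq. 2: over all hypothesis pairs, count when the metric agrees
    # (concordant) or disagrees (discordant) with the human ranking.
    concordant = discordant = 0
    for i, j in combinations(range(len(human)), 2):
        if human[i] == human[j] or metric[i] == metric[j]:
            continue  # ties contribute to neither count in this sketch
        if (human[i] - human[j]) * (metric[i] - metric[j]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

human = [72.5, 58.0, 90.0, 41.0]   # averaged human judgements (0-100)
metric = [0.61, 0.54, 0.83, 0.42]  # scores from an automatic metric
print(pearsonr(human, metric)[0])       # Eq. 1
print(kendall_tau_like(human, metric))  # Eq. 2
```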
The Kendall's Tau-like formulation \u03c4 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tau = \\frac{\\mathit{Concordant} - \\mathit{Discordant}}{\\mathit{Concordant} + \\mathit{Discordant}}", "eq_num": "(2)" } ], "section": "Pearson Correlation", "sec_num": null }, { "text": "where Concordant is the number of times a given metric assigns a higher score to the hypothesis judged \"better\" by humans, and Discordant is the number of times it assigns a higher score to the hypothesis judged \"worse\". Most automatic metrics, like BLEU, aim to achieve a strong positive correlation with human assessment, with the exception of WMD and perplexity, where smaller is better. We therefore use the absolute correlation values for WMD and perplexity in the following analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "In this section, we first measure the inter-annotator agreement of the human evaluation, then discuss both system-level and sentence-level evaluation results on the three aforementioned evaluation aspects, so as to provide a different perspective on the correlation between automatic metrics and human judgements under different conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "There are two human judgements for each sentence and we measure their inter-annotator agreement (IAA) by computing the Pearson Correlation coefficient, instead of the commonly used Cohen's kappa, since judgements are given on a continuous scale. Agreement is highest on the content aspect, followed by fluency, with style yielding the lowest scores, suggesting that annotators have more varied perceptions of sentence style than content. Overall, we achieve reasonable agreement for all surveys and evaluation aspects. Table 2 shows the correlation of automatic metrics for style strength with human judgements. We see that C-GYAFC achieves the highest correlation at both system- and segment-level; R-PT16 and C-PT16 have the same system-level correlation score, while the former has a slightly lower score at segment-level. Given that C-PT16 and C-GYAFC have close correlation scores while their performances on the test set are quite different, we also employ Pearson correlation to compute the segment-level result, and see rather different correlation scores (C-PT16 with 0.33 and C-GYAFC with 0.67). We think that evaluating the system outputs for a given source using C-PT16 and C-GYAFC results in similar score rankings, so their Kendall's Tau-like correlations are very close. In general, it is easier to evaluate systems which have large differences in quality, while it is more difficult when systems have similar quality. To assess the reliability of automatic metrics for close-quality systems, we first sort the systems based on human judgements, and plot the correlation of the top-/last-N systems, with N ranging from all systems to the best/worst three systems (Fig. 3). 
We see that the correlation between automatic metrics and human judgements decreases as we decrease N for both the top-N and last-N systems, especially for R-PT16 in the top-N systems. Again we observe that C-GYAFC and C-PT16 have similar scores over the top-/last-N systems. Overall, C-GYAFC appears to be the most stable model.", "cite_spans": [], "ref_spans": [ { "start": 494, "end": 501, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1649, "end": 1657, "text": "(Fig. 3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "5.1" }, { "text": "As mentioned in the Introduction, since a style-transformed output should not alter the meaning of the input, content preservation can be measured against the input itself, or against a human reference in the expected target style. However, metrics cannot be used interchangeably (Section 4.2), as, for instance, the output is expected to have a higher n-gram overlap with the reference, while this is not desirable with respect to the input. Table 3 presents the results of human and automatic evaluation: all systems have a higher n-gram overlap (BLEU, chrF) with the source sentence than with the human reference, indicating that existing models tend to copy from the input and lack diverse rewriting abilities. We also report the results for the reference against the source. Bearing in mind that the reference can be conceived as an optimal output, it is interesting to see that it does not score high on any metric, not even the learnable ones. This leaves some crucial open questions: how can these metrics best be used to assess content preservation in generated outputs? What are desirable scores? We also observe that RAO's system has the highest surface-based metric scores (e.g. BLEU) against the source sentence, while its scores with learnable metrics (e.g. BLEURT) are lower than those of some other systems (e.g. HIGH). In the evaluation against the human reference, the systems BART and NIU achieve better results on most metrics. Figure 4 shows the correlations of content preservation metrics with human judgments. For the system-level results, there is a big gap in correlation between source sentence and human reference for surface-based metrics (e.g. BLEU), but not for neural network-based ones (e.g. COMET). Using the latter therefore seems to open up the possibility of automatically evaluating content without a human reference. It is interesting to see that the segment-level correlations when using source sentences are all higher than when using the human reference, and that in the latter setting surface-based metrics correlate particularly poorly with human scores. We suggest two main reasons: (i) existing systems lack diverse rewriting ability given the source sentences, and the annotators rate the generated sentences comparing them to the source sentence, not to a reference; (ii) human references are linguistically more diverse (e.g. in word choice and order). The first one is not within the scope of this work. For the second aspect, we exploit the fact that we have multiple references available, and run the evaluation in a multi-reference setting; we observe that correlations for surface-based metrics improve as more variety is included, but not for neural ones. In Table 4, we see that learnable metrics using the first reference have higher correlation with the other references than surface-based metrics. 
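Multi-reference scoring is directly supported by surface-based tooling; for instance, with sacrebleu (Appendix A.2) one reference stream per available reference can be passed (a sketch; the second reference below is a hypothetical placeholder standing in for the remaining GYAFC references):

```python
import sacrebleu

outputs = ['Now we do all these things together and I do not know what to do.']
# One stream per reference, each aligned with the list of outputs.
ref_streams = [
    ['Now we do so many things together and I do not know what to do.'],
    ['Now we do many things together, and I do not know what to do.'],  # placeholder
]
print(sacrebleu.corpus_bleu(outputs, ref_streams).score)
```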
Overall, learnable metrics always have the highest correlation scores in evaluating content preservation, whether using source sentences or human references, while surface-based metrics generally require a multi-reference setting.", "cite_spans": [], "ref_spans": [ { "start": 442, "end": 449, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1432, "end": 1440, "text": "Figure 4", "ref_id": null }, { "start": 2676, "end": 2683, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Content Preservation", "sec_num": "5.3" }, { "text": "Similar to style strength, we plot the correlation of the top-/last-N systems sorted by human judgements for the content aspect (Fig. 5). The correlation between automatic metrics and human scores decreases as we decrease N for the top-N systems, while it remains stable for the last-N systems. This suggests that evaluating high-quality TST systems is more challenging than evaluating low-quality systems. Again, we see that the correlation when using the source sentence is more stable than when using human references. Although BLEU and chrF show stable performance, their correlations are lower than those of other metrics in most cases. Regardless of whether we use human references or source sentences, COMET(w) generally has the highest correlation scores with human judgements under different conditions.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 136, "text": "(Fig. 5)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Content Preservation", "sec_num": "5.3" }, { "text": "Table 6: Results of GPT-2 based perplexity scores and their absolute Pearson correlation with human judgements at segment-level. Notes: (i) GPT2-Inf and GPT2-For are fine-tuned with informal sentences and formal sentences, respectively; (ii) the correlation is calculated between the perplexity of the target-style GPT-2 and the human judgement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency", "sec_num": null }, { "text": "We see that GPT-2 based perplexity correlates better with human scores in the informal-to-formal direction than in the opposite one, at both system- and segment-level. In general, a \"good\" formal sentence should be fluent, while an informal sentence might well not be, and perceptions can vary across people. Indeed, we see higher IAA scores in the informal-to-formal direction (informal-to-formal: 0.70 vs formal-to-informal: 0.63). Table 6 presents the correlations and perplexity scores of GPT-2 in the two transfer directions for each system. The perplexity scores for most sentences pattern as expected, i.e. informal sentences obtain higher perplexity from GPT2-For than from GPT2-Inf, and vice versa. However, we also observe that the informal-to-formal correlations for each system (except ZHOU) are higher than those for the formal-to-informal direction. This confirms our hypothesis that assessing the fluency of informal sentences is not obvious even for humans.", "cite_spans": [], "ref_spans": [ { "start": 440, "end": 447, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Fluency", "sec_num": "5.4" }, { "text": "Figure 6: The distance between the source and target sentences as measured by content-related metrics.", "cite_spans": [], "ref_spans": [ { "start": 749, "end": 757, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Fluency", "sec_num": null }, { "text": "We have focused here on formality transfer, but polarity swap is also commonly defined as a style transfer task. 
In previous work, we have suggested that these tasks are intrinsically different, especially in terms of content preservation: while formality transfer is somewhat akin to paraphrasing, in polarity swap the meaning is substantially altered (Lai et al., 2021a). This would imply that content-measuring metrics cannot be used in the same way in the two tasks.", "cite_spans": [ { "start": 359, "end": 378, "text": "(Lai et al., 2021a)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Broader Implications for Style Transfer", "sec_num": "5.5" }, { "text": "We look further into this issue here, in view of future work that should evaluate metrics for the assessment of polarity swap, too, and show in Figure 6 the use of different metrics to measure the distance between the source and target sentences for paraphrasing, formality transfer, and polarity swap. Using n-gram based metrics, we see that the distance between source and target sentences in polarity swap is smaller than in the other two tasks. With learnable metrics, instead, we see that source and target sentences for polarity swap are quite distant. Formality transfer shows overall the same trend as paraphrasing on all metrics, suggesting that it is much more of a content-preserving, paraphrase-like task than polarity swap, and that metrics should be selected accordingly. Future work will explore how to best use them in polarity swap under different settings (using source vs reference, for example).", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 150, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Broader Implications for Style Transfer", "sec_num": "5.5" }, { "text": "We have considered a wide range of automatic metrics on the three evaluation aspects of formality transfer, and assessed them against human judgements that we have elicited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For style strength, we have compared style classifiers and a regressor trained on the same raw data (with a binary label for classification and continuous scores for regression), as well as classifiers with different performances. We have observed that there is little difference among them when evaluating multiple TST systems. However, the style regressor performs worse when evaluating high-quality TST systems. Among classifiers with different performances, we recommend the one with the highest performance, since it results in the highest overall Pearson correlation with human judgements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To assess content preservation, we have explored different kinds of automatic metrics using the source or reference(s), and have observed the following: (i) if using the source sentence, we strongly recommend employing learnable metrics, since their correlation in that condition is much higher than that of traditional surface-based metrics (which are not indicative, since high scores correspond to not changing the input, hence not performing the task); still, the question of how scores should be interpreted and what score ranges are desirable remains open; (ii) most metrics can reliably be used to measure and compare performance at system-level when a human reference is available; (iii) however, we do not recommend using surface-based metrics for sentence-level comparisons, especially with only one reference. 
Overall, learnable metrics seem to provide a more reliable measurement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For fluency, perplexity can be used for evaluating the informal-to-formal direction, either at system- or segment-level, while it is clearly less reliable for the opposite direction; it remains to be investigated how to best perform evaluation in this transfer direction, considering the wide variability of acceptable outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This study focuses on formality transfer, and offers a better understanding of automatic evaluation thanks to the comprehensive correlations with human judgments conducted herein. However, the findings may not generalise to other tasks usually considered similar, such as polarity swap. To this end, future dedicated work will be required. Table A.2: Automatic evaluation results in content preservation. Notes: (i) the Reference row reports the distance between source and reference sentences as measured by the metrics; (ii) \u2193 indicates that a lower score is better. ", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 347, "text": "Table A", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Although the reference is not always available, studying how evaluation metrics behave when the reference is used, in comparison with when the source is used, provides insights into the overall behaviour of such metrics and how they can best be employed even in the absence of a reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Details of the systems are in Appendix A.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Screenshots of our annotation guidelines and interface are in Appendix A.3. 5 The implementation details for automatic metrics are in Appendix A.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partly funded by the China Scholarship Council (CSC). We are very grateful to the anonymous reviewers for their useful comments, especially in connection to closely related work, which contributed to strengthening this paper. We would also like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. We thank the annotators as well as Ana Guerberof Arenas and Amy Isard for testing, and helping us to improve, a preliminary version of the survey.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "A Appendices: These Appendices include: (i) evaluated systems (A.1); (ii) implementation details for automatic metrics (A.2); and (iii) annotation guidelines and interface (A.3). Table A.1 presents the systems' ranking based on the human judgements. We use eight published systems of different sorts (including state-of-the-art models). 
For supervised approaches, we include the following systems:", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Table A", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "\u2022 RAO (Rao and Tetreault, 2018) : A copy-enriched NMT model trained on the rule-processed data and the additional forward and backward translations produced by the PBMT model; \u2022 NIU (Niu et al., 2018) : A bi-directional model trained on formality-tagged bilingual data using multi-task learning; \u2022 BART (Lai et al., 2021b) : Fine-tuning the pretrained model BART with gold parallel data and reward strategies; \u2022 HIGH (Lai et al., 2021a) : Fine-tuning BART with high-quality synthetic parallel data and reward strategies. For unsupervised approaches, we include the following systems: \u2022 LUO (Luo et al., 2019) : A dual reinforcement learning framework that directly transforms the style of the text via a one-step mapping model without parallel data; \u2022 YI (Yi et al., 2020) : A style instance supported method that learns a more discriminative and expressive latent space to enhance style signals and strike a better balance between style and content; \u2022 ZHOU (Zhou et al., 2020) : An attentional seq2seq model that is pre-trained to reconstruct the source sentence and re-predict its word-level style relevance; \u2022 IBT (Lai et al., 2021a) : An iterative back-translation framework based on the pre-trained seq2seq model BART. \u2022 ROUGE: We use the open-source Rouge implementation. 8 \u2022 WMD: We employ the gensim library and the googlenews-vectors-negative300.bin word embeddings. 9 \u2022 METEOR: We adopt the NLTK library. \u2022 BERTScore: We use the official implementation with a rescaling function. 10 \u2022 BLEURT: We use the official checkpoint of bleurt-large-512. 11 \u2022 COMET: We adopt the official checkpoint of wmt-large-da-estimator-1719. 12 COMET-QE is a referenceless metric that uses source and output only. But we found that it yielded lower correlations with human judgements than COMET in our evaluations. This may be because input and output are in different languages in COMET-QE training. \u2022 Style and Fluency: All experiments are implemented atop Transformers (Wolf et al., 2020) using the BERT base model (cased) for style and the GPT-2 base model for fluency. We fine-tune models using the Adam optimiser (Kingma and Ba, 2015) with a learning rate of 1e-5 for BERT and 3e-5 for GPT-2, with a batch size of 32 for all experiments. 
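For illustration, a minimal sketch of the perplexity computation with one of the fine-tuned GPT-2 models (the checkpoint path is a placeholder, not a released artefact):

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Placeholder path: GPT-2 fine-tuned on sentences of the target style (e.g. formal).
model = GPT2LMHeadModel.from_pretrained('path/to/gpt2-formal')
tokenizer = GPT2TokenizerFast.from_pretrained('path/to/gpt2-formal')
model.eval()

def perplexity(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors='pt')
    with torch.no_grad():
        # With labels supplied, the model returns the mean token-level cross-entropy.
        loss = model(**enc, labels=enc['input_ids']).loss
    return math.exp(loss.item())

print(perplexity('Now we do all these things together and I do not know what to do.'))
```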
Figure A.1 shows the screenshots of the task guidelines and annotation interface.", "cite_spans": [ { "start": 6, "end": 31, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF32" }, { "start": 181, "end": 199, "text": "(Niu et al., 2018)", "ref_id": "BIBREF26" }, { "start": 301, "end": 320, "text": "(Lai et al., 2021b)", "ref_id": "BIBREF16" }, { "start": 412, "end": 431, "text": "(Lai et al., 2021a)", "ref_id": "BIBREF15" }, { "start": 583, "end": 601, "text": "(Luo et al., 2019)", "ref_id": "BIBREF21" }, { "start": 747, "end": 764, "text": "(Yi et al., 2020)", "ref_id": "BIBREF45" }, { "start": 947, "end": 966, "text": "(Zhou et al., 2020)", "ref_id": "BIBREF47" }, { "start": 1111, "end": 1130, "text": "(Lai et al., 2021a)", "ref_id": "BIBREF15" }, { "start": 1946, "end": 1965, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF41" } ], "ref_spans": [ { "start": 2208, "end": 2216, "text": "Figure A", "ref_id": null } ], "eq_spans": [], "section": "A.1 Evaluated Systems", "sec_num": null }, { "text": "6 https://www.nltk.org/ 7 https://github.com/mjpost/sacrebleu 8 https://github.com/pltrdy/rouge 9 https://radimrehurek.com/gensim/index.html 10 https://github.com/Tiiiger/bert_score 11 https://github.com/google-research/bleurt 12 https://github.com/Unbabel/COMET", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Annotation Guidelines and Interface", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Results of the WMT17 metrics shared task", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Kamran", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "489--513", "other_ids": { "DOI": [ "10.18653/v1/W17-4755" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating the evaluation metrics for style transfer: A case study in multilingual formality transfer", "authors": [ { "first": "Eleftheria", "middle": [], "last": "Briakou", "suffix": "" }, { "first": "Sweta", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1321--1336", "other_ids": { "DOI": [ "10.18653/v1/2021.emnlp-main.100" ] }, "num": null, "urls": [], "raw_text": "Eleftheria Briakou, Sweta Agrawal, Joel Tetreault, and Marine Carpuat. 2021a. Evaluating the evaluation metrics for style transfer: A case study in multilingual formality transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1321-1336, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Ol\u00e1, bonjour, salve! XFORMAL: A benchmark for multilingual formality style transfer", "authors": [ { "first": "Eleftheria", "middle": [], "last": "Briakou", "suffix": "" }, { "first": "Di", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3199--3216", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.256" ] }, "num": null, "urls": [], "raw_text": "Eleftheria Briakou, Di Lu, Ke Zhang, and Joel Tetreault. 2021b. Ol\u00e1, bonjour, salve! XFORMAL: A benchmark for multilingual formality style transfer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3199-3216, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Expertise style transfer: A new task towards better communication between experts and laymen", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ruihao", "middle": [], "last": "Shui", "suffix": "" }, { "first": "Liangming", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1061--1071", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.100" ] }, "num": null, "urls": [], "raw_text": "Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise style transfer: A new task towards better communication between experts and laymen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1061-1071, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Reference bias in monolingual machine translation evaluation", "authors": [ { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "77--82", "other_ids": { "DOI": [ "10.18653/v1/P16-2013" ] }, "num": null, "urls": [], "raw_text": "Marina Fomicheva and Lucia Specia. 2016. Reference bias in monolingual machine translation evaluation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 77-82, Berlin, Germany. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Continuous measurement scales in human evaluation of machine translation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "33--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Pro- ceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Can machine translation systems be evaluated by the crowd alone", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2015, "venue": "Natural Language Engineering", "volume": "23", "issue": "", "pages": "3--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2015. Can machine translation sys- tems be evaluated by the crowd alone. Natural Lan- guage Engineering, 23:3 -30.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Cycleconsistent adversarial autoencoders for unsupervised text style transfer", "authors": [ { "first": "Yufang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wentao", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Yiye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Changjian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Feiyu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2213--2223", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.201" ] }, "num": null, "urls": [], "raw_text": "Yufang Huang, Wentao Zhu, Deyi Xiong, Yiye Zhang, Changjian Hu, and Feiyu Xu. 2020. Cycle- consistent adversarial autoencoders for unsuper- vised text style transfer. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 2213-2223, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep learning for text style transfer: A survey", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2011.00416" ] }, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2021. Deep learning for text style transfer: A survey. arXiv preprint, arXiv: 2011.00416.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Disentangled representation learning for non-parallel text style transfer", "authors": [ { "first": "Vineet", "middle": [], "last": "John", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "424--434", "other_ids": { "DOI": [ "10.18653/v1/P19-1041" ] }, "num": null, "urls": [], "raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. 
Disentangled representation learning for non-parallel text style transfer. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 424-434, Florence, Italy. Association for Computational Lin- guistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Reformulating unsupervised style transfer as paraphrase generation", "authors": [ { "first": "Kalpesh", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "737--762", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.55" ] }, "num": null, "urls": [], "raw_text": "Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as para- phrase generation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 737-762, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to doc- ument distances. In Proceedings of the 32nd In- ternational Conference on Machine Learning, vol- ume 37 of Proceedings of Machine Learning Re- search, pages 957-966, Lille, France.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generic resources are what you need: Style transfer tasks without task-specific parallel training data", "authors": [ { "first": "Huiyuan", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4241--4254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2021a. Generic resources are what you need: Style transfer tasks without task-specific parallel training data. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4241-4254, Online and Punta Cana, Domini- can Republic. Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Thank you BART! rewarding pre-trained models improves formality style transfer", "authors": [ { "first": "Huiyuan", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "484--494", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-short.62" ] }, "num": null, "urls": [], "raw_text": "Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2021b. Thank you BART! rewarding pre-trained models improves formality style transfer. In Pro- ceedings of the 59th Annual Meeting of the Associa- tion for Computational Linguistics and the 11th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 484- 494, Online. Association for Computational Linguis- tics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Enhancing content preservation in text style transfer using reverse attention and conditional layer normalization", "authors": [ { "first": "Dongkyu", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Zhiliang", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Lanqing", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Nevin", "middle": [ "L" ], "last": "Zhang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "93--102", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.8" ] }, "num": null, "urls": [], "raw_text": "Dongkyu Lee, Zhiliang Tian, Lanqing Xue, and Nevin L. Zhang. 2021. Enhancing content preser- vation in text style transfer using reverse attention and conditional layer normalization. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 93-102, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1865--1874", "other_ids": { "DOI": [ "10.18653/v1/N18-1169" ] }, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018a. Delete, retrieve, generate: a simple approach to sen- timent and style transfer. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874, New Orleans, Louisiana. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1865--1874", "other_ids": { "DOI": [ "10.18653/v1/N18-1169" ] }, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018b. Delete, retrieve, generate: a simple approach to sen- timent and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874, New Orleans, Louisiana. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A dual reinforcement learning framework for unsupervised text style transfer", "authors": [ { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5116--5122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. 
In Proceedings of the 28th Inter- national Joint Conference on Artificial Intelligence, pages 5116-5122.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "StylePTB: A compositional benchmark for fine-grained controllable text style transfer", "authors": [ { "first": "Yiwei", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Paul", "middle": [ "Pu" ], "last": "Liang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Barnab\u00e1s", "middle": [], "last": "P\u00f3czos", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2116--2138", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.171" ] }, "num": null, "urls": [], "raw_text": "Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard Hovy, Barnab\u00e1s P\u00f3czos, Ruslan Salakhutdinov, and Louis- Philippe Morency. 2021. StylePTB: A composi- tional benchmark for fine-grained controllable text style transfer. In Proceedings of the 2021 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 2116-2138, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges", "authors": [ { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Johnny", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "62--90", "other_ids": { "DOI": [ "10.18653/v1/W19-5302" ] }, "num": null, "urls": [], "raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT sys- tems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Results of the WMT20 metrics shared task", "authors": [ { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Johnny", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "688--725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ond\u0159ej Bojar. 2020. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Evaluating style transfer for text", "authors": [ { "first": "Remi", "middle": [], "last": "Mir", "suffix": "" }, { "first": "Bjarke", "middle": [], "last": "Felbo", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Obradovich", "suffix": "" }, { "first": "Iyad", "middle": [], "last": "Rahwan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "495--504", "other_ids": { "DOI": [ "10.18653/v1/N19-1049" ] }, "num": null, "urls": [], "raw_text": "Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 495-504, Minneapolis, Minnesota. Association for Computa- tional Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multi-task neural models for translating between styles within and across languages", "authors": [ { "first": "Xing", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1008--1021", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xing Niu, Sudha Rao, and Marine Carpuat. 2018. Multi-task neural models for translating between styles within and across languages. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 1008-1021, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An empirical analysis of formality in online communication", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "61--74", "other_ids": { "DOI": [ "10.1162/tacl_a_00083" ] }, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Joel Tetreault. 2016. 
An empiri- cal analysis of formality in online communication. Transactions of the Association for Computational Linguistics, 4:61-74.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "chrF: character n-gram F-score for automatic MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "392--395", "other_ids": { "DOI": [ "10.18653/v1/W15-3049" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Style transfer through back-translation", "authors": [ { "first": "Yulia", "middle": [], "last": "Shrimai Prabhumoye", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "866--876", "other_ids": { "DOI": [ "10.18653/v1/P18-1080" ] }, "num": null, "urls": [], "raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhut- dinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 866-876, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "129--140", "other_ids": { "DOI": [ "10.18653/v1/N18-1012" ] }, "num": null, "urls": [], "raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Cor- pus, benchmarks and metrics for formality style transfer. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 129-140, New Orleans, Louisiana. Association for Computa- tional Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "COMET: A neural framework for MT evaluation", "authors": [ { "first": "Ricardo", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Ana", "middle": [ "C" ], "last": "Farinha", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2685--2702", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.213" ] }, "num": null, "urls": [], "raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 2685-2702, Online. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "LEWIS: Levenshtein editing for unsupervised text style transfer", "authors": [ { "first": "Machel", "middle": [], "last": "Reid", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "3932--3944", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.344" ] }, "num": null, "urls": [], "raw_text": "Machel Reid and Victor Zhong. 2021. LEWIS: Leven- shtein editing for unsupervised text style transfer. In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 3932-3944, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "BLEURT: Learning robust metrics for text generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7881--7892", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.704" ] }, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Controlling politeness in neural machine translation via side constraints", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "35--40", "other_ids": { "DOI": [ "10.18653/v1/N16-1005" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 35-40, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Style transfer from non-parallel text by cross-alignment", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "6833--6844", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Proceedings of the 31st Inter- national Conference on Neural Information Process- ing Systems, NIPS'17, page 6833-6844, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "transforming\" delete, retrieve, generate approach for controlled text style transfer", "authors": [ { "first": "Akhilesh", "middle": [], "last": "Sudhakar", "suffix": "" }, { "first": "Bhargav", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Maheswaran", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3269--3279", "other_ids": { "DOI": [ "10.18653/v1/D19-1322" ] }, "num": null, "urls": [], "raw_text": "Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Ma- heswaran. 2019. \"transforming\" delete, retrieve, generate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3269- 3279, Hong Kong, China. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Style transfer for texts: Retrain, report errors, compare with rewrites", "authors": [ { "first": "Alexey", "middle": [], "last": "Tikhonov", "suffix": "" }, { "first": "Viacheslav", "middle": [], "last": "Shibaev", "suffix": "" }, { "first": "Aleksander", "middle": [], "last": "Nagaev", "suffix": "" }, { "first": "Aigul", "middle": [], "last": "Nugmanova", "suffix": "" }, { "first": "Ivan", "middle": [ "P" ], "last": "Yamshchikov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3936--3945", "other_ids": { "DOI": [ "10.18653/v1/D19-1406" ] }, "num": null, "urls": [], "raw_text": "Alexey Tikhonov, Viacheslav Shibaev, Aleksander Na- gaev, Aigul Nugmanova, and Ivan P. Yamshchikov. 2019. Style transfer for texts: Retrain, report er- rors, compare with rewrites. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3936-3945, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The reliability of the itu-t p.85 standard for the evaluation of text-to-speech systems", "authors": [ { "first": "Yolanda", "middle": [], "last": "Vazquez", "suffix": "" }, { "first": "-Alvarez", "middle": [], "last": "", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Huckvale", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yolanda Vazquez-Alvarez and Mark Huckvale. 2002. 
The reliability of the itu-t p.85 standard for the eval- uation of text-to-speech systems.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuancheng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "979--988", "other_ids": { "DOI": [ "10.18653/v1/P18-1090" ] }, "num": null, "urls": [], "raw_text": "Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xu- ancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cy- cled reinforcement learning approach. 
In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 979-988, Melbourne, Australia. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Paraphrasing for style", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2012, "venue": "Mumbai, India. The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "2899--2914", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Pro- ceedings of COLING 2012, pages 2899-2914, Mum- bai, India. The COLING 2012 Organizing Commit- tee.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Styletransfer and paraphrase: Looking for a sensible semantic similarity metric", "authors": [ { "first": "P", "middle": [], "last": "Ivan", "suffix": "" }, { "first": "Viacheslav", "middle": [], "last": "Yamshchikov", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Shibaev", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Khlebnikov", "suffix": "" }, { "first": "", "middle": [], "last": "Tikhonov", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "14213--14220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan P. Yamshchikov, Viacheslav Shibaev, Nikolay Khlebnikov, and Alexey Tikhonov. 2021. Style- transfer and paraphrase: Looking for a sensible se- mantic similarity metric. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 14213-14220.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Text style transfer via learning style instance supported latent space", "authors": [ { "first": "Xiaoyuan", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Zhenghao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wenhao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20", "volume": "", "issue": "", "pages": "3801--3807", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyuan Yi, Zhenghao Liu, Wenhao Li, and Maosong Sun. 2020. Text style transfer via learning style in- stance supported latent space. 
In Proceedings of the Twenty-Ninth International Joint Conference on Ar- tificial Intelligence, IJCAI-20, pages 3801-3807.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Exploring contextual word-level style relevance for unsupervised style transfer", "authors": [ { "first": "Chulun", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Liangyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiachen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xinyan", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jinsong", "middle": [], "last": "Su", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7135--7144", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.639" ] }, "num": null, "urls": [], "raw_text": "Chulun Zhou, Liangyu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, and Hua Wu. 2020. Exploring contextual word-level style relevance for unsupervised style transfer. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7135-7144, Online. As- sociation for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Alignment, transformation, evaluation pairs.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Kendall's Tau-like correlation in style strength computed over the top-/last-N systems which are sorted by human judgements.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Automatic metrics results against human reference.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Kendall's Tau-like correlation in content preservation computed over the top-/last-N systems which are sorted by human judgements.", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "(a) A screenshot of task guidelines.(b) A screenshot of annotation interface.", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": "Screenshots of our interface.", "uris": null, "type_str": "figure", "num": null }, "TABREF2": { "content": "
                    N    R-PT16  C-PT16  C-GYAFC
System-level (r)    8    0.93    0.93    0.97
Segment-level (τ)   640  0.33    0.39    0.42
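As an illustration of how the two rows above are obtained, the following is a minimal sketch (not the code used in the paper) of system-level Pearson and segment-level Kendall correlation, assuming scipy; the scores are placeholders rather than the paper's data.

```python
# Minimal sketch of the two correlation levels in the table above.
# Placeholder scores, not the paper's data.
from scipy.stats import kendalltau, pearsonr

# System level: one aggregate human score and one metric score per system.
human_sys = [0.49, 0.47, 0.41, 0.30, 0.29, 0.09, -0.59, -0.73]
metric_sys = [0.93, 0.90, 0.88, 0.80, 0.78, 0.70, 0.40, 0.35]
r, _ = pearsonr(human_sys, metric_sys)

# Segment level: one pair of scores per evaluated output (640 in the paper).
human_seg = [3.2, 4.5, 1.8, 2.9]      # truncated for illustration
metric_seg = [0.61, 0.88, 0.25, 0.47]
tau, _ = kendalltau(human_seg, metric_seg)

print(f"system-level Pearson r = {r:.2f}, segment-level Kendall tau = {tau:.2f}")
```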
", "text": "presents the results of IAA for each aspect in each single survey and overall. Across the four surveys annotators have the highest agreement", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "content": "
Last-N    Top-N
", "text": "Correlation of automatic metrics in style strength with human judgements. The underlined scores indicate p < 0.01.", "num": null, "html": null, "type_str": "table" }, "TABREF5": { "content": "
System-level: Source | Single-Ref        Segment-level: Source | Single-Ref | Multi-Ref
Figure 4: Correlations of automatic metrics computed against source/reference in content preservation with human judgments. Underlining indicates p < 0.01.
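The "Kendall's Tau-like" statistic reported for the top-/last-N analyses counts pairwise agreements between the human ranking and the metric ranking. Below is a sketch of one common formulation, as used in WMT-style metric evaluations (concordant minus discordant pairs over all decided pairs); the top-N slicing is our assumption of how the figures restrict the system set.

```python
# Sketch of a Kendall's Tau-like statistic over the top-N systems,
# WMT-style: (concordant - discordant) / (concordant + discordant).
from itertools import combinations

def tau_like(human, metric):
    conc = disc = 0
    for i, j in combinations(range(len(human)), 2):
        h = human[i] - human[j]
        m = metric[i] - metric[j]
        if h * m > 0:
            conc += 1
        elif h * m < 0:
            disc += 1
    if conc + disc == 0:  # all pairs tied
        return 0.0
    return (conc - disc) / (conc + disc)

def tau_like_top_n(human, metric, n):
    # Restrict to the top-N systems according to human judgements.
    order = sorted(range(len(human)), key=lambda k: human[k], reverse=True)[:n]
    return tau_like([human[k] for k in order], [metric[k] for k in order])
```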
             BLEU  chrF  ROUGE-1  ROUGE-2  ROUGE-L  WMD   METEOR  BERTScore  BLEURT  COMET-w
Reference 2  0.28  0.37  0.33     0.10     0.36     0.46  0.21    0.59       0.61    0.61
Reference 3  0.25  0.41  0.37     0.12     0.35     0.47  0.34    0.60       0.60    0.55
Reference 4  0.37  0.41  0.46     0.24     0.46     0.49  0.31    0.60       0.56    0.62
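For single- versus multi-reference scoring of the kind compared here, footnote 7 points to sacrebleu; the following is a minimal sketch with hypothetical sentences, not the paper's evaluation script.

```python
# Sketch: single- vs multi-reference BLEU/chrF with sacrebleu.
# Sentences are hypothetical.
from sacrebleu.metrics import BLEU, CHRF

hyps = ["I do not know what you mean."]
refs = [
    ["I don't know what you mean."],         # reference stream 1
    ["I am not sure what you are saying."],  # reference stream 2
]

bleu, chrf = BLEU(), CHRF()
single = bleu.corpus_score(hyps, refs[:1])   # against the first reference only
multi = bleu.corpus_score(hyps, refs)        # against all references
print(single.score, multi.score, chrf.corpus_score(hyps, refs).score)
```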
", "text": "Human evaluation (z-score) and automatic metrics in content preservation. Notes: (i) \u2193 indicates the lower the score the better; (ii) COMET-w indicates that the input setting is not used.", "num": null, "html": null, "type_str": "table" }, "TABREF6": { "content": "", "text": "Kendall's Tau-like correlation between using the first human reference and other references for evaluation content preservation at segment-level.", "num": null, "html": null, "type_str": "table" }, "TABREF7": { "content": "
shows the absolute correlation of fluency metrics with human judgements. Unsurprisingly,
", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF8": { "content": "
Absolute correlation of automatic metrics in fluency with human judgements. The underlined scores indicate p < 0.01.
           Informal-to-Formal          Formal-to-Informal
           GPT2-Inf  GPT2-For  r      GPT2-Inf  GPT2-For  r
Source     76        143       –      87        268       –
Reference  60        37        0.21   115       270       0.13
BART       34        26        0.33   24        28        0.02
IBT        32        26        0.32   33        40        0.17
NIU        43        37        0.30   71        75        0.03
HIGH       41        35        0.62   80        75        0.00
RAO        54        57        0.33   54        55        0.02
ZHOU       189       218       0.36   103       111       0.42
YI         160       182       0.31   205       436       0.27
LUO        128       152       0.43   696       8191      0.17
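The GPT2-Inf and GPT2-For columns are perplexities from GPT-2 language models adapted to informal and formal text respectively. As a minimal sketch of how such a perplexity is computed with the Transformers library, the snippet below uses the off-the-shelf "gpt2" checkpoint as a stand-in for the adapted models (an assumption on our part).

```python
# Sketch: sentence perplexity under GPT-2 via transformers.
# Plain "gpt2" stands in for the informal/formal-adapted models.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids makes the model return the mean token NLL
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

print(perplexity("I would like to know your opinion."))
```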
", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF9": { "content": "
Style                            Content                          Fluency
System  Rank  AVE. s  AVE. z    System  Rank  AVE. s  AVE. z    System  Rank  AVE. s  AVE. z
BART    1     82.7     0.494    HIGH    1     92.4     0.542    BART    1     87.8     0.540
REF     2     82.3     0.469    NIU     2     90.7     0.491    IBT     2     86.0     0.491
IBT     3     80.1     0.407    BART    3     86.5     0.370    NIU     3     84.9     0.463
NIU     4     76.9     0.297    IBT     4     85.1     0.337    HIGH    4     83.3     0.420
HIGH    5     76.3     0.293    RAO     5     84.7     0.328    REF     5     82.4     0.385
RAO     6     70.2     0.085    REF     6     73.6     0.009    RAO     6     77.3     0.247
YI      7     51.1    -0.588    ZHOU    7     50.9    -0.659    ZHOU    7     45.1    -0.717
ZHOU    8     47.2    -0.726    YI      8     50.5    -0.669    YI      8     38.6    -0.903
LUO     9     46.7    -0.731    LUO     9     47.6    -0.749    LUO     9     37.9    -0.926
Table A.1: Results based on original human evaluation and z-score.
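The AVE. z columns standardise each annotator's raw scores before averaging, following the continuous-scale practice of Graham et al. (2013). The sketch below shows this per-annotator z-normalisation; the exact grouping used in the paper is an assumption here.

```python
# Sketch: per-annotator z-score standardisation behind the AVE. z columns.
# Grouping scores by annotator follows common practice (Graham et al., 2013);
# the paper's exact procedure is assumed.
import statistics

def z_normalise(scores_by_annotator):
    z = {}
    for annotator, scores in scores_by_annotator.items():
        mu = statistics.mean(scores)
        sd = statistics.pstdev(scores) or 1.0  # guard against zero variance
        z[annotator] = [(s - mu) / sd for s in scores]
    return z

raw = {"a1": [85, 70, 40], "a2": [95, 90, 60]}  # hypothetical 0-100 ratings
print(z_normalise(raw))
```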
            BLEU   chrF   ROUGE-1  ROUGE-2  ROUGE-L  WMD (↓)  METEOR  BERTScore  BLEURT  COMET-w
Reference 1
  Reference 0.291  0.492  0.533    0.307    0.501    1.334    0.487   0.605       0.235   0.314
  HIGH      0.366  0.547  0.624    0.401    0.582    1.086    0.554   0.643       0.347   0.400
  NIU       0.376  0.560  0.646    0.434    0.605    1.036    0.567   0.649       0.373   0.418
  BART      0.382  0.555  0.632    0.412    0.596    1.053    0.573   0.646       0.388   0.425
  IBT       0.373  0.550  0.620    0.404    0.582    1.094    0.574   0.635       0.350   0.391
  RAO       0.336  0.525  0.602    0.367    0.561    1.145    0.533   0.601       0.234   0.305
  ZHOU      0.253  0.461  0.536    0.300    0.494    1.351    0.469   0.508      -0.200  -0.125
  YI        0.288  0.483  0.551    0.324    0.517    1.307    0.491   0.524      -0.154  -0.059
  LUO       0.222  0.416  0.483    0.272    0.445    1.514    0.434   0.453      -0.289  -0.278
Reference 2
  Reference 0.231  0.459  0.494    0.259    0.449    1.469    0.444   0.565       0.155   0.202
  HIGH      0.300  0.515  0.564    0.342    0.512    1.260    0.521   0.605       0.317   0.289
  NIU       0.333  0.525  0.578    0.369    0.526    1.202    0.538   0.617       0.329   0.286
  BART      0.305  0.511  0.561    0.349    0.513    1.278    0.526   0.605       0.353   0.279
  IBT       0.291  0.503  0.553    0.335    0.503    1.289    0.512   0.595       0.305   0.271
  RAO       0.297  0.505  0.556    0.344    0.512    1.281    0.512   0.568       0.200   0.196
  ZHOU      0.245  0.451  0.495    0.271    0.444    1.488    0.476   0.478      -0.206  -0.212
  YI        0.225  0.443  0.497    0.263    0.454    1.475    0.457   0.488      -0.203  -0.167
  LUO       0.189  0.381  0.419    0.209    0.378    1.694    0.389   0.425      -0.266  -0.368
Reference 3
  Reference 0.213  0.442  0.472    0.231    0.434    1.537    0.433   0.567       0.102   0.190
  HIGH      0.316  0.513  0.566    0.340    0.528    1.229    0.506   0.617       0.236   0.326
  NIU       0.325  0.509  0.574    0.351    0.534    1.232    0.505   0.612       0.257   0.309
  BART      0.341  0.517  0.577    0.361    0.539    1.208    0.526   0.617       0.274   0.354
  IBT       0.307  0.514  0.570    0.344    0.531    1.220    0.522   0.614       0.267   0.328
  RAO       0.293  0.499  0.556    0.329    0.511    1.288    0.493   0.574       0.140   0.252
  ZHOU      0.227  0.419  0.478    0.245    0.438    1.496    0.421   0.489      -0.257  -0.186
  YI        0.220  0.436  0.487    0.255    0.449    1.477    0.416   0.488      -0.263  -0.149
  LUO       0.189  0.380  0.422    0.244    0.390    1.671    0.371   0.431      -0.346  -0.356
Reference 4
  Reference 0.231  0.459  0.505    0.261    0.461    1.438    0.466   0.595       0.224   0.293
  HIGH      0.295  0.511  0.585    0.343    0.535    1.227    0.526   0.634       0.327   0.412
  NIU       0.310  0.518  0.607    0.365    0.552    1.173    0.548   0.637       0.349   0.413
  BART      0.327  0.532  0.621    0.384    0.574    1.128    0.565   0.655       0.405   0.447
  IBT       0.316  0.520  0.592    0.363    0.543    1.217    0.534   0.632       0.332   0.388
  RAO       0.293  0.505  0.577    0.336    0.526    1.234    0.541   0.600       0.250   0.315
  ZHOU      0.210  0.425  0.507    0.248    0.453    1.451    0.448   0.507      -0.212  -0.162
  YI        0.204  0.432  0.501    0.250    0.458    1.466    0.430   0.509      -0.182  -0.086
  LUO       0.197  0.393  0.458    0.243    0.410    1.591    0.420   0.451      -0.282  -0.317
", "text": "Style / Content / Fluency panels, each reporting System, Rank, AVE. s and AVE. z.", "num": null, "html": null, "type_str": "table" } } } }