{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:31.380965Z"
},
"title": "Learning to Generate Multiple Style Transfer Outputs for an Input Sentence",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington o NVIDIA Research",
"location": {}
},
"email": "kvlin@uw.edu"
},
{
"first": "Ming-Yu",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington o NVIDIA Research",
"location": {}
},
"email": "mingyul@nvidia.com"
},
{
"first": "Ming-Ting",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington o NVIDIA Research",
"location": {}
},
"email": ""
},
{
"first": "Jan",
"middle": [],
"last": "Kautz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington o NVIDIA Research",
"location": {}
},
"email": "jkautz@nvidia.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text style transfer refers to the task of rephrasing a given text in a different style. While various methods have been proposed to advance the state of the art, they often assume the transfer output follows a delta distribution, and thus their models cannot generate different style transfer results for a given input text. To address the limitation, we propose a one-to-many text style transfer framework. In contrast to prior works that learn a one-to-one mapping that converts an input sentence to one output sentence, our approach learns a one-to-many mapping that can convert an input sentence to multiple different output sentences, while preserving the input content. This is achieved by applying adversarial training with a latent decomposition scheme. Specifically, we decompose the latent representation of the input sentence to a style code that captures the language style variation and a content code that encodes the language style-independent content. We then combine the content code with the style code for generating a style transfer output. By combining the same content code with a different style code, we generate a different style transfer output. Extensive experimental results with comparisons to several text style transfer approaches on multiple public datasets using a diverse set of performance metrics validate effectiveness of the proposed approach.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Text style transfer refers to the task of rephrasing a given text in a different style. While various methods have been proposed to advance the state of the art, they often assume the transfer output follows a delta distribution, and thus their models cannot generate different style transfer results for a given input text. To address the limitation, we propose a one-to-many text style transfer framework. In contrast to prior works that learn a one-to-one mapping that converts an input sentence to one output sentence, our approach learns a one-to-many mapping that can convert an input sentence to multiple different output sentences, while preserving the input content. This is achieved by applying adversarial training with a latent decomposition scheme. Specifically, we decompose the latent representation of the input sentence to a style code that captures the language style variation and a content code that encodes the language style-independent content. We then combine the content code with the style code for generating a style transfer output. By combining the same content code with a different style code, we generate a different style transfer output. Extensive experimental results with comparisons to several text style transfer approaches on multiple public datasets using a diverse set of performance metrics validate effectiveness of the proposed approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text style transfer aims at changing the language style of an input sentence to a target style with the constraint that the style-independent content should remain the same across the transfer. While several methods are proposed for the task (John et al., 2019; Smith et al., 2019; Jhamtani et al., 2017; Kerpedjiev, 1992; Xu et al., 2012; Subramanian et al., 2018; Xu et al., 2018) , they commonly model the distribution of the transfer outputs as a delta distribution, which implies a one-to-one mapping mechanism that converts an input sentence in one language style to a single corresponding sentence in the target language style.",
"cite_spans": [
{
"start": 242,
"end": 261,
"text": "(John et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 262,
"end": 281,
"text": "Smith et al., 2019;",
"ref_id": "BIBREF37"
},
{
"start": 282,
"end": 304,
"text": "Jhamtani et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 305,
"end": 322,
"text": "Kerpedjiev, 1992;",
"ref_id": "BIBREF19"
},
{
"start": 323,
"end": 339,
"text": "Xu et al., 2012;",
"ref_id": "BIBREF47"
},
{
"start": 340,
"end": 365,
"text": "Subramanian et al., 2018;",
"ref_id": null
},
{
"start": 366,
"end": 382,
"text": "Xu et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We argue a multimodal mapping is better suited for the text style transfer task. For examples, the following two reviews: 1. \"This lightweight vacuum is simply effective.\", 2. \"This easy-to-carry vacuum picks up dust and trash amazingly well.\" would both be considered correct negative-topositive transfer results for the input sentence, \"This heavy vacuum sucks\". Furthermore, a one-tomany mapping allows a user to pick the preferred text style transfer outputs in the inference time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a one-to-many text style transfer framework that can be trained using non-parallel text. That is, we assume the training data consists of two corpora of different styles, and no paired input and output sentences are available. The core of our framework is a latent decomposition scheme learned via adversarial training. We decompose the latent representation of a sentence into two parts where one encodes the style of a sentence, while the other encodes the styleindependent content of the sentence. In the test time, for changing the style of an input sentence, we first extract its content code. We then sample a sentence from the training dataset of the target style corpus and extract its style code. The two codes are combined to generate an output sentence, which would carry the same content but in the target style. As sampling a different style sentence, we have a different style code and have a different style transfer output. We conduct experiments with comparison to several state-of-the-art approaches on multiple public datasets, including Yelp (yel) and Amazon (He and McAuley, 2016) . The results, evaluated using various performance metrics, including content preservation, style accuracy, output diversity, and user preference, show that the model trained with our framework performs consistently better than the competing approaches.",
"cite_spans": [
{
"start": 1105,
"end": 1127,
"text": "(He and McAuley, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: We formulate text style transfer as a one-to-many mapping function. Left: We decompose the sentence x 1 to a content code c 1 that controls the sentence meaning, and a style code s 1 that captures the stylistic properties of the input x 1 . Right: One-to-many style transfer is achieved by fusing the content code c 1 and a style code s 2 randomly sampled from the target style space S 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "X1",
"sec_num": null
},
{
"text": "Let X 1 and X 2 be two spaces of sentences of two different language styles. Let Z 1 and Z 2 be their corresponding latent spaces. We further assume Z 1 and Z 2 can be decomposed into two latent spaces Z 1 = S 1 \u00d7 C 1 and Z 2 = S 2 \u00d7 C 2 where S 1 and S 2 are the latent spaces that control the style variations in X 1 and X 2 and C 1 and C 2 are the latent spaces that control the style-independent content information. Since C 1 and C 2 are style-independent content representation, we have C \u2261 C 1 \u2261 C 2 . For example, X 1 and X 2 may denote the spaces of negative and positive product reviews where the elements in C encode the product and its features reviewed in a sentence, the elements in S 1 represent variations in negative styles such as the degree of preferences and the exact phrasing, and the elements in S 2 represent the corresponding variations in positive styles. The above modeling implies 1. A sentence x 1 \u2208 X 1 can be decomposed to a content code c 1 \u2208 C and a style code s 1 \u2208 S 1 . 2. A sentence x 1 \u2208 X 1 can be reconstructed by fusing its content code c 1 and its style code s 1 . 3. To transfer a sentence in X 1 to a corresponding sentence in X 2 , one can simply fuse the content code c 1 with a style code s 2 where s 2 \u2208 S 2 . Figure 1 provides a visualization of the modeling.",
"cite_spans": [],
"ref_spans": [
{
"start": 1258,
"end": 1266,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Under this formulation, the text style transfer mechanism is given by a conditional distribution p(x 1\u21922 |x 1 ), where x 1\u21922 is the sentence generated by transferring sentence x 1 to the target domain X 2 . Note that existing works (Fu et al., 2018; formulate the text style transfer mechanism to be a one-to-one mapping that converts an input sentence to only a single corresponding output sentence. That is p(x 1\u21922 |x 1 ) = \u03b4(x 1 ) where \u03b4 is the Dirac delta function. As a results, they",
"cite_spans": [
{
"start": 232,
"end": 249,
"text": "(Fu et al., 2018;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "I will never go to this restaurant again. This is definitely the best store! I will definitely go to this restaurant again! Figure 2 : Overview of the proposed one-to-many style transfer approach. We show an example of transferring a negative restaurant review sentence x 1 to multiple different positive ones y 1\u21922 . To transfer the sentence, we first randomly sample a sentence x 2 from the space of positive reviews X 2 and extract its style code s 2 using E s 2 . We then compute z 1\u21922 by combining c 1 with s 2 and convert it to the transfer output y 1\u21922 using G 2 . We note that by sampling a different x 2 and hence a different s 2 , we have a different style transfer output y 1\u21922 .",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 132,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "can not be used to generate multiple style transfer outputs for an input sentence. One-to-Many Style Transfer. To model the transfer function, we use a framework consists of a set of networks as visualized in Figure 2 . It has a content encoder E c i , a style encoder E s i , and a decoder G i for each domain X i . In the following, we will explain the framework in details using the task of transferring from X 1 to X 2 . The task of transferring from X 2 to X 1 follows the same pattern.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The content encoder E_1^c takes the sequence x_1 = {x_1^1, x_1^2, . . . , x_1^{m_1(x_1)}} of m_1(x_1) elements as input and computes a content code c_1 \u2261 {c_1^1, c_1^2, . . . , c_1^{m_1(x_1)}} = E_1^c(x_1), which is a sequence of vectors describing the sentence's style-independent content. The style encoder E_2^s converts x_2 to a style code s_2 \u2261 (s_{2,\u00b5}, s_{2,\u03c3}) = E_2^s(x_2), which is a pair of vectors. Note that we will use s_{2,\u00b5} and s_{2,\u03c3} as the new mean and standard deviation of the feature activations of the input x_1 for the style transfer task of converting a sentence in X_1 to a corresponding sentence in X_2. Specifically, we combine the content code c_1 and the style code s_2 using a composition function F, which will be discussed momentarily, to obtain z_{1\u21922} = {z_{1\u21922}^1, z_{1\u21922}^2, . . . , z_{1\u21922}^{m_1(x_1)}}. Then, we use the decoder G_2 to map the representation z_{1\u21922} to the output sequence y_{1\u21922}. Note that s_2 is extracted from a randomly sampled x_2 \u2208 X_2; by sampling a different sentence, say x_2' \u2208 X_2 where x_2' \u2260 x_2, we have s_2' \u2260 s_2 and hence a different style transfer output. By treating style variations as sampleable quantities, we achieve the one-to-many style transfer capability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
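{
"text": "To make the transfer procedure concrete, the following is a minimal Python sketch of the one-to-many inference loop described above. The callables enc_content, enc_style, compose, and decode are hypothetical stand-ins for E_1^c, E_2^s, F, and G_2; this is an illustration of the sampling mechanism under those assumptions, not the authors' released implementation.
import random

def one_to_many_transfer(x1, target_corpus, enc_content, enc_style,
                         compose, decode, n_outputs=5):
    # Extract the style-independent content code of the input once.
    c1 = enc_content(x1)
    outputs = []
    # Each randomly sampled target-style sentence supplies a different style code.
    for x2 in random.sample(target_corpus, n_outputs):
        s2 = enc_style(x2)           # style code of the sampled sentence
        z = compose(c1, s2)          # fuse content with the sampled style
        outputs.append(decode(z))    # decode into a target-style sentence
    return outputs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},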
{
"text": "The combination function is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F (c k i , s j ) = s j,\u03c3 \u2297(c k i \u2212\u00b5(c i )) \u03c3(c i )+s j,\u00b5 ,",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "where \u2297 denotes element-wise product, denotes element-wise division, \u00b5(\u2022) and \u03c3(\u2022) indicate the operation of computing mean and standard derivation for the content latent code by treating each vector in c i as an independent realization of a random variable. In other words, the latent representation z k i\u2192j = F (c k i , s j ) is constructed by first normalizing the content code c i in the latent space and then applying the non-linear transformation whose parameters are provided from a sentence of target style. Since F contains no learnable parameters, we consider F as part of the decoder. This design draws inspirations from image style transfer works (Huang and Belongie, 2017; Dumoulin et al., 2016) , which show that image style transfer can be achieved by controlling the mean and variance of the feature activations in the neural networks. We hypothesize this is the same case for the text style transfer task and apply it to achieve the oneto-many style transfer capability. Network Design. We realize the content encoder E c i using a convolutional network. To ensure the length of the output sequence c is equal to the length of the input sentence, we pad the input by m \u2212 1 zero vectors on both left and right side, where m is the length of the input sequence as discussed in (Gehring et al., 2017) . For the convolution operation, we do not include any stride convolution. We also realize the style encoder E s i using a convolutional network. To extract the style code, after several convolution layers, we apply global average pooling and then project the results to s i,\u00b5 and s i,\u03c3 using a two-layer multi-layer perceptron. We apply the log-exponential nonlinearity to compute s i,\u03c3 to ensure the outputs are strictly positive, required for modeling the deviations. The decoder G i is realized using a convolutional network with an attention mechanism followed by a convolutional sequence-to-sequence network (ConvS2S) (Gehring et al., 2017) . We realized our method based on ConvS2S, but it can be extended to work with transformer models (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019) . Further details are given in the supplementary materials.",
"cite_spans": [
{
"start": 659,
"end": 685,
"text": "(Huang and Belongie, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 686,
"end": 708,
"text": "Dumoulin et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 1292,
"end": 1314,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 1939,
"end": 1961,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 2060,
"end": 2082,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF42"
},
{
"start": 2083,
"end": 2103,
"text": "Devlin et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 2104,
"end": 2125,
"text": "Radford et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
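{
"text": "As an illustration of Eq. (1), below is a minimal PyTorch sketch of the composition function F, assuming a (length, dim) content code and (dim,)-shaped style statistics; the small eps added for numerical stability is our assumption and is not stated in the paper.
import torch

def compose(c, s_mu, s_sigma, eps=1e-5):
    # Eq. (1): normalize the content code with its own statistics, then
    # scale and shift it with the style code, in the spirit of adaptive
    # instance normalization (Huang and Belongie, 2017).
    mu = c.mean(dim=0, keepdim=True)     # mean over sequence positions
    sigma = c.std(dim=0, keepdim=True)   # std over sequence positions
    return s_sigma * (c - mu) / (sigma + eps) + s_mu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},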
{
"text": "We train our one-to-many text style transfer model by minimizing multiple loss terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "Reconstruction loss. We use reconstruction loss to regularize the text style transfer learning. Specifically, we assume the pair of content encoder E c i and style encoder E s i and the decoder G i form an auto-encoder. We train them by minimizing the negative log likelihood of the training corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "L i rec = E x i [\u2212 log P (y k i |x k i ; \u03b8 E c i , \u03b8 E s i , \u03b8 G i )] (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "where \u03b8_{E_i^c}, \u03b8_{E_i^s}, and \u03b8_{G_i} denote the parameters of E_i^c, E_i^s, and G_i, respectively. For each training sentence, G_i synthesizes the output sequence by predicting the most probable token y_t based on the latent representation z_i \u2261 {z_i^1, z_i^2, ..., z_i^m} and the previous output predictions {y_1, y_2, . . . , y_{t\u22121}}, so that the probability of a sentence can be calculated by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y|x;\u03b8 E c i , \u03b8 E s i , \u03b8 G i ) = T t=1 p(y t |z i , y 1 , y 2 , . . . , y t\u22121 ; \u03b8 G i ),",
"eq_num": "(3)"
}
],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "where t denotes the token index and T is the sentence length. Following (Gehring et al., 2017) , the probability of a token is computed by the linear projection of the decoder output using softmax.",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
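{
"text": "A minimal PyTorch sketch of Eqs. (2)-(3): per-step token probabilities come from a softmax over the decoder's linear projection, and the sequence negative log-likelihood factorizes over time steps. The tensor shapes and padding handling are assumptions.
import torch.nn.functional as F

def sequence_nll(logits, targets, pad_idx=0):
    # logits: (T, vocab) decoder outputs; targets: (T,) gold token ids.
    log_probs = F.log_softmax(logits, dim=-1)  # log p(y_t | z, y_<t)
    # Negative log-likelihood of the gold tokens, ignoring padding.
    return F.nll_loss(log_probs, targets, ignore_index=pad_idx)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},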
{
"text": "Back-translation loss. Inspired by recent studies (Prabhumoye et al., Sennrich et al., 2015; Brislin, 1970) that show that back-translation loss, which is closely related to the cycle-consistency Figure 3 : Illustration of the back-translation loss. We transfer x 1 to the domain of X 2 and then transfer it back to the domain of X 1 using its original style code s 1 . The resultant sentence x 1\u21922\u21921 should be as similar as possible to x 1 if the content code is preserved across transfer. To tackle the non-differentiable of the sentence decoding mechanism (beam search), we replace the hard decoding of x 1\u21922 by a learned non-linear projections between the decoder G 2 and the content encoder E c 2 .",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "Sennrich et al., 2015;",
"ref_id": "BIBREF35"
},
{
"start": 93,
"end": 107,
"text": "Brislin, 1970)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "E1 E2 c s c1 s2 E1 s s1 G1 G2 E2 c c1 2 z1 2 1 x1 2 1 x1 x2 F z1 2 F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "loss (Zhu et al., 2017a) used in computer vision, is helpful for preserving the content of the input, we adopt a back-translation loss to regularize the learning. To achieve the goal, as shown in Figure 3 , we transfer the input x 1 to the other style domain X 2 . We then transfer it back to the original domain X 1 by using its original style code s 1 . By doing so, the resulting sentence x 1\u21922\u21921 should be as similar as possible to the original input x 1 . In other words, we minimize the discrepancy between x 1 and x 1\u21922\u21921 given by",
"cite_spans": [
{
"start": 5,
"end": 24,
"text": "(Zhu et al., 2017a)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L 1 back = E x 1 ,x 2 [\u2212 log P (y k 1 |x k 1\u21922\u21921 ; \u03b8)]",
"eq_num": "(4)"
}
],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "where \u03b8 = {\u03b8_{E_1^c}, \u03b8_{E_1^s}, \u03b8_{G_1}, \u03b8_{E_2^c}, \u03b8_{E_2^s}, \u03b8_{G_2}}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "We also define L 2 back in a similar way. To avoid the non-differentiability of the beam search (Och and Ney, 2004; Sutskever et al., 2014), we substitute the hard decoding of x 1\u21922 by using a set of differentiable non-linear transformations between the decoder G 2 and the content encoder E c 1 when minimizing the back-translation loss. The non-linear transformations project the feature activation of the second last layer of the decoder G 2 to the second layer of the content encoder E c 1 . These non-linear projections are learned by the multilayer perceptron (MLP), which are trained jointly with the text style transfer task. We also apply the same mechanism to compute x 2\u21921 . This way, our model can be trained purely using back-propagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
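{
"text": "The following is a minimal PyTorch sketch of the differentiable bridge used to bypass beam search in the back-translation path: an MLP that maps activations of the decoder's second-to-last layer into the content encoder's second layer. The layer sizes and depth are assumptions (the paper only states that the codes are 256-dimensional).
import torch.nn as nn

class BackTranslationBridge(nn.Module):
    def __init__(self, dec_dim=256, enc_dim=256, hidden_dim=256):
        super().__init__()
        # Learned non-linear projection, trained jointly with the
        # style transfer task.
        self.mlp = nn.Sequential(
            nn.Linear(dec_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, enc_dim),
        )

    def forward(self, dec_hidden):
        # dec_hidden: (length, dec_dim) activations from the decoder; the
        # output stands in for the content encoder's second-layer input.
        return self.mlp(dec_hidden)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},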
{
"text": "To ensure the MLP correctly project the feature activation to the second layer of E c 2 , we enforce the output of the MLP to be as similar as possible to the feature activation of the second layer of E c 1 . This is based on the idea that x 1 and x 1\u21922 should have the same content code across transfer, and their feature activation in the content encoder should also be the same. Accordingly, we apply Mean Square Error (MSE) loss function to achieve this objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "L_mse^1 = E_{x_1,x_2}[||E_2^{c,h}(x_{1\u21922}) \u2212 E_1^{c,h}(x_1)||_2^2], (5) where E_1^{c,h} and E_2^{c,h} denote the functions for computing the feature activations of the second layer of E_1^c and E_2^c, respectively. The loss L_mse^2 for the other domain is defined in a similar way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "2.1"
},
{
"text": "Style classification loss. During learning, we enforce a style classification loss on the style code s_i = E_i^s(x_i) with the standard cross-entropy loss L_cls^i. This encourages the style code s_i to capture the stylistic properties of the input sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "Adversarial loss. We use GANs (Goodfellow et al., 2014) for matching the distribution of the input latent code to the decoder from the reconstruction streams to the distribution of the input latent code to the decoder from the translation stream. That is (1) we match the distribution of z 1\u21922 to the distribution of z 2 , and (2) we match the distribution of z 2\u21921 to the distribution of z 1 . This way we ensure distribution of the transfer outputs matches distribution of the target style sentences since they use the same decoder. As we apply adversarial training to the latent representation, we also avoid dealing with the non-differentiability of beam search. The adversarial loss for the second domain is given by",
"cite_spans": [
{
"start": 30,
"end": 55,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L 2 adv = E x 1 ,x 2 [log(1 \u2212 D 2 (z 1\u21922 ))] + E x 2 [log(D 2 (z 2 ))] ,",
"eq_num": "(6)"
}
],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "where D 2 is the discriminator which aims at distinguishing the latent representation of the sentence z 1\u21922 from z 2 = C z (c 2 , s 2 ). The adversarial loss L 1 adv is defined in a similar manner. Overall learning objective. We then learn a one-to-many text style transfer model by solving",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
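{
"text": "A minimal PyTorch sketch of the latent-space adversarial loss in Eq. (6), assuming a discriminator d2 that outputs probabilities in (0, 1); splitting the minimax objective into separate discriminator and generator steps with the usual non-saturating generator loss is our assumption.
import torch

def latent_adv_losses(d2, z_transfer, z_recon, eps=1e-8):
    # Discriminator: distinguish reconstruction-stream codes z_2 (real)
    # from transferred codes z_{1->2} (fake).
    d_loss = -(torch.log(d2(z_recon) + eps).mean()
               + torch.log(1 - d2(z_transfer.detach()) + eps).mean())
    # Generator: make the transferred codes indistinguishable from real ones.
    g_loss = -torch.log(d2(z_transfer) + eps).mean()
    return d_loss, g_loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},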
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min E 1 ,E 2 ,G 1 ,G 2 max D 1 ,D 2 2 i=1 L i rec + L i back +L i mse + L i cls + L i adv .",
"eq_num": "(7)"
}
],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
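{
"text": "From the generator side, Eq. (7) is the sum of the five terms over both domains; a minimal sketch is below. Equal weighting of the terms is shown because the paper does not state per-term weights.
def generator_objective(losses):
    # losses: dict mapping domain index i in {1, 2} to a dict holding the
    # five per-domain terms of Eq. (7).
    terms = ('rec', 'back', 'mse', 'cls', 'adv')
    return sum(losses[i][t] for i in (1, 2) for t in terms)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},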
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "In the following, we first introduce the datasets and evaluation metrics and then present the experiment results with comparison to the competing methods. Datasets. We use the following datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "\u2022 Amazon product reviews (Amazon) (He and McAuley, 2016) contains 277, 228 positive and 277, 769 negative review sentences for training, and 500 positive and 500 negative review sentences for testing. The length of a sentence ranges from 8 to 25 words. We use this dataset for converting a negative product review to a positive one, and vice versa. Our evaluation follows the protocol described in We use this dataset for converting a negative restaurant review to a positive one, and vice versa. We use two evaluation settings: Yelp500 and Yelp25000. Yelp500 is proposed by , which includes randomly sampled 500 positive and 500 negative sentences from the test set, while Yelp25000 includes randomly sampled 25000 positive and 25000 negative sentences from the test set. Evaluation metrics. We evaluate a text style transfer model on several aspects. Firstly, the transfer output should carry the target style (style score). Secondly, the style-independent content should be preserved (content preservation score). We also measure the diversity of the style transfer outputs for an input sentence (diversity score).",
"cite_spans": [
{
"start": 34,
"end": 56,
"text": "(He and McAuley, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "\u2022 Style score. We use a classifier to evaluate the fidelity of the style transfer results (Fu et al., 2018; . Specifically, we apply the Byte-mLSTM (Radford et al., 2017) to classify the output sentence generated by a text style transfer model. As transferring a negative sentence to a positive one, we expect a good transfer model should be able to generate a sentence that is classified positive by the classifier. The overall style transfer performance of a model is then given by the average accuracy on the test set measured by the classifier. \u2022 Content score. We build a style-independent distance metric that can quantify content similarity between two sentences, by comparing embeddings of the sentences after removing their style words. Specifically, we compute embedding of each non-style word in the sentence using the word2vec (Mikolov et al., 2013) . Next, we compute the average embedding, which serves as the content representation of the sentence. The content similarity between two sentences is given by the cosine distance of their average embeddings. We compute the relative n-gram frequency to determine which word is a style word based on the observation that the language style is largely encoded in the n-gram distribution (Xu et al., 2012) . This is in spirit similar to the term frequency-inverse document frequency analysis (Sparck Jones, 1972) . Let D 1 and D 2 be the n-gram frequencies of two corpora of different styles. The style magnitude of an n-gram u in style domain i is given by",
"cite_spans": [
{
"start": 90,
"end": 107,
"text": "(Fu et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 148,
"end": 170,
"text": "(Radford et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 839,
"end": 861,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 1246,
"end": 1263,
"text": "(Xu et al., 2012)",
"ref_id": "BIBREF47"
},
{
"start": 1350,
"end": 1370,
"text": "(Sparck Jones, 1972)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "s i (u) = D i (u) + \u03bb j =i D j (u) + \u03bb (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "where \u03bb is a small constant. We use 1gram. A word is considered a style word if min k\u2208{i,j} s k (u) is greater than a threshold. \u2022 Diversity score. To quantify the diversity of the style transfer outputs, we resort to the self-BLEU score proposed by Zhu et al. (2018) . Given an input sentence, we apply the style transfer model 5 times to obtain 5 outputs. We then compute self-BLEU scores between any two generated sentences (10 pairs). We apply this procedure to all the sentences in the test set and compute the average self-BLEU score v. After that, we define the diversity score as 100 \u2212 v. A model with a higher diversity score means that the model is better in generating diverse outputs. In the experiments, we denote Diversity-K as the diversity score computed by using self-BLEU-K. Implementation. We use the convolutional sequence-to-sequence model (Gehring et al., 2017) . Our content and style encoder consist of 3 convolution layers, respectively. The decoder has 4 convolution layers. The content and style codes are 256 dimensional. We use the pytorch (Paszke et al., 2017) and fairseq (Ott et al., 2019) libraries and train our model using a single GeForce GTX Ti GPU. We use the SGD algorithm with the learning rate set to 0.1. Once the content and style scores converge, we reduce the learning rate by an order of magnitude after every epoch until it reaches 0.0001. Detail model parameters are given in the appendix. Baselines. We compare the proposed approach to the following competing methods.",
"cite_spans": [
{
"start": 250,
"end": 267,
"text": "Zhu et al. (2018)",
"ref_id": "BIBREF52"
},
{
"start": 861,
"end": 883,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 1069,
"end": 1090,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 1103,
"end": 1121,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
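{
"text": "As a worked illustration of the evaluation protocol, the sketch below implements the style-magnitude rule of Eq. (8) over 1-grams and the diversity score built from self-BLEU. The threshold value and the self_bleu helper (any BLEU implementation could be plugged in) are assumptions.
from collections import Counter
from itertools import combinations

def style_words(corpus_1, corpus_2, threshold=2.0, lam=1.0):
    # Eq. (8) with 1-grams: s_i(u) = (D_i(u) + lam) / (D_j(u) + lam).
    d1 = Counter(w for sent in corpus_1 for w in sent.split())
    d2 = Counter(w for sent in corpus_2 for w in sent.split())
    def s(di, dj, u):
        return (di[u] + lam) / (dj[u] + lam)
    # A word is a style word if its magnitude exceeds the threshold for
    # both domains (the min over k in {i, j}).
    return {u for u in set(d1) | set(d2)
            if min(s(d1, d2, u), s(d2, d1, u)) > threshold}

def diversity_score(outputs, self_bleu):
    # outputs: the 5 transfer results for one input; self_bleu: a callable
    # returning the BLEU of one output against another (10 pairs total).
    pairs = list(combinations(outputs, 2))
    v = sum(self_bleu(a, b) for a, b in pairs) / len(pairs)
    return 100 - v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},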
{
"text": "\u2022 CAE is based on autoencoder and is trained using a GAN framework. It assumes a shared content latent space between different domains and computes the content code by using a content encoder. The output is generated with a pre-defined binary style code. \u2022 MD (Fu et al., 2018) extends the CAE to work with multiple style-specific decoders. It learns style-independent representation by adversarial training and generates output sentences by using style-specific decoders. \u2022 BTS (Prabhumoye et al., 2018) learns styleindependent representations by using backtranslation techniques. BTS assumes the latent representation of the sentence preserves the meaning after machine translation. \u2022 DR employs retrieval techniques to find similar sentences with desired style. They use neural networks to fuse the input and the retrieved sentences for generating the output. \u2022 CopyPast simply uses the input as the output, which serves as a reference for evaluation.",
"cite_spans": [
{
"start": 260,
"end": 277,
"text": "(Fu et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style classification loss. During learning, we enforce a style classification loss on the style code",
"sec_num": null
},
{
"text": "Our model can generate different text style transfer outputs for an input sentence. To generate multiple outputs for an input, we randomly sample a style code from the target style training dataset during testing. Since the CAE and BTS (Prabhumoye et al., 2018) are not designed for the one-to-many style transfer, we extend their methods to achieve this capability by injecting random noise, termed CAE+noise and BTS+noise. Specifically, we add random Gaussian noise to the latent code of their models during training, which is based on the intuition that the randomness would result in different activations in the networks, leading to different outputs. Table 1 shows the average diversity scores achieved by the competing methods over 5 runs. We find that our method performs favorably against others. User Study. We conduct a user study to evaluate one-to-many style transfer performance using the Amazon Mechanical Turk (AMT) platform. We set up the pairwise comparison following Prabhumoye et al. (2018) . Given an input sentence and two sets of model-generated sentences (5 sentences per set), the workers are asked to choose which set has more diverse sentences with the same meaning, and which set provides more desirable sentences considering both content preservation and style transfer. These are denoted as Diversity, and Overall in Table 2 . The workers are also asked to compare the transfer quality in terms of grammatically and fluency, which is denoted as Fluency. For each comparison, a third option No Preference is given for cases that both are equally good or bad. We randomly sampled 250 sentences from Yelp500 test set for the user study. Each comparison is evaluated by at least three different workers. We received more than 3, 600 responses from the AMT, and the results are summarized in Table 2 . Our method outperforms the competing methods by a large margin in terms of diversity, fluency, and overall quality. In the appendix, we present further details of the comparisons with different variants of CAE+noise and BTS+noise. Our method achieves significantly better performance. Table 3 shows the qualitative results of the proposed method. Our Input: I will never go to this restaurant again. Output A: I will definitely go to this restaurant again. Output B: I will continue go to this restaurant again. Output C: I will definitely go to this place again.",
"cite_spans": [
{
"start": 256,
"end": 261,
"text": "2018)",
"ref_id": "BIBREF40"
},
{
"start": 1004,
"end": 1010,
"text": "(2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 657,
"end": 664,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1347,
"end": 1354,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1817,
"end": 1824,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 2112,
"end": 2119,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on One-to-Many Style Transfer",
"sec_num": "3.1"
},
{
"text": "Input: It was just a crappy experience over all. Output A: It was just a wonderful experience at all. Output B: Great place just a full experience over all. Output C: It was such a good experience as all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on One-to-Many Style Transfer",
"sec_num": "3.1"
},
{
"text": "Input: This place just keeps getting worse and worse. Output A: This place just worth everything and good. Output B: Fantastic place just top notch prices and service. Output C: This place goes out pretty fast and fresh. Table 3 : One-to-many style transfer results computed by the proposed algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on One-to-Many Style Transfer",
"sec_num": "3.1"
},
{
"text": "proposed method generates multiple different style transfer outputs for restaurant reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on One-to-Many Style Transfer",
"sec_num": "3.1"
},
{
"text": "In addition to generating multiple style transfer outputs, our model can also generate high-quality style transfer outputs. In Figure 4 , we compare the quality of our style transfer outputs with those from the competing methods. We show the performance of our model using the style-content curve where each point in the curve is the achieved style score and the content score at different training iterations. In Figure 4a , given a fixed content preservation score, our method achieves a better style score on Amazon dataset. Similarly, given a fixed style score, our model achieves a better content preservation score. The results on Yelp500 and Yelp25000 datasets also demonstrate a similar trend as shown in Figure 4b and Figure 4c , respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 4",
"ref_id": null
},
{
"start": 414,
"end": 423,
"text": "Figure 4a",
"ref_id": null
},
{
"start": 713,
"end": 722,
"text": "Figure 4b",
"ref_id": null
},
{
"start": 727,
"end": 736,
"text": "Figure 4c",
"ref_id": null
}
],
"eq_spans": [],
"section": "More Results and Ablation Study",
"sec_num": "3.2"
},
{
"text": "The style-content curve also depicts the behavior of the proposed model during the entire learning process. As visualized in Figure 5 , we find that our model achieves a high style score but a low content score in the early training stage. With more iterations, our model improves the content score with the expense of a reduced style score. To strike a balance between the two scores, we decrease the learning rate when the model reaches a similar number for the two scores. User Study. We also conduct a user study on the transfer output quality. Given an input sentence with two generated style transferred sentences from two different models 1 , workers are asked to compare the transferred quality of the two generated sentences in terms of content preservation, style transfer, fluency, and overall performance, respectively. We received more than 2500 responses from AMT platform, and the results are summarized in Table 4 . We observe No Preference was chosen more often than others, which shows exiting methods may not fully satisfy human expectation. However, our method achieves comparable or better performance than the prior works. Ablation Study. We conduct a study where we consider three different designs of the proposed models. (1) full: This is the full version of the proposed model; (2) sharing-encoders: In this case, we have a content encoder and a style encoder that are shared by the two domains;",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 5",
"ref_id": null
},
{
"start": 922,
"end": 929,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "More Results and Ablation Study",
"sec_num": "3.2"
},
{
"text": "(3) sharingdecoder: In this case, we have a decoder that is shared by the two domains. Through this study, we aim for studying if regularization via weightsharing is beneficial to our approach. Table 5 shows the comparison of our method using different designs. The sharing-encoders baseline performs much better than the sharing-decoder baseline, and our full method performs the best. The results show that the style-specific decoder is more effective for generating target-style outputs. On the other hand, the style-specific encoder extracts more domain-specific style codes from the inputs. Weight-sharing schemes do not lead to a better performance. Impact of the loss terms. In the appendix, we present an ablation study on the loss terms, which shows that all the terms in our objective function are important.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "More Results and Ablation Study",
"sec_num": "3.2"
},
{
"text": "Language modeling is a core problem in natural language processing. It has a wide range of applications including machine translation (Johnson et al., 2017; Wu et al., 2016) , image captioning (Vinyals et al., 2015) , and dialogue systems (Li et al., 2016a,b) . Recent studies (Devlin et al., 2018; Gehring et al., 2017; Graves, 2013; Johnson et al., 2017; Radford et al., 2019; Wu et al., 2016) proposed to train deep neural networks using maximum-likelihood estimation (MLE) for computing the lexical translation probabilities in parallel corpus. Though effective, acquiring parallel corpus is difficult for many language tasks.",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Johnson et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 157,
"end": 173,
"text": "Wu et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 193,
"end": 215,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF43"
},
{
"start": 239,
"end": 259,
"text": "(Li et al., 2016a,b)",
"ref_id": null
},
{
"start": 277,
"end": 298,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 299,
"end": 320,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 321,
"end": 334,
"text": "Graves, 2013;",
"ref_id": "BIBREF10"
},
{
"start": 335,
"end": 356,
"text": "Johnson et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 357,
"end": 378,
"text": "Radford et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 379,
"end": 395,
"text": "Wu et al., 2016)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "4"
},
{
"text": "Text style transfer has a longstanding history (Kerpedjiev, 1992) . Early studies utilize strongly supervision on parallel corpus (Rao and Tetreault, 2018; Xu, 2017; Xu et al., 2012) . However, the lack of parallel training data renders existing methods non-applicable to many text style transfer tasks. Instead of training with paired sentences, recent studies (Fu et al., 2018; Hu et al., 2017; Prabhumoye et al., 2018; addressed this problem by using adversarial learning techniques. Recent studies further improve the performance by leveraging domain adaptation (Li et al., 2019) or contextual information (Cheng et al., 2020) . In this paper, we argue while the existing methods address the parallel data acquisition difficulty, they do not address the diversity problem in the translated outputs. We address the issue by formulating text style transfer as a one-to-many mapping problem and demonstrate one-to-many style transfer results.",
"cite_spans": [
{
"start": 47,
"end": 65,
"text": "(Kerpedjiev, 1992)",
"ref_id": "BIBREF19"
},
{
"start": 130,
"end": 155,
"text": "(Rao and Tetreault, 2018;",
"ref_id": "BIBREF33"
},
{
"start": 156,
"end": 165,
"text": "Xu, 2017;",
"ref_id": "BIBREF46"
},
{
"start": 166,
"end": 182,
"text": "Xu et al., 2012)",
"ref_id": "BIBREF47"
},
{
"start": 362,
"end": 379,
"text": "(Fu et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 380,
"end": 396,
"text": "Hu et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 397,
"end": 421,
"text": "Prabhumoye et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 566,
"end": 583,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 610,
"end": 630,
"text": "(Cheng et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "4"
},
{
"text": "Generative adversarial network (GANs) (Arjovsky et al., 2017; Goodfellow et al., 2014; Salimans et al., 2016; Zhu et al., 2017a) have achieved great success on image generation (Huang et al., 2018; Zhu et al., 2017b) . Several attempts are made to applying GAN for the text generation task Yu et al., 2017; . However, these methods are based on unconditional GANs and tend to generate contextfree sentences. Our method is different in that our model is conditioned on the content and style codes, and our method allows a more controllable style transfer.",
"cite_spans": [
{
"start": 38,
"end": 61,
"text": "(Arjovsky et al., 2017;",
"ref_id": null
},
{
"start": 62,
"end": 86,
"text": "Goodfellow et al., 2014;",
"ref_id": "BIBREF9"
},
{
"start": 87,
"end": 109,
"text": "Salimans et al., 2016;",
"ref_id": "BIBREF34"
},
{
"start": 110,
"end": 128,
"text": "Zhu et al., 2017a)",
"ref_id": "BIBREF50"
},
{
"start": 177,
"end": 197,
"text": "(Huang et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 198,
"end": 216,
"text": "Zhu et al., 2017b)",
"ref_id": "BIBREF51"
},
{
"start": 290,
"end": 306,
"text": "Yu et al., 2017;",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "4"
},
{
"text": "We have presented a novel framework for generating different style transfer outputs for an input sentence. This was achieved by modeling the style transfer as a one-to-many mapping problem with a novel latent decomposition scheme. Experimental results showed that the proposed method achieves better performance than the baselines in terms of the diversity and the overall quality. Table 6 : Human preference comparison with the CAE on one-to-many style transfer results. The numbers are the user preference score of competing methods. 2018 ), for each pairwise comparison, a third option No Preference is given for cases that both are equally good or bad. Figure 8 and Figure 9 show the instructions and the guidelines of our questionnaire for human evaluation on Amazon Mechanical Turk platform. We refer the reader to Sec 3. in the submitted manuscript for the details of the human evaluation results.",
"cite_spans": [
{
"start": 536,
"end": 540,
"text": "2018",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 382,
"end": 389,
"text": "Table 6",
"ref_id": null
},
{
"start": 657,
"end": 665,
"text": "Figure 8",
"ref_id": "FIGREF4"
},
{
"start": 670,
"end": 678,
"text": "Figure 9",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "To evaluate the performance of one-to-many style transfer, we extend the pair-wise comparison to set-wise comparison. Given an input sentence and two sets of model-generated sentences (5 sentences per set), the workers are asked to choose which set has more diverse sentences with the same meaning, and which set provides more desirable sentences considering both content preservation and style transfer. We also ask the workers to compare the transfer quality in terms of content preservation, style transfer, grammatically and fluency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We report further comparisons with different variants of CAE and BTS. We added random Gaussian noise to the style code of CAE and BTS, respec- tively. Specifically, we randomly sample the noise from the Gaussian distribution with \u00b5 = 0 and \u03c3 \u2208 {0.001, 0.01, 0.1, 1, 10}, respectively. We empirically found that the generations will be of poor quality when \u03c3 > 10. Thus, we evaluated the baselines with \u03c3 \u2264 10 in the experiments. On the other hand, we also explored different extensions to enhance the diversity of sequence generation of the baselines. For example, we expanded the generations by randomly select a beam search size k \u2208 {1, 5, 10, 15} per generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Diversity Baselines",
"sec_num": null
},
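{
"text": "For reference, the noise injection used to build the CAE+noise and BTS+noise baselines amounts to the following one-line PyTorch sketch; applying it to the latent (style) code during training is the mechanism described above.
import torch

def inject_noise(latent, sigma):
    # Add zero-mean Gaussian noise with standard deviation sigma so that
    # repeated decoding of the same input yields different outputs.
    return latent + sigma * torch.randn_like(latent)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Diversity Baselines",
"sec_num": null
},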
{
"text": "We report the human evaluation with comparisons to different variants of the CAE and BTS. Similar to the human study presented in the submitted manuscript, we conduct evaluation using Amazon Mechanical Turk. We randomly sampled 200 sentences from Yelp test set for user study. Each comparison is evaluated by at least three experts whose HIT Approval Rate is greater than 90%. We received more than 3600 responses, and the results are summarized in Table 6 and Table 7 . We ob- Table 8 : Empiricial analysis of the impact of each term in the proposed objective function for the proposed one-tomany style transfer task.",
"cite_spans": [],
"ref_spans": [
{
"start": 449,
"end": 468,
"text": "Table 6 and Table 7",
"ref_id": "TABREF9"
},
{
"start": 478,
"end": 485,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Additional One-to-Many Style Transfer User Study Results",
"sec_num": null
},
{
"text": "served previous models achieve higher style scores, but their output sentences are often in a generic format and may not preserve the content with correct grammar. In contrast, our method achieves significantly better performance than the baselines in terms of diversity, fluency, and overall quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Additional One-to-Many Style Transfer User Study Results",
"sec_num": null
},
{
"text": "The proposed objective function consists of five different learning objectives. We conduct ablation study to understand which loss function contributes to the performance. Since adversarial loss is essential for domain alignment, we evaluate loss functions by iterating different combination of the reconstruction loss, the back-translation loss (together with the mean square loss), and the style loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Ablation Study on Objective Function",
"sec_num": null
},
{
"text": "We report the style score and the content preservation score in this experiment. We additionally present the BLEU score (Papineni et al., 2002) , which is a common metric for evaluating the performance of machine translation. A model with a higher BLEU score means that the model is better in translating reasonable sentences. As shown in Table 8 , we find that training without reconstruction loss may not produce reasonable sentences according to the BLEU score. Training with reconstruction loss works well for content preservation yet it performs less favorably for style transfer. Back-translation loss is able to improve style and content preservation scores since it encourage content and style representations to be disentangle. When training with the style loss, our model improves the style accuracy, yet performs worse on content preservation. Overall, we observe that training with all the objective terms achieves a balanced performance in terms of different evaluation scores. The results show that the reconstruction loss, the back-translation loss, and the style loss are important for style transfer.",
"cite_spans": [
{
"start": 120,
"end": 143,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Ablation Study on Objective Function",
"sec_num": null
},
{
"text": "We design a sampling scheme that can lead to a more accurate style transfer. During inference, our network takes the input sentence as a query, and retrieves a pool of target style sentences whose content information is similar to the query. We measure the similarity by estimating the cosine similarity between the sentence embeddings. Next, we randomly sample a target style code from the retrieved pool, and generate the output sentence. The test-time sampling scheme improves the content preservation score from 83.11 to 83.41, and achieves similar style score from 82.64 to 82.66 on Yelp25000 test set. The results show that it is possible to improve the content preservation by using the top ranked target style sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Style Code Sampling Scheme",
"sec_num": null
},
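{
"text": "A minimal NumPy sketch of the inference-time sampling scheme, assuming precomputed sentence embeddings and style codes for the target-style pool; the pool size top_k is an assumption.
import random
import numpy as np

def sample_style_code(query_emb, pool_embs, pool_codes, top_k=10):
    # Rank target-style sentences by cosine similarity to the query.
    sims = pool_embs @ query_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    top = np.argsort(-sims)[:top_k]  # most content-similar pool entries
    # Randomly sample one style code from the retrieved pool.
    return pool_codes[random.choice(list(top))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Style Code Sampling Scheme",
"sec_num": null
},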
{
"text": "We provide further analysis on the sampling scheme for the training phase. Specifically, during training, we sample the target style code from the pool of top ranked sentences in the target style domain. Figure 6 shows the content preservation scores of our method using different sampling schemes. The results suggest we can improve the content preservation by learning with the style codes extracted from the top ranked sentences in the target style domain. However, we noticed that this sampling scheme actually reduces the number of training data. It becomes more challenging for the model to learn the style transfer function as shown in Figure 7 . The results suggest that it is more suitable to apply the sampling scheme in the inference phase.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 6",
"ref_id": "FIGREF2"
},
{
"start": 643,
"end": 651,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "E Style Code Sampling Scheme",
"sec_num": null
},
{
"text": "We use 256 hidden units for the content encoder, the style encoder, and the decoder. All embeddings in our model have dimensionality 256. We use the same dimensionalities for linear layers mapping between the hidden and embedding sizes. Addi- tionally, we modify the convolution block in the style encoder E s i to have max pooling layers for capturing the activation of the style words. On the other hand, we also modify the convolution block of the content encoder E c i to have average pooling layers for computing the average activation of the input. During inference, the decoder generates the output sentence with the multi-step attention mechanism (Gehring et al., 2017) .",
"cite_spans": [
{
"start": 655,
"end": 677,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "F Additional Implementation Details",
"sec_num": null
},
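{
"text": "A minimal PyTorch sketch of the two convolution blocks described above; the kernel size is an illustrative assumption, while the hidden size of 256 and the max- versus average-pooling choice follow the text:\nimport torch\nimport torch.nn as nn\n\nclass ConvBlock(nn.Module):\n    def __init__(self, dim=256, kernel=3, pool=\"max\"):\n        super().__init__()\n        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)\n        # Max pooling captures the peak activations of style words (style encoder E_i^s);\n        # average pooling computes the mean activation of the input (content encoder E_i^c).\n        self.pool = nn.AdaptiveMaxPool1d(1) if pool == \"max\" else nn.AdaptiveAvgPool1d(1)\n\n    def forward(self, x):  # x: (batch, dim, seq_len)\n        return self.pool(torch.relu(self.conv(x))).squeeze(-1)  # (batch, dim)\n\nstyle_block = ConvBlock(pool=\"max\")    # convolution block of the style encoder\ncontent_block = ConvBlock(pool=\"avg\")  # convolution block of the content encoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F Additional Implementation Details",
"sec_num": null
},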
{
"text": "Our approach is general and can be applied to other sentences that are different from restaurant reviews. We have studied this capability by implementing our method on the Stylish descriptions dataset (Chen et al., 2019) , which has the country song lyrics and romance novel collections. Table 9 shows the example results of the proposed method.",
"cite_spans": [
{
"start": 201,
"end": 220,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "G Application to Other Styles",
"sec_num": null
},
{
"text": "Although our approach performs more favorably against the previous methods, our model still fails in a couple of situations. Table 10 shows the common failure example generated by our model. We Lyrics input: My friends they told me you change like the weather; From one love to another you would go; But when I first met you your love was like the summer; Love I never dreamed of turning cold Romantic style: My friends they told me you change like the light; From one love to another you would go; But when I first met you your love was like the sun; Love I never dreamed of turning cold Romantic style: My lips they told me you change like the light; From one love to find you would go; But when I am you your love was like the mountain; Love I never wanted of me before Table 9 : One-to-many style transfer results computed by the proposed algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Table 10",
"ref_id": "TABREF2"
},
{
"start": 773,
"end": 780,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "H Failure Cases",
"sec_num": null
},
{
"text": "Input: I stayed here but was disappointed as its air conditioner does not work properly. Output: I love here but was but as well feel's me work too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H Failure Cases",
"sec_num": null
},
{
"text": "Input: I might as well been at a street fest it was so crowded everywhere. Output: I well as well a at a reasonable price it was so pleasant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H Failure Cases",
"sec_num": null
},
{
"text": "Input: Free cheese puff -but had rye in it (I hate rye!). Output: It's not gourmet but it definitely satisfies my taste for good Mexican food. observe that it is challenging to preserve the content when the inputs are the lengthy sentences. It is also challenging to transfer the style if the sentence contains novel symbols or complicated structure. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H Failure Cases",
"sec_num": null
},
{
"text": "The sentences generated by other methods have been made publicly available by.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their constructive comments. We thank NVIDIA for the donation of the GPU used for this research. We thank Dianqi Li for the helpful discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": "To control the quality of human evaluation, we conduct pilot study to design and improve our evaluation questionnaire. We invite 23 participants who are native or proficient English speakers to evaluate the sentences generated by different methods. For each participant, we randomly present 10 sentences from Yelp500 test set, and the corresponding style transferred sentences generated by different models. We ask the participants to vote the transferred sentence which they think the sentence meaning is closely related to the original sentence with an opposite sentiment. However, we find that it may be difficult to interpret the evaluation results in terms of transfer quality in details.Therefore, instead of asking the participants to directly vote one sentence, we switch the task to evaluating the sentences in terms of four different aspects including style transfer, content preservation, fluency and grammatically, and overall performance. Following the literature (Prabhumoye et al.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A User Study Design",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Yelp Dataset Challenge",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yelp Dataset Challenge. https://www.yelp. com/dataset/challenge.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Back-translation for crosscultural research",
"authors": [
{
"first": "",
"middle": [],
"last": "Richard W Brislin",
"suffix": ""
}
],
"year": 1970,
"venue": "Journal of cross-cultural psychology",
"volume": "1",
"issue": "3",
"pages": "185--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard W Brislin. 1970. Back-translation for cross- cultural research. Journal of cross-cultural psychol- ogy, 1(3):185-216.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised stylish image description generation via domain layer norm",
"authors": [
{
"first": "Cheng Kuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhu Feng",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ming-Yu",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng Kuan Chen, Zhu Feng Pan, Min Sun, and Ming- Yu Liu. 2019. Unsupervised stylish image descrip- tion generation via domain layer norm. In Proc. AAAI.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Contextual text style transfer",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oussama",
"middle": [],
"last": "Elachqar",
"suffix": ""
},
{
"first": "Dianqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00136"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, and Jingjing Liu. 2020. Contextual text style transfer. arXiv preprint arXiv:2005.00136.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A learned representation for artistic style",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Dumoulin",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. 2016. A learned representation for artistic style. In Proc. ICLR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Style transfer in text: Exploration and evaluation",
"authors": [
{
"first": "Zhenxin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Xiaoye",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Explo- ration and evaluation. In Proc. AAAI.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proc. ICML.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Proc. NeurIPS.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.0850"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long text generation via adversarial training with leaked information",
"authors": [
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In Prof. AAAI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering",
"authors": [
{
"first": "Ruining",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proc. WWW.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward con- trolled generation of text. In Proc. ICML.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Arbitrary style transfer in real-time with adaptive instance normalization",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Huang and Serge Belongie. 2017. Arbitrary style transfer in real-time with adaptive instance normal- ization. In Proc. ICCV.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multimodal unsupervised image-toimage translation",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming-Yu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. 2018. Multimodal unsupervised image-to- image translation. In Proc. ECCV.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Shakespearizing modern language using copy-enriched sequence to sequence models",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Jhamtani",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. EMNLP Workshop on Stylistic Variation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence to sequence models. In Proc. EMNLP Workshop on Stylistic Variation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Disentangled representation learning for non-parallel text style transfer",
"authors": [
{
"first": "Vineet",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Hareesh",
"middle": [],
"last": "Bahuleyan",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Proc. ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Googles multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Googles multilingual neural machine translation system: Enabling zero-shot translation. TACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generation of informative texts with style",
"authors": [
{
"first": "Stephan",
"middle": [
"M"
],
"last": "Kerpedjiev",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan M. Kerpedjiev. 1992. Generation of informa- tive texts with style. In Proc. COLING.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Domain adaptive text style transfer",
"authors": [
{
"first": "Dianqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Ming-Ting",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dianqi Li, Yizhe Zhang, Zhe Gan, Yu Cheng, Chris Brockett, Bill Dolan, and Ming-Ting Sun. 2019. Do- main adaptive text style transfer. In Proc. EMNLP- IJCNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": null,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. In Proc. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep reinforcement learning for dialogue generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01541"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jian- feng Gao, and Dan Jurafsky. 2016b. Deep rein- forcement learning for dialogue generation. arXiv preprint arXiv:1606.01541.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer",
"authors": [
{
"first": "Juncen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sen- timent and style transfer. In Proc. NAACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adversarial ranking for language generation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Dianqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zhengyou",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming-Ting",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Proc. NeurIPS.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Proc. NeurIPS.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine trans- lation. Computational linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. NAACL Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proc. NAACL Demonstrations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Automatic differentiation in PyTorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeurIPS Autodiff Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Proc. NeurIPS Autodiff Workshop.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Style transfer through back-translation",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhut- dinov, and Alan W Black. 2018. Style transfer through back-translation. In Proc. ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning to generate reviews and discovering sentiment",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.01444"
]
},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Tech Report.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Dear sir or madam, may i introduce the yafc corpus: Corpus, benchmarks and metrics for formality style transfer",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may i introduce the yafc corpus: Corpus, benchmarks and metrics for formality style transfer. In Proc. NAACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improved techniques for training gans",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Vicki",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In Proc. NeurIPS.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06709"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Style transfer from non-parallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeruIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Proc. NeruIPS.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Zero-shot finegrained style transfer: Leveraging distributed continuous style representations to transfer to unseen styles",
"authors": [
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Gonzalez-Rico",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03914"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Michael Smith, Diana Gonzalez-Rico, Emily Di- nan, and Y-Lan Boureau. 2019. Zero-shot fine- grained style transfer: Leveraging distributed con- tinuous style representations to transfer to unseen styles. arXiv preprint arXiv:1911.03914.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A statistical interpretation of term specificity and its application in retrieval",
"authors": [
{
"first": "Karen Sparck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1972,
"venue": "Journal of documentation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Multiple-attribute text style transfer",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00552"
]
},
"num": null,
"urls": [],
"raw_text": "Multiple-attribute text style transfer. arXiv preprint arXiv:1811.00552.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proc. NeurIPS.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proc. CVPR.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingjing Xu, SUN Xu, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proc. ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "From shakespeare to twitter: What are language styles all about?",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. EMNLP Workshop on Stylistic Variation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu. 2017. From shakespeare to twitter: What are language styles all about? In Proc. EMNLP Work- shop on Stylistic Variation.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Paraphrasing for style. Proc. COLING",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. Proc. COLING.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Seqgan: sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "L",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L Yu, W Zhang, J Wang, and Y Yu. 2017. Seqgan: sequence generative adversarial nets with policy gra- dient. In Proc. AAAI.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Adversarial feature matching for text generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03850"
]
},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. arXiv preprint arXiv:1706.03850.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Unpaired image-to-image translation using cycle-consistent adversarial networks",
"authors": [
{
"first": "Jun-Yan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Taesung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Phillip",
"middle": [],
"last": "Isola",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"A"
],
"last": "Efros",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017a. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. ICCV.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Toward multimodal image-toimage translation",
"authors": [
{
"first": "Jun-Yan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Pathak",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"A"
],
"last": "Efros",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Shechtman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. 2017b. Toward multimodal image-to- image translation. In Proc. NeurIPS.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Texygen: A benchmarking platform for text generation models",
"authors": [
{
"first": "Yaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texy- gen: A benchmarking platform for text generation models. Proc. SIGIR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Comparison to different style transfer algorithms on output quality. Style-content trade-off curves. The vertical line indicates the iteration at which the learning rate is decreased.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Performance comparison of our model using different sampling schemes.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Performance comparison of our model using different sampling schemes.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Instruction of our questionnaire on Amazon Mechanical Turk platform.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Example and guideline of our questionnaire on Amazon Mechanical Turk platform.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"text": ". \u2022 Yelp restaurant reviews (Yelp) (yel) contains a training set of 267, 314 positive and 176, 787 negative sentences, and a test set of 76, 392 positive and 50, 278 negative testing sentences. The length of a sentence ranges from 1 to 15 words.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"text": "One-to-many text style transfer results.",
"num": null,
"html": null,
"content": "<table><tr><td>Method</td><td colspan=\"3\">Diversity Fluency Overall</td></tr><tr><td>CAE+noise</td><td>13.13</td><td>11.62</td><td>12.12</td></tr><tr><td>No Pref.</td><td>35.35</td><td>16.16</td><td>36.87</td></tr><tr><td>Ours</td><td>51.52</td><td>72.22</td><td>51.01</td></tr><tr><td>BTS+noise</td><td>13.13</td><td>11.11</td><td>16.16</td></tr><tr><td>No Pref.</td><td>42.93</td><td>22.22</td><td>40.40</td></tr><tr><td>Ours</td><td>43.94</td><td>66.67</td><td>43.43</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"text": "User study results. The numbers are the user preference scores of the competing methods.",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">Style Score Content Score</td></tr><tr><td>sharing-decoder</td><td>63.75</td><td>42.54</td></tr><tr><td>sharing-encoders</td><td>81.41</td><td>81.48</td></tr><tr><td>full</td><td>82.64</td><td>83.11</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"text": "Comparison of different design choices of the proposed framework.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"text": "Human preference comparison with the BTS on one-to-many style transfer results. The numbers are the user preference score of competing methods.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF11": {
"type_str": "table",
"text": "Example failure cases generated by the proposed method.",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}