{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:05:30.188911Z" }, "title": "Personalized Response Generation with Tensor Factorization", "authors": [ { "first": "Zhenghui", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "\u2021 Tsinghua University", "location": {} }, "email": "zhwang@gatech.edu" }, { "first": "Lingxiao", "middle": [], "last": "Luo", "suffix": "", "affiliation": { "laboratory": "", "institution": "\u2021 Tsinghua University", "location": {} }, "email": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "\u2021 Tsinghua University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Personalized response generation is essential for more human-like conversations. However, how to model user personalization information with no explicit user persona descriptions or demographics still remains underinvestigated. To tackle the data sparsity problem and the huge number of users, we utilize tensor factorization to model users' personalization information with their posting histories. Specifically, we introduce the personalized response embedding for all questionuser pairs and form them into a three-mode tensor, decomposed by Tucker decomposition. The personalized response embedding is fed to either the decoder of an LSTM-based Seq2Seq model or a transformer language model to help generate more personalized responses. To evaluate how personalized the generated responses are, we further propose a novel ranking-based metric called Per-Hits@k which measures how likely are the generated responses come from the corresponding users. Results on a large-scale English conversation dataset show that our proposed tensor factorization based models generate more personalized and higher quality responses compared to baselines. We have publicly released our code at https://github.com/GT-SALT/ personalized_response_generation.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Personalized response generation is essential for more human-like conversations. However, how to model user personalization information with no explicit user persona descriptions or demographics still remains underinvestigated. To tackle the data sparsity problem and the huge number of users, we utilize tensor factorization to model users' personalization information with their posting histories. Specifically, we introduce the personalized response embedding for all questionuser pairs and form them into a three-mode tensor, decomposed by Tucker decomposition. The personalized response embedding is fed to either the decoder of an LSTM-based Seq2Seq model or a transformer language model to help generate more personalized responses. To evaluate how personalized the generated responses are, we further propose a novel ranking-based metric called Per-Hits@k which measures how likely are the generated responses come from the corresponding users. Results on a large-scale English conversation dataset show that our proposed tensor factorization based models generate more personalized and higher quality responses compared to baselines. 
We have publicly released our code at https://github.com/GT-SALT/ personalized_response_generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Building human-like conversational systems has received much attention in artificial intelligence communities, and personalized response generation is one essential step towards this goal, as more personalized responses are often associated with increased user engagement (Shum et al., 2018; . To this end, we focus on the task of personalized response generation in this work, and argue that incorporating personalization into text generation can benefit many down-stream applications such as social chit-chat chatbots (Zhang et al., 2018) and auto-complete responses like Smart Replies (Kannan et al., 2016) .", "cite_spans": [ { "start": 272, "end": 291, "text": "(Shum et al., 2018;", "ref_id": "BIBREF25" }, { "start": 520, "end": 540, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" }, { "start": 588, "end": 609, "text": "(Kannan et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior text generation work on modeling personalization mainly relied on explicitly given persona or demographic information. For instance, (Zhang et al., 2018; Wolf et al., 2019; Xu et al., 2020) utilized a set of persona sentences to profile users, and other line of research leveraged demographics to model user personalization (Zheng et al., 2019 (Zheng et al., , 2020 . Despite its effectiveness, such approaches are limited when it comes to real world scenarios. First, explicit persona or demographic information is often not available. Second, collecting such personalization information is usually costly and time-consuming, which also suffers from either artificially designed persona descriptions from third-party annotators or subjective and unreliable self-reports from users themselves (Stone et al., 1999) . Although such explicit personalization information is often unavailable, content that users produce is generally ubiquitous and can indicate their preferences, personal information, styles, and knowledge in a relatively implicit but objective manner. Our work thus utilizes these posts and comments users made to learn latent representations of their personalization information.", "cite_spans": [ { "start": 139, "end": 159, "text": "(Zhang et al., 2018;", "ref_id": "BIBREF38" }, { "start": 160, "end": 178, "text": "Wolf et al., 2019;", "ref_id": "BIBREF31" }, { "start": 179, "end": 195, "text": "Xu et al., 2020)", "ref_id": "BIBREF33" }, { "start": 330, "end": 349, "text": "(Zheng et al., 2019", "ref_id": "BIBREF40" }, { "start": 350, "end": 371, "text": "(Zheng et al., , 2020", "ref_id": "BIBREF41" }, { "start": 799, "end": 819, "text": "(Stone et al., 1999)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different generation models have been designed to learn user personalization information and further impose such representation on text generation. For instance, proposed the Speaker model based on Seq2Seq framework by introducing trainable speaker embedding for each user and feeding it to decoder at each step of decoding. However, there are always a large number of distinct users and users often participate in only a few conversations; as a result, the speaker embedding may be under-fitted given the limited data points associated with a user. 
Another line of research uses generative memory network (Zhang et al., 2018) , which first retrieves some most relevant responses to a user's input as the memory and then encodes them into an embedding. The difference between the embedding from memory network and speaker embedding is that the former encodes both information of question and user, while the latter represents only users. Nevertheless, the set of observable question-user pairs and their responses is still a small subset of the whole user and question sets, leading to the sparsity issue.", "cite_spans": [ { "start": 606, "end": 626, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Matrix Factorization (MF) has been widely used to infer latent relationships between users and items in recommender systems, especially for data sparsity issues (kumar Bokde et al., 2015) . Motivated by this, we propose to model latent interactions between questions and users by looking at who participated in which conversations, and infer user personalization information from data automatically, for personalized response generation tasks. Differently, as the score or rating used in recommender system usually denotes users' preferences towards items, such scalar is not enough to represent the semantic meaning of a response. Thus, we introduce a response vector to indicate the response content that a user will make for a given conversation, i.e., personalized response embedding, resulting in a tensor form representation for all question-user pairs. Decomposing this tensor (tensor factorization, TF) will lead to the factorized representations for each user, question, and dimension of the response embedding. We propose to augment response generation models with such TFinduced modules, which are model-agnostic and can be applied to many different generation models. Specifically, we introduce a TF module based framework on top of LSTM-based Seq2Seq model and transformer language model for personalized response generation, and further train them together in an end-to-end fashion. Evaluating response generation usually considers content relatedness and language quality to ensure that generated text is grammatically correct and fluent, using BLEU and Perplexity. However, evaluating personalization in personalized response generation is relatively challenging as there lacks effective metrics.", "cite_spans": [ { "start": 161, "end": 187, "text": "(kumar Bokde et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To this end, we propose a novel evaluation metric Per-Hits@k to model personalization , which for the response of a user first calculates its perplexity values via language models of all users, and then ranks the perplexity via this user's language model to examine whether it is ranked as top-k, based on a pre-trained GPT-2 language model (Radford et al., 2019) for each user. 
Our contributions are:", "cite_spans": [ { "start": 341, "end": 363, "text": "(Radford et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 propose a tensor factorization based framework to model personalization for response generation task;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 introduce a metric Per-Hits@k, to evaluate the personalization of the generated responses;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 experimental results on a large-scale personalized Reddit dataset show that our TF-based framework outperforms previous methods significantly in terms of both content generation quality and personalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Personalized Response Generation Personalization has received much attention in the natural language processing community, such as personalized image captioning (Chunseong Park et al., 2017) , personalized machine translation (Rabinovich et al., 2017) , personalized response generation , personalized intent classification and personalized slot tagging (Liu et al., 2016) . Prior studies formulate the task of response generation as generating an output given an input text, mainly based on either the sequence-to-sequence (Seq2Seq) models (Vinyals and Le, 2015) or the pretrained models like GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2019) . When it comes to personalized response generation, Speaker model extended traditional response generation models by assigning each user with a trainable speaker ID embedding. Another line of research focuses on leveraging persona descriptions or demographic attributes (Zheng et al., 2020; Qian et al.; Wolf et al., 2019; Luo et al., 2019) , building on recent personalized dialogue datasets such as PERSONA-CHAT (Zhang et al., 2018) and Per-sonalDialog (Zheng et al., 2019) . For instance, Xu et al. (2020) utilized the predefined user persona description together with their semantically correlated content for generating personalized responses in dialogue systems. Different learning paradigms have also been introduced for personalized response generation such as reinforcement learning (Mo et al., 2016; Yang et al., 2018; Xu et al., 2020) and transfer learning to benefit from a source domain with sufficient training data (Yang et al., 2017) . However, most aforementioned approaches require explicit persona or demographic information which is often unavailable in real world scenarios. 
To fill this gap, we propose to learn latent representation of personalized user information from users' posts and model personalization jointly together with traditional generation methods for personalized response generation.", "cite_spans": [ { "start": 161, "end": 190, "text": "(Chunseong Park et al., 2017)", "ref_id": "BIBREF3" }, { "start": 226, "end": 251, "text": "(Rabinovich et al., 2017)", "ref_id": "BIBREF22" }, { "start": 354, "end": 372, "text": "(Liu et al., 2016)", "ref_id": "BIBREF16" }, { "start": 541, "end": 563, "text": "(Vinyals and Le, 2015)", "ref_id": "BIBREF30" }, { "start": 600, "end": 622, "text": "(Radford et al., 2019)", "ref_id": "BIBREF23" }, { "start": 632, "end": 652, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF13" }, { "start": 924, "end": 944, "text": "(Zheng et al., 2020;", "ref_id": "BIBREF41" }, { "start": 945, "end": 957, "text": "Qian et al.;", "ref_id": "BIBREF21" }, { "start": 958, "end": 976, "text": "Wolf et al., 2019;", "ref_id": "BIBREF31" }, { "start": 977, "end": 994, "text": "Luo et al., 2019)", "ref_id": "BIBREF17" }, { "start": 1068, "end": 1088, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" }, { "start": 1109, "end": 1129, "text": "(Zheng et al., 2019)", "ref_id": "BIBREF40" }, { "start": 1146, "end": 1162, "text": "Xu et al. (2020)", "ref_id": "BIBREF33" }, { "start": 1446, "end": 1463, "text": "(Mo et al., 2016;", "ref_id": "BIBREF18" }, { "start": 1464, "end": 1482, "text": "Yang et al., 2018;", "ref_id": "BIBREF35" }, { "start": 1483, "end": 1499, "text": "Xu et al., 2020)", "ref_id": "BIBREF33" }, { "start": 1584, "end": 1603, "text": "(Yang et al., 2017)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Evaluation Metrics for Personalized Response Generation Current automatic evaluation metrics for response generation can be broadly categorized into three classes. (1) Content relatedness measures how related a generated response is with its corresponding ground-truth, with representative metrics such as BLEU (Papineni et al., 2002) , NIST (Doddington, 2002) , and METEOR (Lavie and Agarwal, 2007) . Speaker sensitive responses evaluation model (SSREM) (Bak and Oh, 2020) enhances the relatedness score with a context-response classifier. (2) Language quality mainly refers to the fluency and diversity, where the former is measured via perplexity (Chen et al., 1998) and the latter is assessed via distinct diversity (Li et al., 2015; that indicates how diverse the generated responses are. (3) Style adherence aims to evaluate the adherence of the generated responses' language style to the user's own language style; example metrics include the average negative log-likelihood (NLL) of one poet's generated lyrics on it's poet specific language model (Vechtomova et al., 2018) , stylistic alignment (Syed et al., 2020 ) that looks at the language style alignment at the surface, lexical and syntactic level, and Hits@1/N (Dinan et al., 2019) that measures how accurate the generated response can be classified to its corresponding user by a classifier. 
Our proposed Per-Hits@k metric thus belongs to the style adherence class, a more fine-grained metric compared to the average NLL metric (Vechtomova et al., 2018) .", "cite_spans": [ { "start": 311, "end": 334, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF19" }, { "start": 342, "end": 360, "text": "(Doddington, 2002)", "ref_id": "BIBREF5" }, { "start": 374, "end": 399, "text": "(Lavie and Agarwal, 2007)", "ref_id": "BIBREF12" }, { "start": 455, "end": 473, "text": "(Bak and Oh, 2020)", "ref_id": "BIBREF0" }, { "start": 650, "end": 669, "text": "(Chen et al., 1998)", "ref_id": "BIBREF2" }, { "start": 720, "end": 737, "text": "(Li et al., 2015;", "ref_id": "BIBREF14" }, { "start": 1056, "end": 1081, "text": "(Vechtomova et al., 2018)", "ref_id": "BIBREF29" }, { "start": 1104, "end": 1122, "text": "(Syed et al., 2020", "ref_id": "BIBREF27" }, { "start": 1226, "end": 1246, "text": "(Dinan et al., 2019)", "ref_id": "BIBREF4" }, { "start": 1494, "end": 1519, "text": "(Vechtomova et al., 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To learn latent association between users, questions and responses for personalized response generation, we choose Tucker decomposition, one widely used tensor factorization algorithm. Tucker decomposition (Tucker, 1966 ) decomposes a given 3-mode tensor X \u2208 R I\u00d7J\u00d7K into a core tensor G \u2208 R R 1 \u00d7R 2 \u00d7R 3 and three factor matrices", "cite_spans": [ { "start": 192, "end": 219, "text": "decomposition (Tucker, 1966", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "A \u2208 R I\u00d7R 1 , B \u2208 R J\u00d7R 2 , C \u2208 R K\u00d7R 3 : X \u2248 G \u00d7 1 A \u00d7 2 B \u00d7 3 C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "Here, \u00d7 i denotes the mode-i product of a tensor by a matrix (i \u2208 {1, 2, 3}). Any element X (i,j,k) in X can be approximated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "R 1 r 1 =1 R 2 r 2 =1 R 3 r 3 =1 G (r 1 ,r 2 ,r 3 ) A (i,r 1 ) B (j,r 2 ) C (k,r 3 ) 3.2 LSTM-based Seq2Seq Model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "LSTM-based Seq2Seq model consists of an encoder LSTM, a decoder LSTM, and attention mechanism (Yao et al., 2015) . Suppose the source text is S = (x 1 , x 2 , . . . , x m ) and the target text is T = (x m+1 , x m+2 , . . . 
, x N ), the encoder LSTM first encodes S into hidden vector h e m and cell vector c e m , then the decoder LSTM has its initial hidden vector h d 0 and cell vector c d 0 as:", "cite_spans": [ { "start": 94, "end": 112, "text": "(Yao et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "h d 0 = h e m c d 0 = c e m", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "The hidden vector of decoder at time step t is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "h d t = g(h d t\u22121 , c d t\u22121 , y * t ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "where g is the LSTM cell operation and y * t is the embedding of the input token at time step t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "Standard Seq2Seq models are not personalized, because there is no mechanism to incorporate userspecific information into their input. Speaker Model alleviates this by explicitly concatenating a trainable speaker embedding v j to y * t for user j. Therefore, the hidden vector of decoder of Speaker model at time step t is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "h d t = g(h d t\u22121 , c d t\u22121 , [y * t ; v j ]),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tucker Decomposition", "sec_num": "3.1" }, { "text": "DialoGPT ) is a pre-trained conversational response generation model. Based on the architecture of GPT-2 (Radford et al., 2019) , DialoGPT is trained on 147M Reddit discussions. For a question-user pair (i, j) with source input S and target response T , DialogGPT generates responses by modeling the conditional probability: ", "cite_spans": [ { "start": 105, "end": 127, "text": "(Radford et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer Language Model", "sec_num": "3.3" }, { "text": "P (T | S) = N n=m+1 P (x n | x 1 , x 2 , . . . , x n\u22121 ) -th", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Language Model", "sec_num": "3.3" }, { "text": "We formulate the task of personalized response generation as follows: given a set of question-user pair (q, u) \u2208 S q \u00d7 S u where S q and S u refer to the question set and user set respectively, generate a response r for this question-user pair (q, u), i.e., posted by user u for question q. The overall model architecture is described in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 338, "end": 346, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "To enable personalized response generation, we first need to automatically infer personalized signals that users demonstrate in their participation such as questions that they might interact with, as such signatures are often not explicitly available. To this end, we introduce personalized response embedding p i,j , a K-dimensional vector, to represent the latent relationship between a question i and a user j. 
We then form a tensor using all p i,j over all question-user pairs and factorize this tensor, to learn latent interactions between questions, users, and their responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "Formally, for a dataset with I = |S q | questions and J = |S u | users, we have a tensor P \u2208 R I\u00d7J\u00d7K where P (i,j,:) = p i,j denotes each (i, j) pair. The notation P (i,j,:) refers to the mode-3 fiber (or tube) of the tensor P. P can be further formulated via Tucker Decomposition as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "P = G \u00d7 1 Q \u00d7 2 U \u00d7 3 R Here Q \u2208 R I\u00d7R 1 , U \u2208 R K\u00d7R 2 , R \u2208 R K\u00d7R 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "are the factor matrices, and G \u2208 R R 1 \u00d7R 2 \u00d7R 3 is a core tensor. Once these factor matrices and core tensor are determined, the personalized response embedding p i,j for any question-user pair (i, j) can be calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "p i,j = P (i,j,:) = RG (3) (u j \u2297 q i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "where q i and u j denote i-th and j-th row vector of Q and U respectively. \u2297 is the Kronecker product of two matrices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "Next, we introduce different mechanisms to incorporate TF modules especially p i,j into traditional LSTM-based models and Transformer Language Models. This is essential to train better TF modules since it is impossible to directly supervise p i,j as no ground truth is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor Factorization Module", "sec_num": "4.1" }, { "text": "To utilize TF module for standard LSTM-based Seq2Seq models, we propose to incorporate p i,j into the initial hidden vector and cell vector of the LSTM decoder to help generate more personalized response, as personalized response embedding p i,j is expected to also encode the target response:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LSTM-based Model with TF Module", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h d 0 = (1 \u2212 \u03bb) \u2022 h e m + \u03bb \u2022 p i,j c d 0 = (1 \u2212 \u03bb) \u2022 c e m + \u03bb \u2022 p i,j ,", "eq_num": "(1)" } ], "section": "LSTM-based Model with TF Module", "sec_num": "4.2" }, { "text": "Here \u03bb is a coefficient to balance the information from the LSTM encoder and the personalized response embedding. Note that our TF module is agnostic to encoder-decoder frameworks, and can be applied to any Seq2Seq model similarly, including but not limited to Seq2Seq, Speaker model , Seq2Seq with memory network (Zhang et al., 2018) , and Speaker model with memory network. Figure 1 describes how the TF module is integrated with an LSTM-based Seq2Seq model. The TF module is randomly initialized and trained together with the Seq2Seq model. 
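As an illustration of how such a module could be realized, the following PyTorch-style sketch (a simplified rendering for exposition, not the released implementation; names such as TFModule, q_idx, and u_idx are hypothetical) computes p_{i,j} = R G_(3)(u_j \otimes q_i) from trainable factor matrices and a trainable core tensor, and indicates how it is mixed into the decoder initialization as in Eq. 1.

import torch
import torch.nn as nn

class TFModule(nn.Module):
    # Sketch of the TF module: trainable Tucker factors for questions (Q: I x R1),
    # users (U: J x R2), response dimensions (R: K x R3), and a core tensor G (R1 x R2 x R3).
    def __init__(self, num_questions, num_users, resp_dim, ranks=(50, 50, 50)):
        super().__init__()
        r1, r2, r3 = ranks
        self.Q = nn.Parameter(0.01 * torch.randn(num_questions, r1))
        self.U = nn.Parameter(0.01 * torch.randn(num_users, r2))
        self.R = nn.Parameter(0.01 * torch.randn(resp_dim, r3))
        self.G = nn.Parameter(0.01 * torch.randn(r1, r2, r3))

    def forward(self, q_idx, u_idx):
        # p_{i,j}: contract the core tensor with q_i along mode 1 and u_j along mode 2,
        # then map the result into the response-embedding space with R (mode 3).
        q = self.Q[q_idx]                                 # (batch, R1)
        u = self.U[u_idx]                                 # (batch, R2)
        tmp = torch.einsum('br,rst->bst', q, self.G)      # (batch, R2, R3)
        tmp = torch.einsum('bs,bst->bt', u, tmp)          # (batch, R3)
        return tmp @ self.R.t()                           # (batch, K) personalized response embedding

# Mixing into the decoder initialization as in Eq. 1 (lambda is set to 0.2 in Section 5.4):
#   h0 = (1 - lam) * h_enc + lam * p_ij
#   c0 = (1 - lam) * c_enc + lam * p_ij

Because the factors are ordinary trainable parameters, the module receives gradients through the generation loss exactly like the rest of the network.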
This allows TF module to access the supervision from the output response, thus learn the latent interaction between users and questions and produce personalized response embedding for the decoder.", "cite_spans": [ { "start": 314, "end": 334, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "LSTM-based Model with TF Module", "sec_num": "4.2" }, { "text": "Recent success of DialoGPT on conversational response generation shows the potential of (pre-trained) transformer language model for the task of response generation. Thus we propose to incorporate TF module with transformer language model, (DialoGPT in specific) for personalized response generation. Since DialoGPT is a language model rather than a Seq2Seq model, it does not have a encoder-decoder architecture but only one transformer model. Thus we cannot utilize p i,j as the initial hidden vector for decoder like that in Eq. 1. Instead, we propose to add personalized response embedding p i,j with the input token embedding, token type embedding and positional embedding together as the input embedding to DialoGPT model. As shown in Figure 2 , the personalized response embedding p i,j is added to token \"\", \"klein\" and \"bleu\" in the input to decode the j-th user's response for the i-th question. The TF module that produces p i,j is also trained together with the DialoGPT model in an end-to-end fashion.", "cite_spans": [], "ref_spans": [ { "start": 741, "end": 749, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Transformer with TF Module", "sec_num": "4.3" }, { "text": "To study the task of personalized response generation with no explicit personalization information, we used a personalized Reddit dataset PER-CHAT, consisting of 200,156 responses that users posted to different questions, from r/AskReddit 1 (Wu et al., 2021) . Building upon Wu et al. 2021, we used active users who joined more than average discussions, and popular questions that received more comments. This led to 4724 users under 39,187 questions. These users and questions were sampled because they were active users who joined more discussions or popular questions that received more comments. We filtered all forms of url links, emails and digits into unique tokens \"url\", \"email\" and \"digit\". Replicated words and punctuation were processed to their standard forms. We sampled 3 responses for each user for users in the validation and test set, and the rest are used for training. The proportion of split size of train, validation, test is 171812 : 14172 : 14172. ", "cite_spans": [ { "start": 241, "end": 258, "text": "(Wu et al., 2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We introduced several baselines for comparison with our proposed models. We introduced several baselines for comparison with our proposed models. 1 (3) Speaker model: Our implementation of the speaker model . Following (Kottur et al., 2017) , the Speaker embeddings were not initialized randomly but set as the average sentence embeddings from a user's all historical responses via sentence-BERT (Reimers and Gurevych, 2020) ; the dimension was reduced to 30 by principal component analysis. (4) Memory network: Our implementation of the generative memory network (Zhang et al., 2018) based on our Seq2Seq model with attention. 
We retrieved top-10 most relevant responses from a user for each question as the memory in the memory network; (5) Memory+Speaker: The generative memory network (Zhang et al., 2018) , together with the use of the speaker embedding . Our models were based on the aforementioned baseline models by further incorporating our proposed TF module, i.e., the personalized response embedding from the TF module. Di-aloGPT+TF is a DialoGPT model with personalized response embedding added to each time step at the decoding stage shown in Figure 2 . Seq2Seq+TF, Speaker+TF, Memory+TF, Mem-ory+Speaker+TF are constructed on top of our baseline models with personalized response embedding added to the decoder as Eq. 1.", "cite_spans": [ { "start": 219, "end": 240, "text": "(Kottur et al., 2017)", "ref_id": "BIBREF11" }, { "start": 396, "end": 424, "text": "(Reimers and Gurevych, 2020)", "ref_id": "BIBREF24" }, { "start": 564, "end": 584, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" }, { "start": 789, "end": 809, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 1157, "end": 1165, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Baselines and Our Models", "sec_num": "5.2" }, { "text": "We evaluated different models with F1, BLEU, Distinct-N, perplexity (PPL), and our proposed Per-Hits@k. Here, F1 (Dinan et al., 2019) refers to the harmonic mean of precision and recall computed based on the tokens between generated and ground truth response. BLEU (Papineni et al., 2002) was first proposed for machine translation but is also widely used for evaluating response generation. Distinct-N (Li et al., 2015) aims to evaluate lexical diversity and we tested distinct unigrams (Distinct-1) and bigrams (Distinct-2). We used perplexity to evaluate the fluency of the generation model. Per-Hits@k for Personalization Evaluation To evaluate the personalization in generated responses for a user, one needs to have a good understanding of that particular user who might sometimes have a very long posting history (500 responses per user on average in our dataset), making it hard for annotators to evaluate how personalized the generated response is for a user. Besides, not every response from a user will reveal their personalization information. Thus, we propose an automatic evaluation metric to evaluate the personalization degree of different generation models called Per-Hits@k. Suppose we have N users and there are M i responses generated for user i to be evaluated. We firstly train a user-specific language model LM i for each user i on all their responses in training set. We then test the j-th response's perplexity of user i on all users' language models, and denote its perplexity on user-n's language model as ppl n i,j . We rank the perplexity of user i's j-th response over N user language models (the lower the perplexity, the higher rank), and denote the ranking of the perplexity on user i's language model LM i with rank(ppl i i,j ). 
We define the value of Per-Hits@k in Per-Hits@k metric as:", "cite_spans": [ { "start": 113, "end": 133, "text": "(Dinan et al., 2019)", "ref_id": "BIBREF4" }, { "start": 265, "end": 288, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF19" }, { "start": 403, "end": 420, "text": "(Li et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.3" }, { "text": "Per-Hits@k = 1 N i=1 M i N i=1 M i j=1 1 x\u2264k (rank(ppl i i,j ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.3" }, { "text": "This measures how likely the generated response will be ranked as top-k with its corresponding user language model among N users. In our implementation, we fine-tuned GPT-2 (small) (Radford et al., 2019) for each user i to instantiate this user i's language model LM i . To ensure the quality of LM i , we only consider a subset of users (N = 500) and choose these users who have the most responses.", "cite_spans": [ { "start": 181, "end": 203, "text": "(Radford et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.3" }, { "text": "We implemented our models with PyTorch (Paszke et al., 2019) . For TF module, the core tensor is of size 50 \u00d7 50 \u00d7 50, dimension of personalized response embedding is 512 for all Seq2Seqbased models with TF module (denote as Seq2Seq-based+TF), while it is 1024 for the DialoGPT+TF model. For any Seq2Seq-based+TF model, both encoder and decoder have 2 LSTM layers with hidden size of 512, while DialoGPT+TF model is based on the pre-trained medium DialoGPT model with hidden size of 1024. Any word appears more than three times were included in the vocabulary of Seq2Seq-based+TF models, and the size of the vocabulary is 30K. DialoGPT+TF model uses the pre-trained Byte-Pair-Encoding (BPE) tokenizer of size 50,257. The \u03bb coefficient in Eq. 1 is set to 0.2. Adam (Kingma and Ba, 2014) is used as the optimizer and the learning rate was set to 1e-3 for TF-Speaker model and 1e-5 for TF-DialoGPT by grid search. Top-k (k = 2) sampling (Fan et al., 2018) was used without any re-scoring techniques to generate response at test stage. We selected models with the highest average Per-Hits@k (k = 1, 2, 3, 4, 5) on validation set.", "cite_spans": [ { "start": 39, "end": 60, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF20" }, { "start": 934, "end": 952, "text": "(Fan et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "5.4" }, { "text": "As shown in Table 1 , we reported F1, BLEU, Distinct-N and Per-Hits@k on test data. Distinct-N and Per-Hits@k on ground truth test data and Per-Hits@k on random ranking were also reported. Overall, we found that TF based models significantly improved the personalization metric Per-Hits@k compared to all baselines, with comparable and even better performances in terms of other metrics. Specifically, our proposed Seq2Seq+TF model had an average hist@k score 4 times higher than the Seq2Seq baseline and the Memory+Speaker+TF model had the highest personalization score. This demonstrates that our proposed TF module can model users' personalization well using users' posting history. 
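(For reference, the ranking step behind Per-Hits@k in Section 5.3 can be sketched as the simplified Python helper below; per_hits_at_k is a hypothetical name, and ppl is assumed to hold perplexities that have already been computed with the per-user language models, i.e., ppl[i][j][n] is the perplexity of user i's j-th evaluated response under user n's model.)

def per_hits_at_k(ppl, k):
    # A response "hits" when its own user's language model ranks within the top-k,
    # i.e., at most k - 1 other user models assign it a strictly lower perplexity.
    hits, total = 0, 0
    for i, responses in enumerate(ppl):
        for scores in responses:
            rank = 1 + sum(s < scores[i] for s in scores)  # lower perplexity = higher rank
            hits += int(rank <= k)
            total += 1
    return hits / total if total else 0.0

The reported numbers use the GPT-2-based per-user language models described in Section 5.3 to produce these perplexities.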
Furthermore, 1) Per-Hits@k on ground truth data was far below its upper bound 100% but still much higher than Per-Hits@k of generation models, showing the effectiveness of our Per-Hits@k metric to evaluate user personalization. For example, a Per-Hits@1 score of 9.47% indicated that 9.47% of the ground truth responses were ranked as top-1 by its users' language model over Per-Hits@k and paired t-test was performed for other metrics, the significant ones (p < 0.05) over its baseline are marked as * .", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "the 500 users. One explanation why Per-Hits@1 on ground truth data was far below 100% might be that these responses from a user do not necessarily always reveal their persona. 2) Although both Seq2Seq and DialoGPT did not model user personalization explicitly, they had higher than random Per-Hist@k. 3) Compared to Seq2Seq, both Speaker and Memory model had about double Per-Hits@k and some degree of improvements over BLEU, F1, and Distinct-N. Combining the Memory and Speaker models led to further improvement on Per-Hits@k. Seq2Seq model with personalized response embedding form TF module (Seq2Seq+TF) achieved higher Per-Hits@k than all baselines, and our Memory+Speaker+TF model showed the highest Per-Hits@k score, demonstrating the effectiveness of our proposed TF module in capturing user personalization by learning the latent interactions between questions, users, and their responses. 4) Compared to Seq2Seq model, DialoGPT performed worse on content relatedness measures like BLEU and F1 and personal-ization measure Per-Hits@k. But our TF module still improved the personalization on top of Di-aloGPT model, as well as the diversity measure Distinct-N. Note that the perplexity could not be compared between DialoGPT and LSTM-based models since they have different vocabulary sets. 5) Memory+Speaker model had better Per-Hits@k but lower BLEU than Seq2Seq model, while our TF module improved Memory+Speaker model's BLEU and Per-Hits@k at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "Due to the open-ended nature of these discussions, we observed relatively low BLEUs across different models, in line with prior work on personalized generation (Zheng et al., 2020; ).", "cite_spans": [ { "start": 160, "end": 180, "text": "(Zheng et al., 2020;", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "Since we have relatively high Per-Hits@k on the ground truth test set, we hypothesize that those top ranked responses in the ground truth test set by Per-Hits@k might be more likely to contain user personalization information. In other words, for certain question-user pairs, a user is more likely to respond with some personalized content that could be better recognized by their language model. We denote these question-user pairs that are ranked top-k by the Per-Hits@k from the test set as the top-m focused set. We evaluated Per-Hits@k of Seq2Seq+TF on different top-m (m = 1, 2, 3, 4, 5) test set in Table 4 . Note that top-500 is the full test set we used for Per-Hits@k in Table 1 . 
Per-Hits@k was higher on smaller top-m test set, showing the effectiveness of our Per-Hits@k measure, because Per-Hits@k of the same Seq2Seq+TF model was higher on the focused question-user subset when m is small, while lower on the larger and general test set. We then evaluated the baselines and our proposed models on top-1 focused test set in Table 2. Compared to the results on the full test set (Table 1) , the gaps between our models and baselines on BLEU, F1, and Per-Hits@k are larger on this top-1 test set. This suggests that our TF module can help generate more personalized response for a user, especially in a context where a user is more likely to write personalized response.", "cite_spans": [], "ref_spans": [ { "start": 606, "end": 613, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 681, "end": 688, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1092, "end": 1101, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "The Rank of Tucker Decomposition We first studied the influence of the rank of Tucker decomposition used in our TF module, i.e. the shape of the core tenser G. We trained Seq2Seq+TF model with core tensor of shape 20, 30, 40, 50, 60, 70, 80, 90, 100} . From Figure 3(a) , we found that Per-Hits@k first increased along with the rank, indicating that TF module with higher rank might better model latent Per-Hits@5 Per-Hits@4 Per-Hits@3 Per-Hits@2 Per-Hits@1 Average user-questioninteractions. When the rank reaches around 50, there seems to be limited averaged gains on Per-Hits@k. Thus, we chose core tensor of shape 50 \u00d7 50 \u00d7 50 for our TF module.", "cite_spans": [ { "start": 214, "end": 250, "text": "20, 30, 40, 50, 60, 70, 80, 90, 100}", "ref_id": null } ], "ref_spans": [ { "start": 253, "end": 269, "text": "From Figure 3(a)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Analysis and Ablation Studies", "sec_num": "5.6" }, { "text": "R \u00d7 R \u00d7 R, R \u2208 {10,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Ablation Studies", "sec_num": "5.6" }, { "text": "The Balancer \u03bb We then studied the influence of the \u03bb coefficient in Eq 1 which is used to balance the question information from the encoder and personalized response embedding from the TF module. We varied Seq2Seq+TF model's \u03bb from 0 to 1, as shown in Figure 3(b) . Note that Seq2Seq+TF with \u03bb = 0 is the Seq2Seq baseline. We observed that Per-Hits@k increased a lot when \u03bb changed from 0 to 0.1, confirming the effectiveness of our proposed TF module in modeling user personalization. Moreover, TF module was not sensitive to the hyper-parameter \u03bb as Per-Hits@k were stable for \u03bb \u2208 [0.1, 0.4]. Per-Hits@k decreased when \u03bb was larger than 0.4, suggesting the importance to balance the encoder and TF module.", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 264, "text": "Figure 3(b)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Analysis and Ablation Studies", "sec_num": "5.6" }, { "text": "User Factor Matrix To examine whether the TF module has learned user personalization information in user factor matrix U, we trained a Speaker model that initialized the speaker embeddings with user embeddings in U and other initialization methods. 
Specifically we studied the user factor matrix (TF-u) from the Seq2Seq+TF model in Table 1 and compared it with: 1) random speaker embeddings 0 2 4 6 Per-Hits@k-KenLM 0 2 4 6 Per-Hits@k-GPT2 (Random) and 2) average sentence embeddings of each user's historical responses (History) which is used in our Speaker model baseline; 3) we further concatenated the history embeddings and our user embeddings in U to be the initial Speaker embeddings (History+TF-u). The results of the four variants of Speaker model are shown in Table 3 . We found that both History and TF-u initialization improved Per-Hits@k over Random to some extent, suggesting that our TF module has learned some degree of user personalization in its user factor matrix U. Although TF-u had smaller Per-Hits@k improvement over Random, History+TF-u has the best Per-Hits@k, indicating that the personalization information learned by TF module is different to that from users' posting history.", "cite_spans": [], "ref_spans": [ { "start": 332, "end": 339, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 770, "end": 777, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Analysis and Ablation Studies", "sec_num": "5.6" }, { "text": "To test the robustness of our Per-Hits@k metric, we trained trigram language models with the KenLM toolkit (Heafield et al., 2013) for the user specific language models used in Per-Hits@k. While GPT-2 is a transformer-based language model pretrained on large corpus and can be fine-tuned on each user's corpus, KenLM is impossible to follow this approach because it can only be trained in an end-to-end way, i.e. language models of KenLM is directly trained on each user's corpus. Thus we had two Per-Hits@k variants: Per-Hits@k-GPT2 (the one we used in previous sections) and Per-Hits@k-KenLM. We evaluated Per-Hits@k-GPT2 and Per-Hits@k-KenLM for all the models we trained with different settings and plot all (Per-Hits@k-KenLM, Per-Hits@k-GPT2) pairs for k \u2208 {1, 2, 3, 4, 5} in Figure 4 . With a correlation of 0.941 between two variants, we conclude that Per-Hits@k is robust because it produces consistent and similar judgements regardless of which language model it uses.", "cite_spans": [ { "start": 107, "end": 130, "text": "(Heafield et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 781, "end": 789, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Robustness of Personalization Metric", "sec_num": null }, { "text": "This work proposed a tensor factorization module to model user personalization from users' posting history for the task of personalized response generation, where explicit persona or demographic information is unavailable. To automatically evaluate the personalization of generated response, we proposed a new evaluation metric called Per-Hits@k. Extensive experiments on a large-scale dataset show that our proposed TF module outperforms previous methods significantly in terms of its content generation quality and also the personalization of generated responses. Our ablation studies further demonstrated the effectiveness and robustness of our TF based generation framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Discussion", "sec_num": "6" }, { "text": "One limitation to note for our work is that our tensor factorization based framework to model personalization has only been tested on a corpus derived from Reddit (Wu et al., 2021) . 
We acknowledge that potential user population bias might be introduced in this process. Another limitation of our results lies in dealing with new users, i.e., the cold start problem. Future research could further examine these issues, build upon our work to examine how different types of implicit information such as social knowledge and commonsense might be learned together with these user profiles in this tensor factorization manner, and model personalization in multi-turn dialogue systems.", "cite_spans": [ { "start": 163, "end": 180, "text": "(Wu et al., 2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Discussion", "sec_num": "6" } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers, and the members of Georgia Tech SALT group for their feedback. This work is supported in part by grants from Amazon, Salesforce, and the Institute for Data Engineering and Science (IDEaS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Speaker sensitive response evaluation model", "authors": [ { "first": "Jinyeong", "middle": [], "last": "Bak", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Oh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6376--6385", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.568" ] }, "num": null, "urls": [], "raw_text": "JinYeong Bak and Alice Oh. 2020. Speaker sensitive response evaluation model. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 6376-6385, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Role of matrix factorization model in collaborative filtering algorithm: A survey", "authors": [ { "first": "Sheetal", "middle": [], "last": "Dheeraj Kumar Bokde", "suffix": "" }, { "first": "Debajyoti", "middle": [], "last": "Girase", "suffix": "" }, { "first": "", "middle": [], "last": "Mukhopadhyay", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dheeraj kumar Bokde, Sheetal Girase, and Debajyoti Mukhopadhyay. 2015. Role of matrix factorization model in collaborative filtering algorithm: A survey. CoRR, abs/1503.07475.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluation metrics for language models", "authors": [ { "first": "Douglas", "middle": [], "last": "Stanley F Chen", "suffix": "" }, { "first": "Roni", "middle": [], "last": "Beeferman", "suffix": "" }, { "first": "", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F Chen, Douglas Beeferman, and Roni Rosen- feld. 1998. 
Evaluation metrics for language models.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Attend to you: Personalized image captioning with context sequence memory networks", "authors": [ { "first": "Byeongchang", "middle": [], "last": "Cesc Chunseong Park", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "895--903", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cesc Chunseong Park, Byeongchang Kim, and Gunhee Kim. 2017. Attend to you: Personalized image cap- tioning with context sequence memory networks. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 895-903.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The second conversational intelligence challenge (convai2)", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Valentin", "middle": [], "last": "Malykh", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.00098" ] }, "num": null, "urls": [], "raw_text": "Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the second international conference on Human Language Technology Research", "volume": "", "issue": "", "pages": "138--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proceedings of the second international conference on Human Language Tech- nology Research, pages 138-145.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hierarchical neural story generation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.04833" ] }, "num": null, "urls": [], "raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. 
arXiv preprint arXiv:1805.04833.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Scalable modified kneserney language model estimation", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Pouzyrevsky", "suffix": "" }, { "first": "Jonathan", "middle": [ "H" ], "last": "Clark", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "690--696", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H Clark, and Philipp Koehn. 2013. Scalable modified kneser- ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 690-696.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Challenges in building intelligent open-domain dialog systems", "authors": [ { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "ACM Transactions on Information Systems (TOIS)", "volume": "38", "issue": "3", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dia- log systems. ACM Transactions on Information Sys- tems (TOIS), 38(3):1-32.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Smart reply: Automated response suggestion for email", "authors": [ { "first": "Anjuli", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Karol", "middle": [], "last": "Kurach", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Kaufmann", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Tomkins", "suffix": "" }, { "first": "Balint", "middle": [], "last": "Miklos", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "L\u00e1szl\u00f3", "middle": [], "last": "Luk\u00e1cs", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Ganea", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, L\u00e1szl\u00f3 Luk\u00e1cs, Marina Ganea, Peter Young, et al. 2016. Smart reply: Automated re- sponse suggestion for email. In KDD.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exploring personalized neural conversational models", "authors": [ { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "V\u00edtor", "middle": [], "last": "Carvalho", "suffix": "" } ], "year": 2017, "venue": "IJCAI", "volume": "", "issue": "", "pages": "3728--3734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satwik Kottur, Xiaoyu Wang, and V\u00edtor Carvalho. 2017. Exploring personalized neural conversational models. In IJCAI, pages 3728-3734.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Abhaya", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the second workshop on statistical machine translation", "volume": "", "issue": "", "pages": "228--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the second workshop on statistical machine translation, pages 228-231.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A diversity-promoting objective function for neural conversation models", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.03055" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. 
arXiv preprint arXiv:1510.03055.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A persona-based neural conversation model", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Spithourakis", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "994--1003", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 994-1003.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Personalized natural language understanding", "authors": [ { "first": "Xiaohu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ruhi", "middle": [], "last": "Sarikaya", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Yi-Cheng", "middle": [], "last": "Pan", "suffix": "" } ], "year": 2016, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "1146--1150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaohu Liu, Ruhi Sarikaya, Liang Zhao, Yong Ni, and Yi-Cheng Pan. 2016. Personalized natural language understanding. In INTERSPEECH, pages 1146- 1150.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning personalized end-to-end goal-oriented dialog", "authors": [ { "first": "Liangchen", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Wenhao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Zaiqing", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6794--6801", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, and Xu Sun. 2019. Learning personalized end-to-end goal-oriented dialog. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 6794-6801.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Personalizing a dialogue system with transfer reinforcement learning", "authors": [ { "first": "Kaixiang", "middle": [], "last": "Mo", "suffix": "" }, { "first": "Shuangyin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1610.02891" ] }, "num": null, "urls": [], "raw_text": "Kaixiang Mo, Shuangyin Li, Yu Zhang, Jiajun Li, and Qiang Yang. 2016. Personalizing a dialogue system with transfer reinforcement learning. 
arXiv preprint arXiv:1610.02891.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "8026--8037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in neural information processing systems, pages 8026-8037.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Assigning personality/profile to a chatting machine for coherent conversation generation", "authors": [ { "first": "Qiao", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jingfang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 
Assigning personality/profile to a chatting machine for coherent conversation gener- ation.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Personalized machine translation: Preserving original author traits", "authors": [ { "first": "Ella", "middle": [], "last": "Rabinovich", "suffix": "" }, { "first": "Raj", "middle": [ "Nath" ], "last": "Patel", "suffix": "" }, { "first": "Shachar", "middle": [], "last": "Mirkin", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Shuly", "middle": [], "last": "Wintner", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1074--1084", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lu- cia Specia, and Shuly Wintner. 2017. Personal- ized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics: Volume 1, Long Papers, pages 1074-1084.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Making monolingual sentence embeddings multilingual using knowledge distillation", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.09813" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2020. Mak- ing monolingual sentence embeddings multilin- gual using knowledge distillation. arXiv preprint arXiv:2004.09813.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "From eliza to xiaoice: challenges and opportunities with social chatbots", "authors": [ { "first": "Heung-Yeung", "middle": [], "last": "Shum", "suffix": "" }, { "first": "Xiao-Dong", "middle": [], "last": "He", "suffix": "" }, { "first": "Di", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Frontiers of Information Technology & Electronic Engineering", "volume": "19", "issue": "1", "pages": "10--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heung-Yeung Shum, Xiao-dong He, and Di Li. 2018. From eliza to xiaoice: challenges and opportunities with social chatbots. 
Frontiers of Information Tech- nology & Electronic Engineering, 19(1):10-26.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The science of self-report: Implications for research and practice", "authors": [ { "first": "A", "middle": [], "last": "Arthur", "suffix": "" }, { "first": "Christine", "middle": [ "A" ], "last": "Stone", "suffix": "" }, { "first": "Jared", "middle": [ "B" ], "last": "Bachrach", "suffix": "" }, { "first": "", "middle": [], "last": "Jobe", "suffix": "" }, { "first": "Virginia", "middle": [ "S" ], "last": "Howard S Kurtzman", "suffix": "" }, { "first": "", "middle": [], "last": "Cain", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur A Stone, Christine A Bachrach, Jared B Jobe, Howard S Kurtzman, and Virginia S Cain. 1999. The science of self-report: Implications for research and practice. Psychology Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Adapting language models for non-parallel author-stylized rewriting", "authors": [ { "first": "Bakhtiyar", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Verma", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "9008--9015", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srini- vasan, Anandhavelu Natarajan, and Vasudeva Varma. 2020. Adapting language models for non-parallel author-stylized rewriting. In AAAI, pages 9008- 9015.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Some mathematical notes on three-mode factor analysis", "authors": [ { "first": "R", "middle": [], "last": "Ledyard", "suffix": "" }, { "first": "", "middle": [], "last": "Tucker", "suffix": "" } ], "year": 1966, "venue": "Psychometrika", "volume": "31", "issue": "3", "pages": "279--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ledyard R Tucker. 1966. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279-311.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Generating lyrics with variational autoencoder and multi-modal artist embeddings", "authors": [ { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" }, { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Amirpasha", "middle": [], "last": "Ghabussi", "suffix": "" }, { "first": "Vineet", "middle": [], "last": "John", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.08318" ] }, "num": null, "urls": [], "raw_text": "Olga Vechtomova, Hareesh Bahuleyan, Amirpasha Ghabussi, and Vineet John. 2018. Generating lyrics with variational autoencoder and multi-modal artist embeddings. arXiv preprint arXiv:1812.08318.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.05869" ] }, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. 
arXiv preprint arXiv:1506.05869.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Transfertransfo: A transfer learning approach for neural network based conversational agents", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.08149" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Personalized response generation via generative split memory network", "authors": [ { "first": "Yuwei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2021, "venue": "North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuwei Wu, Xuezhe Ma, and Diyi Yang. 2021. Person- alized response generation via generative split mem- ory network. In North American Chapter of the As- sociation for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A neural topical expansion framework for unstructured persona-oriented dialogue generation", "authors": [ { "first": "Minghong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Pengjie", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Zhaochun", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.02153" ] }, "num": null, "urls": [], "raw_text": "Minghong Xu, Piji Li, Haoran Yang, Pengjie Ren, Zhaochun Ren, Zhumin Chen, and Jun Ma. 2020. A neural topical expansion framework for unstruc- tured persona-oriented dialogue generation. arXiv preprint arXiv:2002.02153.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Planning and generating natural and diverse disfluent texts as augmentation for disfluency detection", "authors": [ { "first": "Jingfeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhaoran", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1450--1460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingfeng Yang, Diyi Yang, and Zhaoran Ma. 2020. Planning and generating natural and diverse disflu- ent texts as augmentation for disfluency detection. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1450-1460.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Investigating deep reinforcement learning techniques in personalized dialogue generation", "authors": [ { "first": "Min", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Joshua", "middle": [ "Z" ], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 SIAM International Conference on Data Mining", "volume": "", "issue": "", "pages": "630--638", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Yang, Qiang Qu, Kai Lei, Jia Zhu, Zhou Zhao, Xiaojun Chen, and Joshua Z Huang. 2018. Inves- tigating deep reinforcement learning techniques in personalized dialogue generation. In Proceedings of the 2018 SIAM International Conference on Data Mining, pages 630-638. SIAM.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Personalized response generation via domain adaptation", "authors": [ { "first": "Min", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Lianqiang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zigang", "middle": [], "last": "Cao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "1021--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Yang, Zhou Zhao, Wei Zhao, Xiaojun Chen, Jia Zhu, Lianqiang Zhou, and Zigang Cao. 2017. Per- sonalized response generation via domain adapta- tion. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1021-1024.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Attention with intention for a neural network conversation model", "authors": [ { "first": "Kaisheng", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.08565" ] }, "num": null, "urls": [], "raw_text": "Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conver- sation model. 
arXiv preprint arXiv:1510.08565.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Personalizing dialogue agents: I have a dog", "authors": [ { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.07243" ] }, "num": null, "urls": [], "raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Dialogpt: Large-scale generative pre-training for conversational response generation", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2020, "venue": "ACL, system demonstration", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Large-scale generative pre-training for conversational response generation. In ACL, system demonstration.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Personalized dialogue generation with diversified traits", "authors": [ { "first": "Yinhe", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Guanyi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Song", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.09672" ] }, "num": null, "urls": [], "raw_text": "Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019. Personalized dialogue generation with diversified traits. arXiv preprint arXiv:1901.09672.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A pre-training based personalized dialogue generation model with persona-sparse data", "authors": [ { "first": "Yinhe", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Rongsheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaoxi", "middle": [], "last": "Mao", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "9693--9700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhe Zheng, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. 2020. 
A pre-training based personalized dialogue generation model with persona-sparse data. In AAAI, pages 9693-9700.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "LSTM-based Seq2Seq model with our proposed tensor factorization module. The cell vector c_m^e from the encoder and the attention mechanism are omitted for brevity." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Input representation for the DialoGPT model with the TF module. The TF module's personalized response embedding p_{i,j} is added to the response token's word embedding, token type embedding and positional embedding. (https://www.reddit.com/r/AskReddit/)" }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "DialoGPT: A response generation model based on DialoGPT-medium provided in Zhang et al. 2019; (2) Seq2Seq: A standard Seq2Seq model with attention mechanisms with no personalization information;" }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Per-Hits@k of Seq2Seq+TF model with (a): different Tucker's rank; (b): different balancer \u03bb." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "Per-Hits@k calculated by GPT2 and KenLM for different models. Pearson correlation r=0.941, with p < .001." }, "TABREF1": { "num": null, "text": "Performance comparison with baselines. A Wilcoxon signed-rank test was performed for Per-Hits@k and a paired t-test was performed for other metrics; the significant ones (p < 0.05) over the corresponding baseline are marked with *.", "content": "
Method | F1 % | BLEU % | Distinct-1 % | Distinct-2 % | PPL | Per-Hits@1 % | @2 | @3 | @4 | @5 | Avg.
Random ranking | - | - | - | - | - | 0.20 | 0.40 | 0.60 | 0.80 | 1.00 | 0.60
Ground truth | - | - | 7.25 | 45.51 | - | 9.47 | 15.73 | 19.93 | 23.00 | 25.40 | 18.71
DialoGPT | 13.64 | 0.86 | 3.24 | 18.68 | 27.20 | 0.40 | 0.67 | 1.00 | 1.27 | 1.60 | 0.99
Seq2Seq | 14.42 | 1.22 | 0.66 | 4.01 | 92.24 | 0.60 | 0.87 | 1.00 | 1.20 | 1.53 | 1.04
Speaker | 15.34 | 1.41 | 2.79 | 14.27 | 98.75 | 1.00 | 1.93 | 2.93 | 3.40 | 3.80 | 2.61
Memory | 14.42 | 1.28 | 3.34 | 16.36 | 108.27 | 1.27 | 2.20 | 2.73 | 3.20 | 3.67 | 2.61
Memory+Speaker | 14.60 | 1.11 | 3.52 | 17.64 | 110.31 | 1.53 | 2.73 | 3.47 | 4.33 | 5.00 | 3.41
DialoGPT+TF | 13.61 | 0.80 | 3.61 | 20.40 | 27.01 | 0.53 | 1.00 | 1.27 | 1.67 | 1.73 | 1.24 *
Seq2Seq+TF | 15.40 * | 1.58 * | 3.35 * | 15.95 * | 105.21 | 2.07 * | 3.20 * | 4.53 * | 5.40 * | 5.80 * | 4.20
Speaker+TF | 15.33 | 1.59 | 3.20 | 17.19 * | 107.88 | 2.07 * | 3.33 * | 3.80 | 4.67 | 5.40 * | 3.85 *
Memory+TF | 14.99 * | 1.38 * | 3.52 | 16.69 | 107.46 | 2.40 | 3.33 * | 4.00 * | 5.00 * | 5.60 * | 4.07
Memory+Speaker+TF | 14.99 * | 1.40 * | 3.34 | 16.29 | 107.55 | 2.60
Top-1 focused test set:
Method | F1 % | BLEU % | Distinct-1 % | Distinct-2 % | PPL | Per-Hits@1 % | @2 | @3 | @4 | @5 | Avg.
Ground truth | - | - | 26.51 | 73.31 | - | 100 | 100 | 100 | 100 | 100 | 100
DialoGPT | 15.67 | 0.11 | 28.33 | 65.18 | 20.96 | 1.41 | 2.11 | 2.82 | 2.82 | 2.82 | 2.39
Seq2Seq | 16.05 | 0.47 | 21.69 | 52.29 | 60.10 | 2.11 | 2.11 | 2.82 | 4.93 | 4.93 | 3.38
Speaker | 19.69 | 4.40 | 19.42 | 48.47 | 55.81 | 3.52 | 6.34 | 9.15 | 9.86 | 9.86 | 7.75
Memory | 19.02 | 4.33 | 23.08 | 54.56 | 60.44 | 4.23 | 7.04 | 8.45 | 9.15 | 9.86 | 7.75
Memory+Speaker | 19.89 | 3.18 | 22.98 | 58.51 | 59.03 | 4.93 | 7.75 | 11.97 | 14.79 | 16.20 | 11.13
DialoGPT+TF | 15.11 | 0.17 | 30.29 | 65.19 | 21.30 | 2.11 | 2.82 | 2.82 | 3.52 | 3.52 | 2.96 *
Seq2Seq+TF 9.86 Speaker+TF 22.98 * 5.77 20.43 49.75 57.38 20.70 4.16 22.72 14.0815.4915.4913.80 *
Memory+TF | 21.31 * | 3.10 | 23.45 | 55.10 | 57.65 | 11.27 * | 12.68 | 13.38 | 15.49 | 16.20 | 13.80 *
Memory+Speaker+TF | 20.79 | 2.31 | 23.67 | 58.58 | 57.64 | 10.56
", "html": null, "type_str": "table" }, "TABREF2": { "num": null, "text": "Performance comparison with baselines on top-1 focused test set. A Wilcoxon signed-rank test was performed for", "content": "", "html": null, "type_str": "table" }, "TABREF3": { "num": null, "text": "15.60 * 1.59 * 3.02 15.42 * 101.30 1.33 2.67 3.67 * 4.20 * 4.93 * 3.36 *", "content": "
Method | F1 % | BLEU % | Distinct-1 % | Distinct-2 % | PPL | Per-Hits@1 % | @2 | @3 | @4 | @5 | Average
Random | 15.45 | 1.36 | 3.09 | 14.90 | 97.05 | 1.33 | 1.73 | 1.87 | 2.53 | 2.93 | 2.08
History | 15.34 | 1.41 * | 2.79 | 14.27 | 98.75 | 1.00 | 1.93 | 2.93 * | 3.40 | 3.80 | 2.61
TF-u | 15.24 | 1.48 * | 2.48 | 12.52 | 101.77 | 1.07 | 1.60 | 2.40 | 2.73 | 2.93 | 2.15
History+TF-u | 15.60 * | 1.59 * | 3.02 | 15.42 * | 101.30 | 1.33 | 2.67 | 3.67 * | 4.20 * | 4.93 * | 3.36 *
", "html": null, "type_str": "table" }, "TABREF4": { "num": null, "text": "Speaker model with different speaker embedding initialization methods. A Wilcoxon signed-rank test was performed for Per-Hits@k and paired t-test was performed for other metrics, the significant ones (p < 0.05) over Random are marked as *", "content": "
top-m | Per-Hits@1 % (from Seq2Seq+TF) | @2 | @3 | @4 | @5 | Avg.
1 | 9.86 | 13.38 | 16.90 | 16.90 | 17.61 | 14.93
2 | 6.78 | 9.32 | 11.86 | 11.86 | 12.71 | 10.51
3 | 6.35 | 8.36 | 11.04 | 11.37 | 12.04 | 9.83
4 | 5.51 | 7.54 | 9.86 | 11.01 | 11.59 | 9.10
5 | 4.99 | 6.82 | 8.92 | 9.97 | 10.50 | 8.24
500 | 2.07 | 3.20 | 4.53 | 5.40 | 5.80 | 4.20
", "html": null, "type_str": "table" }, "TABREF5": { "num": null, "text": "Per-Hits@k on different top-m focused test sets.", "content": "", "html": null, "type_str": "table" } } } }