Learning to Memorize Entailment and Discourse Relations for Persona-Consistent Dialogues

Ruijun Chen1, Jin Wang1*, Liang-Chih Yu2 and Xuejie Zhang1
1School of Information Science and Engineering, Yunnan University, Yunnan, China
2Department of Information Management, Yuan Ze University, Taiwan
chenrj@mail.ynu.edu.cn, wangjin@ynu.edu.cn, lcyu@saturn.yzu.edu.tw, xjzhang@ynu.edu.cn

Abstract
Maintaining engagement and consistency is particularly important in dialogue systems. Existing works have improved the performance of dialogue systems by intentionally learning interlocutor personas with sophisticated network structures. One issue with this approach is that it requires more personal corpora with annotations. Additionally, these models typically perform next utterance prediction to generate a response but neglect the discourse coherence of the entire conversation. To address these issues, this study proposes a method of learning to memorize entailment and discourse relations for persona-consistent dialogue tasks. Entailment text pairs from a natural language inference dataset were applied to learn latent entailment relations as external memories through a premise-to-hypothesis generation task. Furthermore, an internal memory with a similar architecture was applied to the discourse information in the dialogue. Placing orthogonality restrictions on these two memory spaces ensures that the latent entailment relations remain dialogue-independent.
Both memories collaborate to obtain entailment and discourse representations for generation, allowing a deeper understanding of both consistency and coherence. Experiments on two large public datasets, PersonaChat and DSTC7-AVSD, demonstrated the effectiveness of the proposed method. Both automatic and human evaluations indicate that the proposed model outperforms several strong baselines in terms of both persona consistency and response coherence. Our source code is available at https://github.com/Chenrj233/LMEDR.

Introduction
Traditional chit-chat models lack specificity and personality consistency. Even with access to a sufficiently large dataset, they tend to generate piecemeal and uninformative responses in a chit-chat setting. For example, given two consecutive questions with similar meanings in a two-round dialogue, i.e., "what is your job" and "what do you do", a model may reply "I am a lawyer" to the former but "I am a doctor" to the latter (Welleck et al. 2020). This issue arises because such models lack a consistent personality as well as an explicit memory of plausibility, as they are typically trained to produce a response given only the recent dialogue history (Shum, He, and Li 2018).

*Corresponding author
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: The conceptual diagram of introducing natural language inference in persona-based dialogue. (The original figure shows the persona profiles of the human and the model alongside a sample dialogue whose responses are annotated as Entailment, Coherent, or Incoherent.)

One solution that maintains consistency in a dialogue system is to provide a set of persona profiles that describe the character and then generate responses according to the persona. A persona can be defined as a composition of identity elements, such as profiles and background personal facts. The expected outcome is that dialogue models generate responses consistent with the given persona. The PersonaChat dataset (Zhang et al. 2018), widely adopted to support the training of persona-consistent dialogues, was built by pairs of annotators who were each asked to act as a predefined persona and chat naturally to get to know each other during the conversation. However, given the time and effort needed to annotate more persona corpora to cover all possibilities, it is difficult to extend such persona-related information to the daily usage of dialogue.
As humans, our knowledge of the concepts and the semantic relationships behind language allows us to rearrange unstructured data so that we can understand and analyze it. Essentially, we can robustly learn novel concepts with minimal supervision, benefiting from the well-known ability of natural language inference (NLI). Figure 1 shows an example of introducing NLI into a persona-based dialogue. Given a persona as a premise, we can determine whether the hypothesis of the response utterance is true (entailment), false (contradiction), or undetermined (neutral).

arXiv:2301.04871v1 [cs.CL] 12 Jan 2023

Figure 2: Learning to memorize the entailment relations in latent variables. (The original figure shows the BART encoder reading the premise wrapped with [z], [SOP], and [EOP] tokens, an entailment relation memory M addressed through h_[z], and the BART decoder generating the hypothesis starting from [SOH].)

Recent studies have sought to improve the consistency of the dialogue system by modeling the understanding between interlocutors (Liu et al. 2020). Song et al. (2021) disentangled persona-based dialogue generation into two subtasks—response generation and consistency understanding—and used unlikelihood training to make the decoder generate as few contradictory dialogue responses as possible. However, multiple subtasks require multiple encoders, leading to a complex generation model structure. Nie et al. (2021) introduced a contradiction detection task to evaluate consistency in dialogues.

Despite continuing efforts to improve the engagement and consistency of dialogue systems, understanding persona-response consistency is still difficult.
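This NLI view of consistency (persona sentences as premises, a candidate response as the hypothesis) can be sketched as a simple filter. The sketch below is illustrative only: `nli` is a hypothetical stand-in for any trained three-way NLI classifier, and `toy_nli` is a toy substitute, not anything from this paper.

```python
LABELS = ("entailment", "neutral", "contradiction")

def persona_consistent(persona_sentences, response, nli):
    """A response is persona-consistent here if no persona premise
    contradicts it; nli(premise, hypothesis) is assumed to return
    one of LABELS (any trained NLI classifier could fill this role)."""
    return all(nli(p, response) != "contradiction" for p in persona_sentences)

# Toy stand-in for a real NLI model, for illustration only: it flags a
# contradiction when premise and hypothesis disagree about "teacher".
def toy_nli(premise, hypothesis):
    if ("teacher" in premise) != ("teacher" in hypothesis):
        return "contradiction"
    return "neutral"
```

With `toy_nli`, the persona "i am an english teacher" would reject the response "i am a doctor" but accept "i am a teacher".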
The key challenges are twofold: 1) Existing methods apply sophisticated structures to learn persona consistency, which requires more annotated corpora for training. However, persona-based corpora are still insufficient and difficult to collect. 2) Dialogue generation models typically neglect discourse information. Discourse coherence is a crucial component of the effectiveness of a conversation, encompassing how utterances are connected and how the entire dialogue is organized to convey information to the interlocutor. Existing models usually perform next utterance prediction for response generation but ignore dialogue discourse coherence. As indicated in Figure 1, "I am actually an english teacher" seems to be an appropriate and persona-consistent response to the query. However, this response is incoherent in the context of the entire conversation.

To address these issues, this study proposes a method of learning to memorize entailment and discourse relations for persona-consistent dialogue tasks. We applied an encoder-decoder architecture from BART (Lewis et al. 2020). To explicitly understand the consistency of personas, we designed an external memory to store the latent entailment relations between premises and the entailed hypotheses, independent of the dialogue itself. In addition, discourse relations were learned and stored in an internal latent memory. The latent entailment relations are ensured to be dialogue-independent by imposing orthogonality constraints on the two memory spaces. Given personas and dialogue queries, both memories work jointly to obtain the entailment and discourse representations through the BART encoder. Generation is finally accomplished by the BART decoder with two extra training objectives, which further acquires the ability to understand both consistency and coherence.

Comparative experiments were conducted on the PersonaChat (Dinan et al. 2020) and DSTC7-AVSD (Alamri et al. 2019) datasets. Both automatic and human evaluations show that the proposed method generalizes well under different settings and outperforms several strong baselines on most metrics, especially persona consistency, indicating that the proposed method can produce better persona-consistent dialogue responses.

The remainder of this paper is organized as follows. Section 2 provides a brief review of related work. Section 3 describes the proposed model, which learns to memorize entailment and discourse relations by using latent variables. Section 4 summarizes the experimental setup for the two public dialogue datasets and the corresponding analysis of the results. Finally, conclusions are drawn in Section 5.

Related Work
Persona-based Dialogues
Generation-based dialogue systems usually use the sequence-to-sequence (seq2seq) model (Sutskever, Vinyals, and Le 2014) as the backbone. After a persona is introduced into the dialogue, an effective method is needed to integrate the role information into the dialogue, such as persona embedding (Li et al. 2016b). Subsequently, with the development of large-scale pre-trained language models, an increasing number of methods (Wolf et al. 2019; Roller et al. 2021; Lin et al. 2021; Zheng et al. 2020) have leveraged pre-training and fine-tuning to improve persona-based dialogue, but the problem of dialogue consistency remains unsolved. Therefore, Liu et al. (2020) attempted to model the understanding between interlocutors to improve the consistency of dialogue systems. A new perspective (Song et al. 2021) decomposes the persona-based dialogue task into consistency understanding and dialogue generation, significantly improving persona-consistent generation based on natural language inference.
Latent Modeling
In a dialogue scene, the factors that associate the dialogue context with the dialogue response are often difficult to observe and explain; therefore, modeling the latent space of a dialogue can help improve the performance of dialogue generation. Optimus (Li et al. 2020) combines the advantages of BERT (Devlin et al. 2019) and GPT-2 (Radford et al. 2020) for large-scale pre-training in the form of a VAE (Kingma and Welling 2014) to model the latent variable space. PLATO (Bao et al. 2020) introduces discrete latent variables to solve the one-to-many relationship in response generation. DialogVED (Chen et al. 2022) introduces continuous latent variables into an enhanced encoder-decoder pre-training framework to improve the relevance and diversity of dialogue responses. All these methods show great promise for modeling dialogue-related features in latent space. This paper extends the idea by additionally memorizing NLI relations as latent dialogue-independent features.

Learning to Memorize for Persona-consistent Dialogue
The task of dialogue generation can be defined as next utterance prediction, where a target response utterance R = [r_1, r_2, ..., r_{|R|}] is predicted given a conversation query Q = [q_1, q_2, ..., q_{|Q|}] according to given persona constraints C = [c_1, c_2, ..., c_{|C|}]. For convenience, the sentences (R, Q, C) are mapped to the vector representation x = {R, Q, C}. Further, natural language inference data (Welleck et al. 2020; Williams, Nangia, and Bowman 2018) N = {(P^(n), H^(n))}_{n=1}^{N}, which consists of entailed text pairs of premise and hypothesis, was used to learn the entailment relation to preserve consistency in dialogue generation.

Figure 2 shows the overall architecture of the proposed learning to memorize entailment and discourse relations model for persona-consistent dialogue. The backbone model is based on BART (Lewis et al.
2020), which performs repeated two-stage training, i.e., learning to memorize and persona-consistent dialogue generation. The key insight of the proposed model is that it maps both the entailment relations and the discourse information to latent spaces. Based on this information, an external memory module enforces premise-to-hypothesis generation to map each textual entailed pair to the dialogue-independent latent space, which can be memorized and stored in a memory structure M. Similarly, the discourse information is mapped using an internal memory module N to learn the dialogue-related features. For generation, both entailment and discourse representations can be obtained from memory to enhance persona consistency in dialogue generation with additional entailment and discourse information.

Learning to Memorize
Entailment Relation Memory (ERM). ERM is an external memory used to learn and store entailment relations for persona consistency. If a given hypothesis H can be inferred from the premise P, the relationship of the pair is entailment. For persona-based dialogue, such an entailment relationship can be introduced to generate consistent responses.

Given a dataset of textual entailed pairs N = {(P^(n), H^(n))}_{n=1}^{N}, textual entailment generation was adopted to learn a latent variable z, which represents the latent form of entailment relations in natural language inference, defined as

p(H, z | P) = p(z | P) p(H | z, P)    (1)

Based on the BART encoder, we introduce a special latent token [z], a start-of-premise token [SOP], and an end-of-premise token [EOP] to the premise for latent entailment relation learning.
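As a concrete illustration, the encoder input is simply the premise token sequence wrapped with these special markers. A minimal sketch follows; the token strings are the paper's, but the list-of-strings interface is an assumption, not the actual tokenizer:

```python
def build_erm_input(premise_tokens):
    """Wrap a tokenized premise with the ERM special tokens.

    The final hidden state of the leading [z] token is later used to
    address the entailment relation memory (Eq. 4 in the text).
    """
    return ["[z]", "[SOP]"] + list(premise_tokens) + ["[EOP]"]

# Example premise taken from the persona in Figure 1.
tokens = build_erm_input(["i", "work", "at", "a", "veterinarians", "office", "."])
```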
By using the tokenizer and adding position embeddings, the input of the premise is transformed as

E_ERM = [e_[z], e_[SOP], e_{p_1}, e_{p_2}, ..., e_{p_|P|}, e_[EOP]]    (2)

We introduce a latent entailment relation memory structure M parameterized by θ, where each element represents a certain latent factor, defined as

M = [M_1, ..., M_k] ∈ R^{k×d}    (3)

where k is the number of latent factors in entailment relations, and d is the dimension of each memory element. The hidden state of the last layer of the BART encoder corresponding to e_[z], that is, h_[z], is applied to learn the distribution of the latent entailment relations z ∼ p(z | P) by

π = softmax(W_π h_[z] + b_π)    (4)

where π represents the probability of each element in M. Then, the latent entailment representation z can easily be obtained from M:

z = Σ_{i=1}^{k} π_i M_i    (5)

To memorize the latent entailment relations, we use the entailment representation z drawn from the memory M with the weights π, along with the premise, to generate the corresponding hypothesis. The obtained entailment representation z is added to the embedding of the special start-of-hypothesis token [SOH] of the decoder, denoted as

ê_[SOH] = e_[SOH] + z    (6)

The latent memory can keep track of the entailment relation with the representation of the source premise by both reading and writing during generation. Notably, it can be updated by backpropagation through the premise-to-hypothesis generation.

The objective of this pre-training stage is to optimize the memory M and the model parameters ϕ by minimizing the language modeling loss:

L_ERM = −E_{z∼p_{θ,ϕ}(z|P)} log p_{θ,ϕ}(H | z, P)
      = −E_{z∼p_{θ,ϕ}(z|P)} Σ_{t=1}^{|H|} log p_{θ,ϕ}(H_t | z, P, H_{<t})
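The memory read in Eqs. (4) and (5) is just a softmax-weighted mixture over the k memory slots. A dependency-free sketch (shapes and names are illustrative, not the authors' implementation, which would use tensor operations):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of logits
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def read_memory(h_z, M, W, b):
    """Read a latent entailment representation z from memory M.

    h_z : encoder hidden state of the [z] token, a length-d list
    M   : k x d memory matrix (list of k rows)        (Eq. 3)
    W   : k x d projection matrix, b : length-k bias  (Eq. 4)
    """
    # slot logits: W @ h_z + b, one logit per memory slot
    logits = [sum(w * h for w, h in zip(row, h_z)) + b_j
              for row, b_j in zip(W, b)]
    pi = softmax(logits)                       # Eq. (4)
    d = len(M[0])
    # z = sum_i pi_i * M_i                     # Eq. (5)
    z = [sum(pi[i] * M[i][j] for i in range(len(M))) for j in range(d)]
    return pi, z
```

The returned z is then added to the [SOH] token embedding on the decoder side, as in Eq. (6); during training, gradients flow back through pi into both W and the memory rows M_i.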