{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:25:00.347909Z" }, "title": "Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts", "authors": [ { "first": "Wenhao", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Notre Dame \u2661 University of Washington", "location": {} }, "email": "" }, { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Notre Dame \u2661 University of Washington", "location": {} }, "email": "chezhu@microsoft.com" }, { "first": "Lianhui", "middle": [], "last": "Qin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Notre Dame \u2661 University of Washington", "location": {} }, "email": "lianhuiq@cs.washington.edu" }, { "first": "Zhihan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Notre Dame \u2661 University of Washington", "location": {} }, "email": "zzhang23@nd.edu" }, { "first": "Tong", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Notre Dame \u2661 University of Washington", "location": {} }, "email": "tzhao2@nd.edu" }, { "first": "Meng", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Notre Dame \u2661 University of Washington", "location": {} }, "email": "mjiang2@nd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Generative commonsense reasoning (GCR) in natural language is to reason about the commonsense while generating coherent text. Recent years have seen a surge of interest in improving the generation quality of commonsense reasoning tasks. Nevertheless, these approaches have seldom investigated diversity in the GCR tasks, which aims to generate alternative explanations for a real-world situation or predict all possible outcomes. Diversifying GCR is challenging as it expects to generate multiple outputs that are not only semantically different but also grounded in commonsense knowledge. In this paper, we propose MoKGE, a novel method that diversifies the generative reasoning by a mixture of expert (MoE) strategy on commonsense knowledge graphs (KG). A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs. Empirical experiments demonstrated that MoKGE can significantly improve the diversity while achieving on par performance on accuracy on two GCR benchmarks, based on both automatic and human evaluations.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Generative commonsense reasoning (GCR) in natural language is to reason about the commonsense while generating coherent text. Recent years have seen a surge of interest in improving the generation quality of commonsense reasoning tasks. Nevertheless, these approaches have seldom investigated diversity in the GCR tasks, which aims to generate alternative explanations for a real-world situation or predict all possible outcomes. Diversifying GCR is challenging as it expects to generate multiple outputs that are not only semantically different but also grounded in commonsense knowledge. In this paper, we propose MoKGE, a novel method that diversifies the generative reasoning by a mixture of expert (MoE) strategy on commonsense knowledge graphs (KG). 
A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs. Empirical experiments demonstrated that MoKGE can significantly improve the diversity while achieving on par performance on accuracy on two GCR benchmarks, based on both automatic and human evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "An important desideratum of natural language generation (NLG) is to produce outputs that are not only correct but also diverse (Tevet and Berant, 2021) . The term \"diversity\" in NLG is defined as the ability of a generative model to create a set of possible outputs that are each valid given the input and vary as widely as possible in terms of content, language style, and word variability (Gupta et al., 2018) . This research problem is also referred as one-to-many generation (Shen et al., 2019; Cho et al., 2019; Shen et al., 2022) .", "cite_spans": [ { "start": 127, "end": 151, "text": "(Tevet and Berant, 2021)", "ref_id": "BIBREF34" }, { "start": 391, "end": 411, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF17" }, { "start": 479, "end": 498, "text": "(Shen et al., 2019;", "ref_id": "BIBREF31" }, { "start": 499, "end": 516, "text": "Cho et al., 2019;", "ref_id": "BIBREF10" }, { "start": 517, "end": 535, "text": "Shen et al., 2022)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Diversity in NLG has been extensively studied for various tasks in the past few years, such as machine translation (Shen et al., 2019) and paraphrase \u00a7 Codes of our model and baselines are available at https://github.com/DM2-ND/MoKGE. art soccer instrument song key [1] : UsedFor [2] : PartOf [3] : IsA [4]: RelatedTo [1] [1]", "cite_spans": [ { "start": 115, "end": 134, "text": "(Shen et al., 2019)", "ref_id": "BIBREF31" }, { "start": 266, "end": 269, "text": "[1]", "ref_id": "BIBREF2" }, { "start": 280, "end": 283, "text": "[2]", "ref_id": null }, { "start": 293, "end": 296, "text": "[3]", "ref_id": null }, { "start": 318, "end": 321, "text": "[1]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[4] [3] [1 ] [3] [4]", "cite_spans": [ { "start": 4, "end": 7, "text": "[3]", "ref_id": null }, { "start": 13, "end": 16, "text": "[3]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[4] [1] [3] [4] [2] [4] [1] (1) You can produce music when pressing keys on the piano, so it is an instrument . 2Piano is a musical instrument used in songs to produce different musical tones . 3Piano is a kind of art form . generation (Gupta et al., 2018) . In these tasks, output spaces are constrained by input context, i.e., the contents of multiple outputs should be similar, and globally, under the same topic. However, many NLG tasks, e.g., generative commonsense reasoning, pose unique challenges for generating multiple reasonable outputs that are semantically different. Figure 1 shows an example in the commonsense explanation generation (ComVE) task. The dataset has collected explanations to counterfactual statements for sense-making from three annotators (Wang et al., 2020) . 
From the annotations, we observed that different annotators gave explanations to the unreasonable statement from different perspectives to make them diverse in terms of content, e.g., wrong effect and inappropriate usage.", "cite_spans": [ { "start": 4, "end": 7, "text": "[1]", "ref_id": "BIBREF2" }, { "start": 16, "end": 19, "text": "[2]", "ref_id": null }, { "start": 24, "end": 27, "text": "[1]", "ref_id": "BIBREF2" }, { "start": 236, "end": 256, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF17" }, { "start": 770, "end": 789, "text": "(Wang et al., 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 581, "end": 589, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to create diversity, existing methods attempted to produce uncertainty by introducing random noise into a latent variable (Gupta et al., 2018) or sampling next token widely from the vo- cabulary . However, these methods were not able to explicitly control varying semantics units and produce outputs of diverse content. Meanwhile, the input text alone contains too limited knowledge to support diverse reasoning and produce multiple reasonable outputs (Yu et al., 2022c) . As an example, Table 1 shows the human evaluation results on two GCR tasks. While human annotators were able to produce 2.60 different yet reasonable explanations on the ComVE dataset, one SoTA diversity-promoting method (i.e., nucleus sampling ) could produce only 2.15 reasonable explanations.", "cite_spans": [ { "start": 131, "end": 151, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF17" }, { "start": 461, "end": 479, "text": "(Yu et al., 2022c)", "ref_id": "BIBREF44" } ], "ref_spans": [ { "start": 497, "end": 504, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To improve the diversity in outputs for GCR tasks, we investigated the ComVE task and found that 75% of the concepts (nouns and verbs) in human annotations were among 2-hop neighbors of the concepts contained in the input sequence on the commonsense KG ConceptNet 1 . Therefore, to produce diverse GCR, our idea is enabling NLG models to reason from different perspectives of knowledge on commonsense KG and use them to generate diverse outputs like the human annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, we present a novel Mixture of Knowledge Graph Expert (MoKGE) method for diverse generative commonsense reasoning on KG. MoKGE contains two major components: (i) a knowledge graph (KG) enhanced generative reasoning module to reasonably associate relevant concepts into the generation process, and (ii) a mixture of expert (MoE) module to produce diverse reasonable outputs. Specifically, the generative reasoning module performs compositional operations on KG to obtain structure-aware representations of concepts and relations. Then, each expert uses these representations to seek different yet relevant sets of concepts and sends them into a standard Transformer model to generate the corresponding output. 
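As a toy, self-contained illustration of this expert-specific concept selection (not the released implementation; the expert embedding, the linear scorer, and the random concept vectors below are placeholder assumptions), consider the following sketch:

import torch
import torch.nn as nn

# Toy sketch: each expert perturbs the shared concept representations with its own
# embedding before scoring, so different experts rank different concepts highly.
# Everything here is illustrative; MoKGE builds the concept representations with a
# graph encoder over the commonsense KG and generates with a Transformer (Section 3).
torch.manual_seed(0)
dim, num_experts, num_concepts, top_n = 16, 3, 8, 3
concept_repr = torch.randn(num_concepts, dim)   # stand-in for structure-aware concept embeddings
expert_embed = nn.Embedding(num_experts, dim)   # one learnable embedding per knowledge expert
scorer = nn.Linear(dim, 1)                      # stand-in for the concept-selection scorer

for k in range(num_experts):
    scores = scorer(concept_repr + expert_embed.weight[k]).squeeze(-1)
    selected = scores.topk(top_n).indices.tolist()
    print(k, selected)  # each expert would pass its own concept set to the generator

In MoKGE, the concept representations come from a graph encoder over the commonsense KG, and each expert's selected concept set conditions a shared Transformer generator, as detailed in Section 3.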
To encourage different experts to specialize in different reasoning abilities, we employ the stochastic hard-EM algorithm by assigning full responsibility of the largest joint probability to each expert.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conducted experiments on two GCR benchmarks, i.e., commonsense explanation generation and abductive commonsense reasoning. Empirical experiments demonstrated that our proposed MoKGE can outperform existing diversitypromoting generation methods in diversity, while achieving on par performance in quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, this is the first work to boost diversity in NLG by diversifying knowledge reasoning on commonsense KG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generating multiple valid outputs given a source sequence has a wide range of applications, such as machine translation (Shen et al., 2019) , paraphrase generation (Gupta et al., 2018) , question generation (Cho et al., 2019) , dialogue system (Dou et al., 2021) , and story generation . For example, in machine translation, there are often many plausible and semantically equivalent translations due to information asymmetry between different languages (Lachaux et al., 2020) .", "cite_spans": [ { "start": 120, "end": 139, "text": "(Shen et al., 2019)", "ref_id": "BIBREF31" }, { "start": 164, "end": 184, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF17" }, { "start": 207, "end": 225, "text": "(Cho et al., 2019)", "ref_id": "BIBREF10" }, { "start": 244, "end": 262, "text": "(Dou et al., 2021)", "ref_id": "BIBREF13" }, { "start": 454, "end": 476, "text": "(Lachaux et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work 2.1 Diversity Promoting Text Generation", "sec_num": "2" }, { "text": "Methods of improving diversity in NLG have been explored from various perspectives. Sampling-based decoding is one of the most effective solutions to improve diversity. For example, nucleus sampling samples next tokens from the dynamic nucleus of tokens containing the vast majority of the probability mass, instead of decoding text by maximizing the likelihood. Another line of work focused on introducing random noise (Gupta et al., 2018) or changing latent variables (Lachaux et al., 2020) to produce uncertainty. In addition, Shen et al. (2019) adopted a mixture of experts to diversify machine translation, where a minimum-loss predictor is assigned to each source input. Shi et al. (2018) employed an inverse reinforcement learning approach for unconditional diverse text generation.", "cite_spans": [ { "start": 420, "end": 440, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF17" }, { "start": 470, "end": 492, "text": "(Lachaux et al., 2020)", "ref_id": "BIBREF22" }, { "start": 677, "end": 694, "text": "Shi et al. 
(2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work 2.1 Diversity Promoting Text Generation", "sec_num": "2" }, { "text": "However, no existing work considered performing diverse knowledge reasoning to generate multiple reasonable outputs of different contents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work 2.1 Diversity Promoting Text Generation", "sec_num": "2" }, { "text": "Incorporating external knowledge is essential for many NLG tasks to augment the limited textual Figure 2 : The overall architecture of MoKGE. The MoKGE consists of four steps: (S1) the model constructs a sequence-associated subgraph from the commonsense KG; (S2) a relational-GCN iteratively updates the representation of a concept node by aggregating information from its neighboring nodes and edges; (S3) each knowledge expert selects different salient concepts that should be considered during generation; (S4) the model generates the outputs by integrating the token embeddings of the input sequence and the top-ranked entities.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Knowledge Graph for Text Generation", "sec_num": "2.2" }, { "text": "information (Yu et al., 2022c; Dong et al., 2021; Yu et al., 2022b) . Some recent work explored using graph neural networks (GNN) to reason over multihop relational knowledge graph (KG) paths (Zhou et al., 2018; Jiang et al., 2019; Zhang et al., 2020a; Wu et al., 2020; Yu et al., 2022a; Zeng et al., 2021) . For example, Zhou et al. (2018) enriched the context representations of the input sequence with neighbouring concepts on ConceptNet using graph attention. Ji et al. (2020) performed dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense KG. Recently, some work attempted to integrate external commonsense knowledge into generative pretrained language models (Guan et al., 2020; Bhagavatula et al., 2020; Liu et al., 2021) . For example, Guan et al. (2020) conducted post-training on sythetic data constructed from commonsense KG by translating triplets into natural language texts using templates. Yu et al. (2022c) wrote a comprehensive survey for more detailed comparisons of different knowledge graph enhanced NLG methods.", "cite_spans": [ { "start": 12, "end": 30, "text": "(Yu et al., 2022c;", "ref_id": "BIBREF44" }, { "start": 31, "end": 49, "text": "Dong et al., 2021;", "ref_id": "BIBREF12" }, { "start": 50, "end": 67, "text": "Yu et al., 2022b)", "ref_id": "BIBREF43" }, { "start": 192, "end": 211, "text": "(Zhou et al., 2018;", "ref_id": "BIBREF50" }, { "start": 212, "end": 231, "text": "Jiang et al., 2019;", "ref_id": "BIBREF20" }, { "start": 232, "end": 252, "text": "Zhang et al., 2020a;", "ref_id": "BIBREF47" }, { "start": 253, "end": 269, "text": "Wu et al., 2020;", "ref_id": "BIBREF41" }, { "start": 270, "end": 287, "text": "Yu et al., 2022a;", "ref_id": "BIBREF42" }, { "start": 288, "end": 306, "text": "Zeng et al., 2021)", "ref_id": "BIBREF46" }, { "start": 322, "end": 340, "text": "Zhou et al. (2018)", "ref_id": "BIBREF50" }, { "start": 464, "end": 480, "text": "Ji et al. 
(2020)", "ref_id": "BIBREF19" }, { "start": 706, "end": 725, "text": "(Guan et al., 2020;", "ref_id": "BIBREF16" }, { "start": 726, "end": 751, "text": "Bhagavatula et al., 2020;", "ref_id": "BIBREF8" }, { "start": 752, "end": 769, "text": "Liu et al., 2021)", "ref_id": "BIBREF26" }, { "start": 785, "end": 803, "text": "Guan et al. (2020)", "ref_id": "BIBREF16" }, { "start": 946, "end": 963, "text": "Yu et al. (2022c)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Graph for Text Generation", "sec_num": "2.2" }, { "text": "Problem formulation. In this paper, we focus on diversifying the outputs of generative commonsense reasoning (GCR) tasks, e.g. commonsense explanation generation and abductive commonsense reasoning. These tasks require one-to-many generation, i.e., creating a set of reasonable outputs that vary as widely as possible in terms of con-tents, language style and word variability. Formally, given a source input x, our goal is to model a conditional distribution for the target outputs p(y|x) that assigns high values to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "3" }, { "text": "{p(y 1 |x), \u2022 \u2022 \u2022 , p(y K |x)} for K mappings, i.e., {x \u2192 y 1 , \u2022 \u2022 \u2022 , x \u2192 y K }. Mean- while, the outputs {y 1 , \u2022 \u2022 \u2022 , y K }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "3" }, { "text": "are expected to be diverse with each other in terms of contents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "3" }, { "text": "Existing diversity-promoting methods only varied the language styles and failed to perform different knowledge reasoning to generate diverse contents (Cho et al., 2019; Shen et al., 2019; . Here, incorporating commonsense KG is essential for the generative reasoning (GR) tasks because the KG cannot only augment the limited information in the input text, but also provide a rich searching space for knowledge reasoning. Therefore, we propose to employ commonsense KG to play the central role of performing diverse knowledge reasoning, then use different sets of selected concepts to produce diverse outputs.", "cite_spans": [ { "start": 150, "end": 168, "text": "(Cho et al., 2019;", "ref_id": "BIBREF10" }, { "start": 169, "end": 187, "text": "Shen et al., 2019;", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "3" }, { "text": "Model Outline. Our model has two major components: (i) a knowledge graph (KG) enhanced generative reasoning module to reasonably associate relevant concepts and background into the generation process, and (ii) a mixture of expert (MoE) module to diversify the generation process and produce multiple reasonable outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "3" }, { "text": "The KG-enhanced generative reasoning module is illustrated in Figure 2 . It consists of four steps.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "KG-enhanced Generative Reasoning", "sec_num": "3.1" }, { "text": "First, a sequence-associated subgraph is retrieved from the KG given the input sequence ( \u00a73.1.1). Then, a multi-relational graph encoder iteratively updates the representation of each node by aggregating information from its neighboring nodes and edges ( \u00a73. 
1.2) . Next, the model selects salient concepts that should be considered during generation ( \u00a73. 1.3) . Finally, the model generates outputs by integrating the token embeddings of both the input sequence and the top-ranked concepts ( \u00a73.1.4).", "cite_spans": [ { "start": 260, "end": 264, "text": "1.2)", "ref_id": null }, { "start": 358, "end": 362, "text": "1.3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "KG-enhanced Generative Reasoning", "sec_num": "3.1" }, { "text": "To facilitate the reasoning process, we resort to an external commonsense knowledge graph G = {V, E}, where V denotes the concept set and E denotes the edges with relations. Since direct reasoning on the entire graph is intractable, we extract a sequence-associated subgraph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-aware subgraph construction", "sec_num": "3.1.1" }, { "text": "G x = {V x , E x }, where V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-aware subgraph construction", "sec_num": "3.1.1" }, { "text": "x consists of the concepts extracted from the input sequence (denoted as C x ) and their inter-connected concepts within two hops, i.e., Figure 2 , C x = {piano, sport, kind} and V x = {piano, sport, kind, art, music, press, ...}. Next, the generation task is to maximize the conditional probability p(y|x, G x ).", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Sequence-aware subgraph construction", "sec_num": "3.1.1" }, { "text": "V x = {C x \u222a N (C x ) \u222a N (N (C x ))}. For exam- ple, in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-aware subgraph construction", "sec_num": "3.1.1" }, { "text": "To model the relational information in the commonsen KG, we employ the relational graph convolutional network (R-GCN) (Schlichtkrull et al., 2018) which generalizes GCN with relation specific weight matrices. We follow Vashishth et al. (2020) and Ji et al. (2020) to use a non-parametric compositional operation \u03d5(\u2022) to combine the concept node embedding and the relation embedding. Specifically, given the input subgraph G x = {V x , E x } and an R-GCN with L layers, we update the embedding of each node v \u2208 V x at the (l+1)-th layer by aggregating information from the embeddings of its neighbours in N (v) at the l-th layer:", "cite_spans": [ { "start": 118, "end": 146, "text": "(Schlichtkrull et al., 2018)", "ref_id": "BIBREF29" }, { "start": 219, "end": 242, "text": "Vashishth et al. (2020)", "ref_id": "BIBREF35" }, { "start": 247, "end": 263, "text": "Ji et al. (2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-relational graph encoding", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "o l v = 1 |N (v)| (u,v,r)\u2208E W l N \u03d5(h l u , h l r ), (1) h l+1 v = ReLU(o l v + W l S h l v ),", "eq_num": "(2)" } ], "section": "Multi-relational graph encoding", "sec_num": "3.1.2" }, { "text": "where h v and h r are node embedding and relation embedding. We define the compositional operation as \u03d5(h u , h r ) = h u \u2212h r inspired by the TransE (Bordes et al., 2013) . 
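For concreteness, a minimal, unbatched PyTorch-style sketch of the node update in Eqs. (1)-(2) is given below; it assumes the subgraph is provided as a list of (u, v, r) index triples, and all names are illustrative rather than taken from the released code:

import torch
import torch.nn as nn

class CompositionalRGCNLayer(nn.Module):
    # Sketch of the node update in Eqs. (1)-(2); illustrative and unoptimized.
    def __init__(self, dim):
        super().__init__()
        self.w_neighbor = nn.Linear(dim, dim, bias=False)   # W_N^l in Eq. (1)
        self.w_self = nn.Linear(dim, dim, bias=False)        # W_S^l in Eq. (2)

    def forward(self, node_h, rel_h, edges):
        # node_h: [num_nodes, dim] concept embeddings h_v^l
        # rel_h:  [num_relations, dim] relation embeddings h_r^l
        # edges:  list of (u, v, r) index triples from the subgraph G_x
        agg = torch.zeros_like(node_h)
        deg = torch.zeros(node_h.size(0), 1)
        for u, v, r in edges:
            # compositional operation phi(h_u, h_r) = h_u - h_r (TransE-inspired)
            agg[v] = agg[v] + self.w_neighbor(node_h[u] - rel_h[r])
            deg[v] = deg[v] + 1
        o = agg / deg.clamp(min=1)                   # mean over neighbours N(v), Eq. (1)
        return torch.relu(o + self.w_self(node_h))   # h_v^{l+1}, Eq. (2)

In the full model, the relation embeddings are refined at every layer as well, as described next.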
The relation embedding is also updated via another linear transformation:", "cite_spans": [ { "start": 150, "end": 171, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-relational graph encoding", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h l+1 r = W l R h l r .", "eq_num": "(3)" } ], "section": "Multi-relational graph encoding", "sec_num": "3.1.2" }, { "text": "Finally, we obtain concept embedding h L v that encodes the sequence-associated subgraph context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-relational graph encoding", "sec_num": "3.1.2" }, { "text": "Not all concepts in G appear in the outputs. Thus, we design a concept selection module to choose salient concepts that should be considered during generation. For each concept v \u2208 V x , we calculate its probability of being selected by taking a multilayer perception (MLP) on the top of graph encoder:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": "p v = P r[v is selected|x] = MLP(h L v )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": ". To supervise the concept selection process, we use the overlapping concepts between concepts appearing in the output sequence C y and concepts in input sequence associated subgraph G x , i.e., V x \u2229 C y , as a simple proxy for the ground-truth supervision. So, the concept selection loss (here only for one expert, see MoE loss in Eq. 8)is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": "L concept = \u2212 v\u2208Vx\u2229Cy v log p v (4) + v\u2208Vx\u2212Cy (1 \u2212 v) log(1 \u2212 p v ) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": "Finally, the top-N ranked concepts on the subgraph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": "G x (denoted as v 1 , ..., v N )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": "are selected as the additional input to the generation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept selection on knowledge graph", "sec_num": "3.1.3" }, { "text": "We utilize a standard Transformer (Vaswani et al., 2017 ) as our generation model. It takes the concatenation of the sequence x and all the selected concepts v 1 , ..., v N as input and auto-regressively generates the outputs y. We adopt the cross-entropy loss, which can be written as:", "cite_spans": [ { "start": 34, "end": 55, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Concept-aware sequence generation", "sec_num": "3.1.4" }, { "text": "L generation = \u2212 log p(y|x, v 1 , \u2022 \u2022 \u2022 , v N ) (5) = \u2212 |y| t=1 log p(y t |x, v 1 , \u2022 \u2022 \u2022 , v N , y ComVE \u03b1-NLGAvg. # human references3.004.20Avg. 
# meanings (\u21d1), ComVE / \u03b1-NLG: Human references 2.60 / 3.79; Nucleus sampling 2.15 / 3.35; MoKGE (our method) 2.63 / 3.72", "html": null, "num": null, "type_str": "table", "text": "Under human evaluation, the performance of existing diversity promoting methods is still far from that of humans. Our method MoKGE can exceed the human performance on the ComVE task." }, "TABREF1": { "content": "
[Flattened diagram residue (Figure 2). Recoverable content: source concepts from the input are located in a KG subgraph (S1: locate subKG), encoded with a GNN (S2: GNN Encoder), each expert then selects different top-ranked concepts (S3: Concept Selection), e.g., Expert 1: sport, music, press; Expert 2: sport, art, form, kind, press, and a Transformer generates the corresponding output (S4), e.g., 'You can produce music when pressing ...' and 'Piano is a kind of art form.']", "html": null, "num": null, "type_str": "table", "text": "" }, "TABREF2": { "content": "
ComVE (upper part). Metric groups: Concept diversity = {#Uni.C, Jaccard}; Pairwise diversity = {SB-3, SB-4}; Corpus diversity = {D-2, E-4}; Quality = {B-4, R-L}.
Methods | Variant | #Uni.C (\u21d1) | Jaccard (\u21d3) | SB-3 (\u21d3) | SB-4 (\u21d3) | D-2 (\u21d1) | E-4 (\u21d1) | B-4 (\u21d1) | R-L (\u21d1)
CVAE | z = 16 | 4.56\u00b10.1 | 64.74\u00b10.3 | 66.66\u00b10.4 | 62.83\u00b10.5 | 33.75\u00b10.5 | 9.13\u00b10.1 | 16.67\u00b10.3 | 41.52\u00b10.3
CVAE | z = 32 | 5.03\u00b10.3 | 47.27\u00b10.8 | 59.20\u00b11.3 | 54.30\u00b11.5 | 32.86\u00b11.1 | 9.07\u00b10.5 | 17.04\u00b10.2 | 42.17\u00b10.5
CVAE | z = 64 | 4.67\u00b10.0 | 54.69\u00b10.8 | 55.02\u00b10.8 | 49.58\u00b11.0 | 32.55\u00b10.5 | 9.07\u00b10.2 | 15.54\u00b10.4 | 41.03\u00b10.3
Truncated sampling | k = 5 | 4.37\u00b10.0 | 71.38\u00b10.7 | 74.20\u00b10.2 | 71.38\u00b10.2 | 31.32\u00b10.4 | 9.18\u00b10.1 | 16.44\u00b10.2 | 40.99\u00b10.2
Truncated sampling | k = 20 | 4.60\u00b10.0 | 63.42\u00b11.2 | 64.47\u00b12.1 | 60.33\u00b12.4 | 33.69\u00b10.6 | 9.26\u00b10.1 | 17.70\u00b10.2 | 42.58\u00b10.5
Truncated sampling | k = 50 | 4.68\u00b10.1 | 60.98\u00b11.8 | 61.39\u00b12.4 | 56.93\u00b12.8 | 34.80\u00b10.3 | 9.29\u00b10.1 | 17.48\u00b10.4 | 42.44\u00b10.5
Nucleus sampling | p = .5 | 4.19\u00b10.1 | 72.78\u00b11.0 | 77.66\u00b10.8 | 75.14\u00b10.9 | 28.36\u00b10.6 | 9.05\u00b10.3 | 16.09\u00b10.6 | 40.95\u00b10.5
Nucleus sampling | p = .75 | 4.41\u00b10.1 | 67.01\u00b11.7 | 71.41\u00b12.5 | 68.22\u00b12.9 | 31.21\u00b10.3 | 9.16\u00b10.1 | 17.07\u00b10.5 | 41.88\u00b10.7
Nucleus sampling | p = .95 | 4.70\u00b10.1 | 61.92\u00b12.6 | 63.43\u00b13.4 | 59.23\u00b13.8 | 34.17\u00b10.3 | 9.27\u00b10.2 | 17.68\u00b10.4 | 42.60\u00b10.8
MoE | embed | 5.41\u00b10.0 | 47.55\u00b10.5 | 33.64\u00b10.2 | 28.21\u00b10.1 | 46.57\u00b10.2 | 9.61\u00b10.1 | 18.66\u00b10.5 | 43.72\u00b10.2
MoE | prompt | 5.45\u00b10.2 | 47.54\u00b10.4 | 33.42\u00b10.3 | 28.40\u00b10.3 | 46.93\u00b10.2 | 9.60\u00b10.2 | 18.91\u00b10.4 | 43.71\u00b10.5
MoKGE (ours) | embed | 5.35\u00b10.2 | 48.18\u00b10.5 | 35.36\u00b11.1 | 29.71\u00b11.2 | 47.51\u00b10.4 | 9.63\u00b10.1 | 19.13\u00b10.1 | 43.70\u00b10.1
MoKGE (ours) | prompt | 5.48\u00b10.2 | 44.37\u00b10.4 | 30.93\u00b10.9 | 25.30\u00b11.1 | 48.44\u00b10.2 | 9.67\u00b10.2 | 19.01\u00b10.1 | 43.83\u00b10.3
Human | - | 6.27\u00b10.0 | 26.49\u00b10.0 | 12.36\u00b10.0 | 8.01\u00b10.0 | 63.02\u00b10.0 | 9.55\u00b10.0 | 100.0\u00b10.0 | 100.0\u00b10.0
", "html": null, "num": null, "type_str": "table", "text": "Diversity and quality evaluation on the ComVE (upper part) and \u03b1-NLG (lower part) datasets. Each model is required to generate three outputs. All experiments are run three times with different random seeds, and the average results on the test set are reported as the final performance, with standard deviations given after the \u00b1 sign." }, "TABREF4": { "content": "
ComVE (diversity: SB-4 \u21d3, D-2 \u21d1, E-4 \u21d1; quality: B-4 \u21d1, R-L \u21d1)
Methods | SB-4 (\u21d3) | D-2 (\u21d1) | E-4 (\u21d1) | B-4 (\u21d1) | R-L (\u21d1)
MoKGE | 25.30\u00b11.1 | 48.44\u00b10.2 | 9.67\u00b10.2 | 19.01\u00b10.1 | 43.83\u00b10.3
\u22a2 w/o KG | 28.40\u00b10.3 | 46.93\u00b10.2 | 9.60\u00b10.2 | 18.91\u00b10.4 | 43.71\u00b10.5
\u22a2 w/o MoE | 74.15\u00b10.2 | 31.92\u00b10.1 | 9.14\u00b10.0 | 15.87\u00b10.1 | 40.24\u00b10.2
\u03b1-NLG (diversity: SB-4 \u21d3, D-2 \u21d1, E-4 \u21d1; quality: B-4 \u21d1, R-L \u21d1)
Methods | SB-4 (\u21d3) | D-2 (\u21d1) | E-4 (\u21d1) | B-4 (\u21d1) | R-L (\u21d1)
MoKGE | 22.43\u00b12.4 | 38.01\u00b10.6 | 10.88\u00b10.2 | 14.17\u00b10.2 | 38.82\u00b10.7
\u22a2 w/o KG | 23.18\u00b11.9 | 36.71\u00b10.1 | 10.85\u00b10.0 | 14.26\u00b10.3 | 38.78\u00b10.4
\u22a2 w/o MoE | 77.34\u00b10.2 | 19.19\u00b10.1 | 10.10\u00b10.0 | 12.84\u00b10.1 | 37.52\u00b10.2
", "html": null, "num": null, "type_str": "table", "text": "Ablation studies. When not using MoE (line \u22a2 w/o MoE), we set the beam size to three to generate three outputs." }, "TABREF5": { "content": "
Methods | ComVE Diversity | ComVE Quality | ComVE Flu. & Gra. | \u03b1-NLG Diversity | \u03b1-NLG Quality | \u03b1-NLG Flu. & Gra.
Truncated samp. | 2.15\u00b10.76 | 2.22\u00b11.01 | 3.47\u00b10.75 | 2.31\u00b10.76 | 2.63\u00b10.77 | 3.89\u00b10.36
Nucleus samp. | 2.03\u00b10.73 | 2.29\u00b11.03 | 3.52\u00b10.70 | 2.39\u00b10.73 | 2.67\u00b10.72 | 3.91\u00b10.28
MoKGE (ours) | 2.63\u00b10.51* | 2.10\u00b10.99 | 3.46\u00b10.81 | 2.66\u00b10.51* | 2.57\u00b10.71 | 3.87\u00b10.34
Human Ref. | 2.60\u00b10.59 | 3.00 | 4.00 | 2.71\u00b10.57 | 3.00 | 4.00
", "html": null, "num": null, "type_str": "table", "text": "Human evaluations by independent scoring based on diversity, quality, fluency and grammar. In addition, * indicates p-value < 0.05 under paired t-test between MoKGE and baseline methods." }, "TABREF6": { "content": "
Against methods | ComVE Win (%) | ComVE Tie (%) | ComVE Lose (%) | \u03b1-NLG Win (%) | \u03b1-NLG Tie (%) | \u03b1-NLG Lose (%)
v.s. Truncated samp. | 47.85\u00b15.94 | 37.09\u00b14.56 | 15.06\u00b13.31 | 45.35\u00b15.06 | 43.19\u00b12.78 | 11.46\u00b12.31
v.s. Nucleus samp. | 54.30\u00b14.62 | 36.02\u00b12.74 | 9.68\u00b13.48 | 41.53\u00b11.55 | 46.99\u00b12.04 | 11.48\u00b12.36
", "html": null, "num": null, "type_str": "table", "text": "Human evaluations by pairwise comparison: MoKGE v.s. two baseline methods based on diversity." } } } }