{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:25:00.347909Z"
},
"title": "Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Notre Dame \u2661 University of Washington",
"location": {}
},
"email": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Notre Dame \u2661 University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Notre Dame \u2661 University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Zhihan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Notre Dame \u2661 University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tong",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Notre Dame \u2661 University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Notre Dame \u2661 University of Washington",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Generative commonsense reasoning (GCR) in natural language is to reason about the commonsense while generating coherent text. Recent years have seen a surge of interest in improving the generation quality of commonsense reasoning tasks. Nevertheless, these approaches have seldom investigated diversity in the GCR tasks, which aims to generate alternative explanations for a real-world situation or predict all possible outcomes. Diversifying GCR is challenging as it expects to generate multiple outputs that are not only semantically different but also grounded in commonsense knowledge. In this paper, we propose MoKGE, a novel method that diversifies the generative reasoning by a mixture of expert (MoE) strategy on commonsense knowledge graphs (KG). A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs. Empirical experiments demonstrated that MoKGE can significantly improve the diversity while achieving on par performance on accuracy on two GCR benchmarks, based on both automatic and human evaluations.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Generative commonsense reasoning (GCR) in natural language is to reason about the commonsense while generating coherent text. Recent years have seen a surge of interest in improving the generation quality of commonsense reasoning tasks. Nevertheless, these approaches have seldom investigated diversity in the GCR tasks, which aims to generate alternative explanations for a real-world situation or predict all possible outcomes. Diversifying GCR is challenging as it expects to generate multiple outputs that are not only semantically different but also grounded in commonsense knowledge. In this paper, we propose MoKGE, a novel method that diversifies the generative reasoning by a mixture of expert (MoE) strategy on commonsense knowledge graphs (KG). A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs. Empirical experiments demonstrated that MoKGE can significantly improve the diversity while achieving on par performance on accuracy on two GCR benchmarks, based on both automatic and human evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An important desideratum of natural language generation (NLG) is to produce outputs that are not only correct but also diverse (Tevet and Berant, 2021) . The term \"diversity\" in NLG is defined as the ability of a generative model to create a set of possible outputs that are each valid given the input and vary as widely as possible in terms of content, language style, and word variability (Gupta et al., 2018) . This research problem is also referred as one-to-many generation (Shen et al., 2019; Cho et al., 2019; Shen et al., 2022) .",
"cite_spans": [
{
"start": 127,
"end": 151,
"text": "(Tevet and Berant, 2021)",
"ref_id": "BIBREF34"
},
{
"start": 391,
"end": 411,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 479,
"end": 498,
"text": "(Shen et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 499,
"end": 516,
"text": "Cho et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 517,
"end": 535,
"text": "Shen et al., 2022)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Diversity in NLG has been extensively studied for various tasks in the past few years, such as machine translation (Shen et al., 2019) and paraphrase generation (Gupta et al., 2018) . In these tasks, output spaces are constrained by input context, i.e., the contents of multiple outputs should be similar, and globally, under the same topic. However, many NLG tasks, e.g., generative commonsense reasoning, pose unique challenges for generating multiple reasonable outputs that are semantically different. Figure 1 shows an example in the commonsense explanation generation (ComVE) task. The dataset has collected explanations to counterfactual statements for sense-making from three annotators (Wang et al., 2020) . From the annotations, we observed that different annotators gave explanations to the unreasonable statement from different perspectives to make them diverse in terms of content, e.g., wrong effect and inappropriate usage. (Codes of our model and baselines are available at https://github.com/DM2-ND/MoKGE.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1 : (1) You can produce music when pressing keys on the piano, so it is an instrument. (2) Piano is a musical instrument used in songs to produce different musical tones. (3) Piano is a kind of art form. (ConceptNet relations: [1] UsedFor, [2] PartOf, [3] IsA, [4] RelatedTo.)",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to create diversity, existing methods attempted to produce uncertainty by introducing random noise into a latent variable (Gupta et al., 2018) or sampling next token widely from the vo- cabulary . However, these methods were not able to explicitly control varying semantics units and produce outputs of diverse content. Meanwhile, the input text alone contains too limited knowledge to support diverse reasoning and produce multiple reasonable outputs (Yu et al., 2022c) . As an example, Table 1 shows the human evaluation results on two GCR tasks. While human annotators were able to produce 2.60 different yet reasonable explanations on the ComVE dataset, one SoTA diversity-promoting method (i.e., nucleus sampling ) could produce only 2.15 reasonable explanations.",
"cite_spans": [
{
"start": 131,
"end": 151,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 461,
"end": 479,
"text": "(Yu et al., 2022c)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To improve the diversity in outputs for GCR tasks, we investigated the ComVE task and found that 75% of the concepts (nouns and verbs) in human annotations were among 2-hop neighbors of the concepts contained in the input sequence on the commonsense KG ConceptNet 1 . Therefore, to produce diverse GCR, our idea is enabling NLG models to reason from different perspectives of knowledge on commonsense KG and use them to generate diverse outputs like the human annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, we present a novel Mixture of Knowledge Graph Expert (MoKGE) method for diverse generative commonsense reasoning on KG. MoKGE contains two major components: (i) a knowledge graph (KG) enhanced generative reasoning module to reasonably associate relevant concepts into the generation process, and (ii) a mixture of expert (MoE) module to produce diverse reasonable outputs. Specifically, the generative reasoning module performs compositional operations on KG to obtain structure-aware representations of concepts and relations. Then, each expert uses these representations to seek different yet relevant sets of concepts and sends them into a standard Transformer model to generate the corresponding output. To encourage different experts to specialize in different reasoning abilities, we employ the stochastic hard-EM algorithm by assigning full responsibility of the largest joint probability to each expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conducted experiments on two GCR benchmarks, i.e., commonsense explanation generation and abductive commonsense reasoning. Empirical experiments demonstrated that our proposed MoKGE can outperform existing diversitypromoting generation methods in diversity, while achieving on par performance in quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, this is the first work to boost diversity in NLG by diversifying knowledge reasoning on commonsense KG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generating multiple valid outputs given a source sequence has a wide range of applications, such as machine translation (Shen et al., 2019) , paraphrase generation (Gupta et al., 2018) , question generation (Cho et al., 2019) , dialogue system (Dou et al., 2021) , and story generation . For example, in machine translation, there are often many plausible and semantically equivalent translations due to information asymmetry between different languages (Lachaux et al., 2020) .",
"cite_spans": [
{
"start": 120,
"end": 139,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 164,
"end": 184,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 207,
"end": 225,
"text": "(Cho et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 244,
"end": 262,
"text": "(Dou et al., 2021)",
"ref_id": "BIBREF13"
},
{
"start": 454,
"end": 476,
"text": "(Lachaux et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work 2.1 Diversity Promoting Text Generation",
"sec_num": "2"
},
{
"text": "Methods of improving diversity in NLG have been explored from various perspectives. Sampling-based decoding is one of the most effective solutions to improve diversity. For example, nucleus sampling samples next tokens from the dynamic nucleus of tokens containing the vast majority of the probability mass, instead of decoding text by maximizing the likelihood. Another line of work focused on introducing random noise (Gupta et al., 2018) or changing latent variables (Lachaux et al., 2020) to produce uncertainty. In addition, Shen et al. (2019) adopted a mixture of experts to diversify machine translation, where a minimum-loss predictor is assigned to each source input. Shi et al. (2018) employed an inverse reinforcement learning approach for unconditional diverse text generation.",
"cite_spans": [
{
"start": 420,
"end": 440,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 470,
"end": 492,
"text": "(Lachaux et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 677,
"end": 694,
"text": "Shi et al. (2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work 2.1 Diversity Promoting Text Generation",
"sec_num": "2"
},
{
"text": "However, no existing work considered performing diverse knowledge reasoning to generate multiple reasonable outputs of different contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work 2.1 Diversity Promoting Text Generation",
"sec_num": "2"
},
{
"text": "Incorporating external knowledge is essential for many NLG tasks to augment the limited textual Figure 2 : The overall architecture of MoKGE. The MoKGE consists of four steps: (S1) the model constructs a sequence-associated subgraph from the commonsense KG; (S2) a relational-GCN iteratively updates the representation of a concept node by aggregating information from its neighboring nodes and edges; (S3) each knowledge expert selects different salient concepts that should be considered during generation; (S4) the model generates the outputs by integrating the token embeddings of the input sequence and the top-ranked entities.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 104,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Knowledge Graph for Text Generation",
"sec_num": "2.2"
},
{
"text": "information (Yu et al., 2022c; Dong et al., 2021; Yu et al., 2022b) . Some recent work explored using graph neural networks (GNN) to reason over multihop relational knowledge graph (KG) paths (Zhou et al., 2018; Jiang et al., 2019; Zhang et al., 2020a; Wu et al., 2020; Yu et al., 2022a; Zeng et al., 2021) . For example, Zhou et al. (2018) enriched the context representations of the input sequence with neighbouring concepts on ConceptNet using graph attention. Ji et al. (2020) performed dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense KG. Recently, some work attempted to integrate external commonsense knowledge into generative pretrained language models (Guan et al., 2020; Bhagavatula et al., 2020; Liu et al., 2021) . For example, Guan et al. (2020) conducted post-training on sythetic data constructed from commonsense KG by translating triplets into natural language texts using templates. Yu et al. (2022c) wrote a comprehensive survey for more detailed comparisons of different knowledge graph enhanced NLG methods.",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "(Yu et al., 2022c;",
"ref_id": "BIBREF44"
},
{
"start": 31,
"end": 49,
"text": "Dong et al., 2021;",
"ref_id": "BIBREF12"
},
{
"start": 50,
"end": 67,
"text": "Yu et al., 2022b)",
"ref_id": "BIBREF43"
},
{
"start": 192,
"end": 211,
"text": "(Zhou et al., 2018;",
"ref_id": "BIBREF50"
},
{
"start": 212,
"end": 231,
"text": "Jiang et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 232,
"end": 252,
"text": "Zhang et al., 2020a;",
"ref_id": "BIBREF47"
},
{
"start": 253,
"end": 269,
"text": "Wu et al., 2020;",
"ref_id": "BIBREF41"
},
{
"start": 270,
"end": 287,
"text": "Yu et al., 2022a;",
"ref_id": "BIBREF42"
},
{
"start": 288,
"end": 306,
"text": "Zeng et al., 2021)",
"ref_id": "BIBREF46"
},
{
"start": 322,
"end": 340,
"text": "Zhou et al. (2018)",
"ref_id": "BIBREF50"
},
{
"start": 464,
"end": 480,
"text": "Ji et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 706,
"end": 725,
"text": "(Guan et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 726,
"end": 751,
"text": "Bhagavatula et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 752,
"end": 769,
"text": "Liu et al., 2021)",
"ref_id": "BIBREF26"
},
{
"start": 785,
"end": 803,
"text": "Guan et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 946,
"end": 963,
"text": "Yu et al. (2022c)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph for Text Generation",
"sec_num": "2.2"
},
{
"text": "Problem formulation. In this paper, we focus on diversifying the outputs of generative commonsense reasoning (GCR) tasks, e.g. commonsense explanation generation and abductive commonsense reasoning. These tasks require one-to-many generation, i.e., creating a set of reasonable outputs that vary as widely as possible in terms of con-tents, language style and word variability. Formally, given a source input x, our goal is to model a conditional distribution for the target outputs p(y|x) that assigns high values to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "{p(y 1 |x), \u2022 \u2022 \u2022 , p(y K |x)} for K mappings, i.e., {x \u2192 y 1 , \u2022 \u2022 \u2022 , x \u2192 y K }. Mean- while, the outputs {y 1 , \u2022 \u2022 \u2022 , y K }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "are expected to be diverse with each other in terms of contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "Existing diversity-promoting methods only varied the language styles and failed to perform different knowledge reasoning to generate diverse contents (Cho et al., 2019; Shen et al., 2019; . Here, incorporating commonsense KG is essential for the generative reasoning (GR) tasks because the KG cannot only augment the limited information in the input text, but also provide a rich searching space for knowledge reasoning. Therefore, we propose to employ commonsense KG to play the central role of performing diverse knowledge reasoning, then use different sets of selected concepts to produce diverse outputs.",
"cite_spans": [
{
"start": 150,
"end": 168,
"text": "(Cho et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 169,
"end": 187,
"text": "Shen et al., 2019;",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "Model Outline. Our model has two major components: (i) a knowledge graph (KG) enhanced generative reasoning module to reasonably associate relevant concepts and background into the generation process, and (ii) a mixture of expert (MoE) module to diversify the generation process and produce multiple reasonable outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "The KG-enhanced generative reasoning module is illustrated in Figure 2 . It consists of four steps.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "KG-enhanced Generative Reasoning",
"sec_num": "3.1"
},
{
"text": "First, a sequence-associated subgraph is retrieved from the KG given the input sequence ( \u00a73.1.1). Then, a multi-relational graph encoder iteratively updates the representation of each node by aggregating information from its neighboring nodes and edges ( \u00a73. 1.2) . Next, the model selects salient concepts that should be considered during generation ( \u00a73. 1.3) . Finally, the model generates outputs by integrating the token embeddings of both the input sequence and the top-ranked concepts ( \u00a73.1.4).",
"cite_spans": [
{
"start": 260,
"end": 264,
"text": "1.2)",
"ref_id": null
},
{
"start": 358,
"end": 362,
"text": "1.3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KG-enhanced Generative Reasoning",
"sec_num": "3.1"
},
{
"text": "To facilitate the reasoning process, we resort to an external commonsense knowledge graph G = {V, E}, where V denotes the concept set and E denotes the edges with relations. Since direct reasoning on the entire graph is intractable, we extract a sequence-associated subgraph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-aware subgraph construction",
"sec_num": "3.1.1"
},
{
"text": "G x = {V x , E x }, where V x consists of the concepts extracted from the input sequence (denoted as C x ) and their inter-connected concepts within two hops, i.e., V x = {C x \u222a N (C x ) \u222a N (N (C x ))}. For example, in Figure 2 , C x = {piano, sport, kind} and V x = {piano, sport, kind, art, music, press, ...}. Next, the generation task is to maximize the conditional probability p(y|x, G x ).",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 228,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sequence-aware subgraph construction",
"sec_num": "3.1.1"
},
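To make the construction of V_x concrete, the following is a minimal Python sketch (not the released MoKGE code); it assumes the commonsense KG is available as an adjacency dict mapping each concept to its one-hop neighbor set, and that the grounded input concepts C_x have already been extracted from the sequence.

```python
def two_hop_subgraph_nodes(input_concepts, neighbors):
    """Return V_x = C_x ∪ N(C_x) ∪ N(N(C_x)) given an adjacency dict."""
    one_hop = set()
    for c in input_concepts:
        one_hop |= neighbors.get(c, set())
    two_hop = set()
    for c in one_hop:
        two_hop |= neighbors.get(c, set())
    return set(input_concepts) | one_hop | two_hop

# Toy example (a real run would load the ConceptNet adjacency instead):
neighbors = {"piano": {"music", "instrument"}, "music": {"art", "song"}}
print(two_hop_subgraph_nodes({"piano"}, neighbors))
# {'piano', 'music', 'instrument', 'art', 'song'}
```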
{
"text": "To model the relational information in the commonsen KG, we employ the relational graph convolutional network (R-GCN) (Schlichtkrull et al., 2018) which generalizes GCN with relation specific weight matrices. We follow Vashishth et al. (2020) and Ji et al. (2020) to use a non-parametric compositional operation \u03d5(\u2022) to combine the concept node embedding and the relation embedding. Specifically, given the input subgraph G x = {V x , E x } and an R-GCN with L layers, we update the embedding of each node v \u2208 V x at the (l+1)-th layer by aggregating information from the embeddings of its neighbours in N (v) at the l-th layer:",
"cite_spans": [
{
"start": 118,
"end": 146,
"text": "(Schlichtkrull et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 219,
"end": 242,
"text": "Vashishth et al. (2020)",
"ref_id": "BIBREF35"
},
{
"start": 247,
"end": 263,
"text": "Ji et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-relational graph encoding",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o l v = 1 |N (v)| (u,v,r)\u2208E W l N \u03d5(h l u , h l r ), (1) h l+1 v = ReLU(o l v + W l S h l v ),",
"eq_num": "(2)"
}
],
"section": "Multi-relational graph encoding",
"sec_num": "3.1.2"
},
{
"text": "where h v and h r are node embedding and relation embedding. We define the compositional operation as \u03d5(h u , h r ) = h u \u2212h r inspired by the TransE (Bordes et al., 2013) . The relation embedding is also updated via another linear transformation:",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-relational graph encoding",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h l+1 r = W l R h l r .",
"eq_num": "(3)"
}
],
"section": "Multi-relational graph encoding",
"sec_num": "3.1.2"
},
{
"text": "Finally, we obtain concept embedding h L v that encodes the sequence-associated subgraph context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-relational graph encoding",
"sec_num": "3.1.2"
},
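A minimal PyTorch sketch of one layer of Eq. (1)-(3) with the TransE-style composition phi(h_u, h_r) = h_u - h_r; the module name, tensor layout, and mean aggregation details are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CompRGCNLayer(nn.Module):
    """One compositional R-GCN layer: Eq. (1)-(3) with phi(h_u, h_r) = h_u - h_r."""
    def __init__(self, dim):
        super().__init__()
        self.w_neigh = nn.Linear(dim, dim, bias=False)  # W_N
        self.w_self = nn.Linear(dim, dim, bias=False)   # W_S
        self.w_rel = nn.Linear(dim, dim, bias=False)    # W_R

    def forward(self, h_node, h_rel, edges):
        # h_node: [num_nodes, dim], h_rel: [num_relations, dim]
        # edges: LongTensor [num_edges, 3] holding (u, v, r) index triples
        u, v, r = edges[:, 0], edges[:, 1], edges[:, 2]
        msg = self.w_neigh(h_node[u] - h_rel[r])              # W_N * phi(h_u, h_r)
        agg = torch.zeros_like(h_node).index_add_(0, v, msg)  # sum over neighbors of v
        deg = torch.zeros(h_node.size(0), 1, device=h_node.device)
        deg.index_add_(0, v, torch.ones(edges.size(0), 1, device=h_node.device))
        h_node_new = torch.relu(agg / deg.clamp(min=1) + self.w_self(h_node))  # Eq. (2)
        h_rel_new = self.w_rel(h_rel)                         # Eq. (3)
        return h_node_new, h_rel_new
```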
{
"text": "Not all concepts in G appear in the outputs. Thus, we design a concept selection module to choose salient concepts that should be considered during generation. For each concept v \u2208 V x , we calculate its probability of being selected by taking a multilayer perception (MLP) on the top of graph encoder:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
{
"text": "p v = P r[v is selected|x] = MLP(h L v )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
{
"text": ". To supervise the concept selection process, we use the overlapping concepts between concepts appearing in the output sequence C y and concepts in input sequence associated subgraph G x , i.e., V x \u2229 C y , as a simple proxy for the ground-truth supervision. So, the concept selection loss (here only for one expert, see MoE loss in Eq. 8)is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
{
"text": "L concept = \u2212 v\u2208Vx\u2229Cy v log p v (4) + v\u2208Vx\u2212Cy (1 \u2212 v) log(1 \u2212 p v ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
{
"text": "Finally, the top-N ranked concepts on the subgraph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
{
"text": "G x (denoted as v 1 , ..., v N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
{
"text": "are selected as the additional input to the generation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept selection on knowledge graph",
"sec_num": "3.1.3"
},
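The scoring above and the loss in Eq. (4) reduce to a node-wise binary cross-entropy over the subgraph. The sketch below is an illustration under that reading; variable and class names are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ConceptSelector(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h_nodes):               # h_nodes: [|V_x|, dim] from the R-GCN
        return torch.sigmoid(self.mlp(h_nodes)).squeeze(-1)   # p_v for every node

def concept_selection_loss(p, labels):
    # labels[v] = 1 if concept v also appears in the output (v in V_x ∩ C_y), else 0
    return nn.functional.binary_cross_entropy(p, labels.float())

# The top-N scored concepts are then appended to the generator input:
# top_n_idx = p.topk(N).indices
```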
{
"text": "We utilize a standard Transformer (Vaswani et al., 2017 ) as our generation model. It takes the concatenation of the sequence x and all the selected concepts v 1 , ..., v N as input and auto-regressively generates the outputs y. We adopt the cross-entropy loss, which can be written as:",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concept-aware sequence generation",
"sec_num": "3.1.4"
},
{
"text": "L generation = \u2212 log p(y|x, v 1 , \u2022 \u2022 \u2022 , v N ) (5) = \u2212 |y| t=1 log p(y t |x, v 1 , \u2022 \u2022 \u2022 , v N , y <t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept-aware sequence generation",
"sec_num": "3.1.4"
},
{
"text": "Note that since the selected concepts do not have a rigorous order, we only apply positional encodings (used in Transformer) to the input sequence x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept-aware sequence generation",
"sec_num": "3.1.4"
},
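A small sketch of how the encoder input can be assembled so that positional encodings are added only to the tokens of x, while the selected concepts contribute token embeddings only; this is an illustrative reconstruction of the description above, not the released code.

```python
import torch

def build_encoder_input(tok_emb, pos_emb, x_ids, concept_ids):
    # x_ids: [batch, len_x]; concept_ids: [batch, N] selected concept tokens
    positions = torch.arange(x_ids.size(1), device=x_ids.device)
    x_part = tok_emb(x_ids) + pos_emb(positions)      # positional encoding only for x
    concept_part = tok_emb(concept_ids)               # concepts are order-free: no positions
    return torch.cat([x_part, concept_part], dim=1)   # fed to the Transformer encoder
```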
{
"text": "We jointly optimizes the following loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall objective",
"sec_num": "3.1.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = L generation + \u03bb \u2022 L concept .",
"eq_num": "(6)"
}
],
"section": "Overall objective",
"sec_num": "3.1.5"
},
{
"text": "where \u03bb is a hyperparameter to control the importance of different tasks 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall objective",
"sec_num": "3.1.5"
},
{
"text": "To empower the generation model to produce multiple reasonable outputs, we employ a mixture of expert (MoE) module to model uncertainty and generate diverse outputs. While the MoE models have primarily been explored as a means of increasing model capacity, they are also being used to boost diverse generation process (Shen et al., 2019; Cho et al., 2019) . Formally, the MoE module introduces a multinomial latent variable z \u2208 {1, \u2022 \u2022 \u2022 , K}, and decomposes the marginal likelihood as follows:",
"cite_spans": [
{
"start": 318,
"end": 337,
"text": "(Shen et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 338,
"end": 355,
"text": "Cho et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "p(y|x, G x ) = K z=1 p(z|x, G x )p(y|z, x, G x ). (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "Training. We minimize the loss function (in Eq.(6)) using the MoE decomposition,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "\u2207 log p(y|x, G x ) (8) = K z=1 p(z|x, y, G x ) \u2022 \u2207 log p(y, z|x, G x ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "and train the model with the EM algorithm (Dempster et al., 1977) . Ideally, we would like different experts to specialize in different reasoning abilities so that they can generate diverse outputs. The specialization of experts means that given the input, only one element in {p(y, z|x, G x )} K z=1 should dominate in value (Shen et al., 2019) . To encourage this, we employ a hard mixture model to maximize max z p(y, z|x, G x ) by assigning full responsibility to the expert with the largest joint probability. Training proceeds via hard-EM can be written as:",
"cite_spans": [
{
"start": 42,
"end": 65,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF11"
},
{
"start": 326,
"end": 345,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "\u2022 E-step: estimate the responsibilities of each",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "expert r z \u2190 1[z = arg max z p(y, z|x, G x )]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "using the current parameters \u03b8; \u2022 M-step: update the parameters with gradients of the chosen expert (r z = 1) from E-step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
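One hard-EM training step as described above can be sketched as follows. Here `model(batch, expert_id=z)` is an assumed interface returning the loss of Eq. (6) for expert z (a proxy for -log p(y, z | x, G_x)); for readability the expert is chosen per mini-batch, whereas the assignment can equally be done per example.

```python
import torch

def hard_em_step(model, batch, num_experts, optimizer):
    # E-step: pick the expert with the largest joint probability,
    # i.e., the smallest loss -log p(y, z | x, G_x).
    with torch.no_grad():
        losses = torch.stack([model(batch, expert_id=z) for z in range(num_experts)])
        best = int(losses.argmin())
    # M-step: update the parameters with gradients of the chosen expert only.
    optimizer.zero_grad()
    loss = model(batch, expert_id=best)
    loss.backward()
    optimizer.step()
    return best, loss.item()
```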
{
"text": "Expert parameterization. Independently parameterizing each expert may exacerbate overfitting since the number of parameters increases linearly with the number of experts (Shen et al., 2019) . We follow the parameter sharing schema in Cho et al. (2019) ; Shen et al. (2019) to avoid this issue. This only requires a negligible increase in parameters over the baseline model that does not uses MoE. In our experiments, we compared adding a unique expert embedding to each input token with adding an expert prefix token before the input text sequence, where they achieved very similar performance.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 234,
"end": 251,
"text": "Cho et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 254,
"end": 272,
"text": "Shen et al. (2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
{
"text": "Producing K outputs during inference. In order to generate K different outputs on test set, we follow Shen et al. (2019) to enumerate all latent variables z and then greedily decoding each token by\u0177 t = arg max p(y|\u0177 1:t\u22121 , z, x). In other words, we ask each expert to seek different sets of concepts on the knowledge graph, and use the selected concepts to generate K different outputs. Notably, this decoding procedure is efficient and easily parallelizable. Furthermore, to make fair comparisons with sampling-based methods, we use greedy decoding without any sampling strategy.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "Shen et al. (2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MoE-Promoted Diverse Generation",
"sec_num": "3.2"
},
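Decoding the K outputs then amounts to looping over the experts and decoding greedily for each one. The sketch below assumes the prefix-token parameterization from the previous paragraph (a special token such as <expert_z> is an illustrative assumption) and a Huggingface-style seq2seq model:

```python
def generate_k_outputs(model, tokenizer, input_text, num_experts, max_len=32):
    outputs = []
    for z in range(num_experts):
        prompt = f"<expert_{z}> {input_text}"      # expert identity as a prefix token
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_length=max_len,
                             num_beams=1, do_sample=False)   # greedy, no sampling
        outputs.append(tokenizer.decode(out[0], skip_special_tokens=True))
    return outputs   # K hypotheses, one per expert
```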
{
"text": "Commonsense explanation generation. It aims to generate an explanation given a counterfactual statement for sense-making (Wang et al., 2019) . We use the benchmark dataset ComVE from SemEval-2020 Task 4 (Wang et al., 2020) . The dataset contains 10,000 / 997 / 1,000 examples for training / development / test sets, respectively. The average input/output length is 7.7 / 9.0 words. All examples in the dataset have 3 references. Abductive commonsense reasoning. It is also referred as \u03b1-NLG. It is the task of generating a valid hypothesis about the likely explanations to partially observable past and future. We use the ART benchmark dataset (Bhagavatula et al., 2020) that consists of 50,481 / 1,779 / 3,560 examples for training / development / test sets. The average input/output length is 17.4 / 10.8 words. Each example in the ART dataset has 1 to 5 references.",
"cite_spans": [
{
"start": 121,
"end": 140,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 203,
"end": 222,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Tasks and Datasets",
"sec_num": "4"
},
{
"text": "We note that as we targeted at the one-to-many generation problem, we excluded those baseline methods mentioned in the related work that cannot produce multiple outputs, e.g., Zhang To the best of our knowledge, we are the first work to explore diverse knowledge reasoning on commonsense KG to generate multiple diverse output sequences. Therefore, we only compared our MoKGE with existing diversity-promoting baselines without using knowledge graph. VAE-based method. The variational auto-encoder (VAE) (Kingma and Welling, 2014) is a deep generative latent variable model. VAE-based methods produce diverse outputs by sampling different latent variables from an approximate posterior distribution. CVAE-SVG (SVG is short for sentence variant generation) (Gupta et al., 2018 ) is a conditional VAE model that can produce multiple outputs based an original sentence as input. MoE-based method. Mixture models provide an alternative approach to generate diverse outputs by sampling different mixture components. We compare against two mixture of experts (MoE) implementations by Shen et al. (2019) and Cho et al. (2019) . We refer them as MoE-prompt (Shen et al., 2019) and MoE-embed (Cho et al., 2019) . Sampling-based method. Sampling methods create diverse outputs by sampling next token widely from the vocabulary. We compare against two sampling algorithms for decoding, including truncated sampling (Fan et al., 2018) and nucleus sampling . Truncated sampling (Fan et al., 2018) randomly samples words from top-k probability candidates of the predicted distribution at each decoding step. Nucleus sampling ) avoids text degeneration by truncating the unreliable tails and sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.",
"cite_spans": [
{
"start": 176,
"end": 181,
"text": "Zhang",
"ref_id": null
},
{
"start": 756,
"end": 775,
"text": "(Gupta et al., 2018",
"ref_id": "BIBREF17"
},
{
"start": 1078,
"end": 1096,
"text": "Shen et al. (2019)",
"ref_id": "BIBREF31"
},
{
"start": 1101,
"end": 1118,
"text": "Cho et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 1149,
"end": 1168,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 1183,
"end": 1201,
"text": "(Cho et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1404,
"end": 1422,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 1465,
"end": 1483,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "All baseline methods were built on the Transformer architecture with 6-layer encoder and decoder, and initialized with pre-trained parameters from BARTbase (Lewis et al., 2020) , which is one of the stateof-the-art pre-trained Transformer models for natural language generation (Gehrmann et al., 2021) . In our MoKGE, the Transformer parameters were also initialized by BART-base, in order to make fair comparison with all baseline methods. The R-GCN parameters were random initialized.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 278,
"end": 301,
"text": "(Gehrmann et al., 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "For model training, we used Adam with batch size of 60, learning rate of 3e-5, L2 weight decay of 0.01, learning rate warm up over the first 10,000 steps, and linear decay of learning rate. Our models were trained by one Tesla V100 GPU card with 32GB memory, and implemented on PyTorch with the Huggingface's Transformer (Wolf et al., 2020) . All Transformer-based methods were trained with 30 epochs, taken about 4-5 hours on the ComVE dataset and 7-9 hours on the \u03b1-NLG dataset.",
"cite_spans": [
{
"start": 321,
"end": 340,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
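The optimization setup above can be written down directly with PyTorch and the Huggingface scheduler helper; this is an illustrative reconstruction of the stated hyperparameters, not the released training script.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, num_training_steps):
    # Adam, lr 3e-5, L2 weight decay 0.01, 10,000 warm-up steps, then linear decay.
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-5, weight_decay=0.01)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=10_000, num_training_steps=num_training_steps)
    return optimizer, scheduler
```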
{
"text": "In addition to our MoKGE implementation, we also provide the baseline implementation code on GitHub https://github.com/DM2-ND/MoKGE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "We evaluated the performance of different generation models from two aspects: quality (or say accuracy) and diversity. Quality tests the appropriateness of the generated response with respect to the context, and diversity tests the lexical and semantic diversity of the appropriate sequences generated by the model. These evaluation metrics have been widely used in existing work (Ott et al., 2018; Vijayakumar et al., 2018; Cho et al., 2019; .",
"cite_spans": [
{
"start": 380,
"end": 398,
"text": "(Ott et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 399,
"end": 424,
"text": "Vijayakumar et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 425,
"end": 442,
"text": "Cho et al., 2019;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "Quality metrics (\u21d1). The quality is measured by standard N-gram based metrics, including the BLEU score (Papineni et al., 2002) and the ROUGE score (Lin, 2004) . This measures the highest accuracy comparing the best hypothesis among the top-K with the target (Vijayakumar et al., 2018) . Concretely, we generate hypotheses {\u0176 (1) , \u2022 \u2022 \u2022\u0176 (K) } from each source X and keep the hypothesis\u0176 best that achieves the best sentencelevel metric with the target Y . Then we calculate a corpus-level metric with the greedily-selected hypotheses",
"cite_spans": [
{
"start": 104,
"end": 127,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
},
{
"start": 148,
"end": 159,
"text": "(Lin, 2004)",
"ref_id": "BIBREF25"
},
{
"start": 259,
"end": 285,
"text": "(Vijayakumar et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 326,
"end": 329,
"text": "(1)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "{Y (i),best } N i=1 and references {Y (i) } N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": ". The diversity of evaluated by three aspects: concept, pairwise and corpus diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "Concept diversity. The number of unique concepts (short as Uni.C) measures how many unique concepts on the commonsense KG are covered in the generated outputs. A higher value indicates the higher concept diversity. Besides, we also measure the pairwise concept diversity by using Jaccard similarity. It is defined as the size of the intersection divided by the size of the union of two sets. Lower value indicates the higher concept diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "Pairwise diversity (\u21d3). Referred as \"self-\" (e.g., self-BLEU) , it measures the within-distribution similarity. This metric computes the average of sentence-level metrics between all pairwise combinations of hypotheses {Y (1) , \u2022 \u2022 \u2022 , Y (K) } generated from each source sequence X. Lower pairwise metric indicates high diversity between generated hypotheses.",
"cite_spans": [
{
"start": 222,
"end": 225,
"text": "(1)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "Corpus diversity (\u21d1). Distinct-k (Li et al., 2016) measures the total number of unique k-grams normalized by the total number of generated k-gram tokens to avoid favoring long sentences. Entropyk reflects how evenly the empirical k-gram distribution is for a given sentence when word frequency is considered. ",
"cite_spans": [
{
"start": 33,
"end": 50,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
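The pairwise and corpus diversity metrics can be computed with standard tooling. The sketch below uses NLTK for self-BLEU and a simple n-gram count for Distinct-k; it is an illustration rather than the official evaluation script, and it assumes K >= 2 hypotheses per source.

```python
from itertools import permutations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(hypotheses, n=3):
    # Average sentence-BLEU over all ordered pairs of hypotheses (lower = more diverse).
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / n for _ in range(n))
    scores = [sentence_bleu([ref.split()], hyp.split(), weights=weights,
                            smoothing_function=smooth)
              for ref, hyp in permutations(hypotheses, 2)]
    return sum(scores) / len(scores)

def distinct_k(sentences, k=2):
    # Unique k-grams normalized by total generated k-grams (higher = more diverse).
    ngrams = [tuple(s.split()[i:i + k]) for s in sentences
              for i in range(len(s.split()) - k + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)
```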
{
"text": "Comparison with baseline methods. We evaluated our proposed MoKGE and baseline methods based on both quality and diversity. As shown in Table 2 , MoE-based methods achieved the best performance among all baseline methods. MoKGE can further boost diversity by at least 1.57% and 1.83% on Self-BLEU-3 and Self-BLEU-4, compared with the vanilla MoE methods. At the same time, MoKGE achieved on par performance with other baseline methods based on the quality evaluation. Specifically, on the ComVE dataset, MoKGE achieved the best performance on BLEU-4 and ROUGE-L, and on the \u03b1-NLG dataset, the perfor-mance gap between MoKGE and the best baseline method was always less than 0.5% on BLEU-4.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.4.1"
},
{
"text": "Ablation study. We conducted an ablation study to analyze the two major components in the MoKGE. The experimental results are shown in Table 3 . First, we note that when not using MoE (line -w/o MoE), we used the most basic decoding strategy -beam search -to generate multiple outputs. We observed that the outputs generated by beam search differed only on punctuation and minor morphological variations, and typically only the last few words were different from others. Besides, integrating commonsense knowledge graph into the MoEbased generation model brought both quality and diversity improvement on the ComVE, but might sacrifice a little quality (less than 0.5% on BLEU-4) on the \u03b1-NLG dataset. Overall, our MoKGE benefited from KG and MoE modules, and achieved great performance on both diversity and quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.4.1"
},
{
"text": "Automatic diversity evaluation (e.g., Self-BLEU, Distinct-k) cannot reflect the content-level diversity. Therefore, we conducted extensive human evaluations to assess both the quality and diversity of outputs generated from different models. The human evaluation was divided into two parts: independent scoring and pairwise comparisons. All evaluations were conducted on Amazon Mechanical Turk (AMT), and each evaluation form was answered by at least three AMT workers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "Independent scoring. In this part, human annotators were asked to evaluate the generated outputs from a single model. We first presented top-3 generated outputs from a certain model to human annotators. The annotators would first evaluate the diversity by answering \"How many different meanings do three outputs express?\" Then we presented human-written outputs to the annotators. The annotator would evaluate the quality by comparing machine generated outputs and human-written outputs, and answering \"How many machine generated out-puts are correct?\" The diversity and quality scores are normalized to the range from 0 to 3. Besides, the annotators need to give a fluency and grammar score from 1 to 4 for each generated output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "Pairwise comparisons. In this part, the annotators were given two sets of top-3 generated explanations from two different methods each time and instructed to pick the more diverse set. The choices are \"win,\" \"lose,\" or \"tie.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "As shown in Table 4 -5, our MoKGE can significantly outperform the state-of-the-art samplingbased methods in diversity evaluation (p-value < 0.05 under paired t-test), even slightly better than human performance on the ComVE task. At the same time, we can observe MoKGE is able to obtain on par performance with other methods based on quality evaluation. The p-value is not smaller than 0.05 (i.e., not significant difference) under paired t-test between MoKGE and baseline methods based on the quality evaluation. Figure 3 demonstrates human-written explanations and generated explanations from different diversitypromoting methods, including nucleus sampling, mixture of experts (MoE) and our MoKGE. Overall, we observed that the nucleus sampling and MoE methods typically expressed very similar -NLG --Input: Billy had received good grades on his report card. [ ]. He decided as he got home that elephants were his new favorite animal. [1] : AtLocation [2] : HasProperty [3] : IsA [4]: RelatedTo",
"cite_spans": [
{
"start": 939,
"end": 942,
"text": "[1]",
"ref_id": "BIBREF2"
},
{
"start": 956,
"end": 959,
"text": "[2]",
"ref_id": null
},
{
"start": 974,
"end": 977,
"text": "[3]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 515,
"end": 523,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "Figure 3 : Case studies. MoKGE can produce diverse knowledge reasoning on commonsense KG, select different relevant concepts (in shades of different colors), then generate diverse outputs. The output diversity of MoKGE is significantly better than that of beam search and nucleus sampling, and close to human performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.6"
},
{
"text": "meanings, e.g., \"go to the zoo and see elephants\" and \"took him to the zoo and see elephants\" in the \u03b1-NLG case. On the contrary, MoKGE can generate semantically richer and more diverse contents than the other two methods by incorporating more commonsense concepts on the knowledge graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nucleus sampling",
"sec_num": null
},
{
"text": "Improving content diversity in NLG. Most of the existing diversity-promoting work has focused on improving syntactic and lexical diversity, such as different language style in machine translation (Shen et al., 2019) and word variability in paraphrase generation (Gupta et al., 2018) . Nevertheless, methods for improving content diversity in NLG systems have been rarely studied in the existing literature. We believe that generating diverse content is one of the most promising aspects of machine intelligence, which can be applied to a wide range of real-world applications, not only limited to commonsense reasoning.",
"cite_spans": [
{
"start": 196,
"end": 215,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 262,
"end": 282,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "5"
},
{
"text": "Besides, leveraging knowledge graph is not the only way to promote content diversity as it is a highly knowledge-intensive task. Many existing knowledge-enhanced methods (Yu et al., 2022c) can be used to acquire different external knowledge for producing diverse outputs, e.g., taking different retrieved documents as conditions for generator.",
"cite_spans": [
{
"start": 170,
"end": 188,
"text": "(Yu et al., 2022c)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "5"
},
{
"text": "Designing neural diversity metrics. In spite of growing interest in NLG models that produce diverse outputs, there is currently no principled neu-ral method for evaluating the diversity of an NLG system. As described in Tevet and Berant (2021) , existing automatic diversity metrics (e.g. Self-BLEU) perform worse than humans on the task of estimating content diversity, indicating a low correlation between metrics and human judgments.",
"cite_spans": [
{
"start": 220,
"end": 243,
"text": "Tevet and Berant (2021)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "5"
},
{
"text": "Therefore, neural-based diversity metrics are highly demanded. Intuitively, the metrics should include computational comparisons of multiple references and hypotheses by projecting them into the same semantic space, unlike metrics for evaluating the generation quality, e.g., BERTScore (Zhang et al., 2020b) and BLEURT (Sellam et al., 2020) , which only measures the correlation between a pair of reference and hypothesis.",
"cite_spans": [
{
"start": 286,
"end": 307,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF48"
},
{
"start": 319,
"end": 340,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "5"
},
{
"text": "In this paper, we proposed a novel method that diversified the generative reasoning by a mixture of expert strategy on commonsense knowledge graph. To the best of our knowledge, this is the first work to boost diversity in NLG by diversifying knowledge reasoning on commonsense knowledge graph. Experiments on two generative commonsense reasoning benchmarks demonstrated that MoKGE outperformed state-of-the-art methods on diversity, while achieving on par performance on quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "ConceptNet: https://conceptnet.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We performed a hyperparameter search and found when \u03bb was around 0.3, the model performed the best. Therefore, we set \u03bb = 0.3 in the following experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work is supported by NSF IIS-1849816, CCF-1901059, IIS-2119531 and IIS-2142827. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fuel is not a vehicle material. (2) Fuel is not used to make cars. They use gasoline. (3) Cars are not made of fuel",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(1) Fuel is not a vehicle material. (2) Fuel is not used to make cars. They use gasoline. (3) Cars are not made of fuel. They are made of metal.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Cars are made of metal. but not fuel. (2) Cars are made of aluminum, not made by fuel. (3) Fuel is used to make cars more efficient",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(1) Cars are made of metal. but not fuel. (2) Cars are made of aluminum, not made by fuel. (3) Fuel is used to make cars more efficient, not less so.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cars are made of rubber. Fuel is not used to make cars",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cars are made of rubber. Fuel is not used to make cars.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cars are made of aluminum, which is not fuel",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cars are made of aluminum, which is not fuel.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cars are powered by electric motors and not by fuel. (1) Billy went to the zoo to see the animals",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cars are powered by electric motors and not by fuel. (1) Billy went to the zoo to see the animals.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Billy was excited to go to the zoo with his friends",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billy was excited to go to the zoo with his friends.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Billy's parents took him to the zoo to see elephants",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billy's parents took him to the zoo to see elephants.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Abductive commonsense reasoning",
"authors": [
{
"first": "Chandra",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference for Learning Representation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reason- ing. In International Conference for Learning Repre- sentation (ICLR).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in Neural Information Processing Systems (NeurIPS).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mixture content selection for diverse sequence generation",
"authors": [
{
"first": "Jaemin",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi. 2019. Mixture content selection for diverse sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing (EMNLP).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Maximum likelihood from incomplete data via the em algorithm",
"authors": [
{
"first": "P",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Donald B",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "In Journal of the Royal Statistical Society (Methodological)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. In Journal of the Royal Statistical Society (Methodological). Wiley Online Library.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Injecting entity types into entity-guided text generation",
"authors": [
{
"first": "Xiangyu",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2021,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangyu Dong, Wenhao Yu, Chenguang Zhu, and Meng Jiang. 2021. Injecting entity types into entity-guided text generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multitalk: A highly-branching dialog testbed for diverse conversations",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2021,
"venue": "AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao Dou, Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2021. Multitalk: A highly-branching dialog testbed for diverse conversations. In AAAI Confer- ence on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The gem benchmark: Natural language generation, its evaluation and metrics",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Tosin",
"middle": [],
"last": "Adewumi",
"suffix": ""
},
{
"first": "Karmanya",
"middle": [],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Sasanka Ammanamanchi",
"suffix": ""
},
{
"first": "Anuoluwapo",
"middle": [],
"last": "Aremu",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Khyathi",
"middle": [
"Raghavi"
],
"last": "Chandu",
"suffix": ""
},
{
"first": "Miruna-Adriana",
"middle": [],
"last": "Clinciu",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kaustubh",
"middle": [],
"last": "Dhole",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 1st Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khy- athi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, et al. 2021. The gem benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A knowledge-enhanced pretraining model for commonsense story generation",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pre- training model for commonsense story generation. In Transactions of the Association for Computational Linguistics (TACL).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A deep generative framework for paraphrase generation",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Prawaan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Rai",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In AAAI Conference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference for Learning Representation (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference for Learning Representation (ICLR).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Language generation with multi-hop reasoning on commonsense knowledge graph",
"authors": [
{
"first": "Haozhe",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020. Language generation with multi-hop reasoning on commonsense knowl- edge graph. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The role of\" condition\" a novel scientific knowledge graph representation and construction model",
"authors": [
{
"first": "Tianwen",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V Chawla, and Meng Jiang. 2019. The role of\" condition\" a novel scientific knowledge graph representation and construction model. In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Autoencoding variational bayes",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference for Learning Representation (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Max Welling. 2014. Auto- encoding variational bayes. In International Confer- ence for Learning Representation (ICLR).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Target conditioning for one-to-many generation",
"authors": [
{
"first": "Marie-Anne",
"middle": [],
"last": "Lachaux",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2020,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Anne Lachaux, Armand Joulin, and Guillaume Lample. 2020. Target conditioning for one-to-many generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for nat- ural language generation, translation, and compre- hension. In Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL- HLT).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Kg-bart: Knowledge graph-augmented bart for generative commonsense reasoning",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Lifang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Philip S",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2021,
"venue": "AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Liu, Yao Wan, Lifang He, Hao Peng, and Philip S Yu. 2021. Kg-bart: Knowledge graph-augmented bart for generative commonsense reasoning. In AAAI Conference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Analyzing uncertainty in neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In International Con- ference on Machine Learning (ICML).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "Van Den Berg",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "European Semantic Web Conference (ESWC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolu- tional networks. In European Semantic Web Confer- ence (ESWC).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bleurt: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. Bleurt: Learning robust metrics for text generation. In Annual Meeting of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Mixture models for diverse machine translation: Tricks of the trade",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Diversified query generation guided by knowledge graph",
"authors": [
{
"first": "Xinyao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jiangjie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiaze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chun",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yanghua",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2022,
"venue": "ACM Conference on Web Search and Data Mining (WSDM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyao Shen, Jiangjie Chen, Jiaze Chen, Chun Zeng, and Yanghua Xiao. 2022. Diversified query genera- tion guided by knowledge graph. In ACM Conference on Web Search and Data Mining (WSDM).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Toward diverse text generation with inverse reinforcement learning",
"authors": [
{
"first": "Zhan",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhan Shi, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2018. Toward diverse text generation with in- verse reinforcement learning. In International Joint Conference on Artificial Intelligence (IJCAI).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Evaluating the evaluation of diversity in natural language generation",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Tevet",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2021,
"venue": "Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Tevet and Jonathan Berant. 2021. Evaluating the evaluation of diversity in natural language genera- tion. In Conference of the European Chapter of the Association for Computational Linguistics (EACL).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Composition-based multirelational graph convolutional networks",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Sanyal",
"suffix": ""
},
{
"first": "Vikram",
"middle": [],
"last": "Nitin",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference for Learning Representation (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020. Composition-based multi- relational graph convolutional networks. In Inter- national Conference for Learning Representation (ICLR).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems (NeurIPS).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Diverse beam search for improved description of complex scenes",
"authors": [
{
"first": "Ashwin",
"middle": [
"K"
],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "Ramprasaath",
"middle": [
"R"
],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Crandall",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ram- prasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In AAAI Conference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Semeval-2020 task 4: Commonsense validation and explanation",
"authors": [
{
"first": "Cunxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuailong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yili",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Yilong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020. Semeval- 2020 task 4: Commonsense validation and explana- tion. In Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval-14).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Does it make sense? and why? a pilot study for sense making and explanation",
"authors": [
{
"first": "Cunxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuailong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaonan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tian",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019. Does it make sense? and why? a pilot study for sense making and explanation. In Annual Meeting of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2020,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf et al. 2020. Transformers: State-of-the- art natural language processing. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Diverse and informative dialogue generation with context-specific commonsense knowledge awareness",
"authors": [
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhonghai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2020. Diverse and informative dia- logue generation with context-specific commonsense knowledge awareness. In Annual Meeting of the As- sociation for Computational Linguistics (ACL).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Kg-fid: Infusing knowledge graph in fusion-in-decoder for opendomain question answering",
"authors": [
{
"first": "Donghan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yuwei",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yichong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
}
],
"year": 2022,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yim- ing Yang, and Michael Zeng. 2022a. Kg-fid: Infus- ing knowledge graph in fusion-in-decoder for open- domain question answering. In Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Dict-bert: Enhancing language model pre-training with dictionary",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yuwei",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Donghan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yichong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2022,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, and Meng Jiang. 2022b. Dict-bert: Enhancing language model pre-training with dictionary. In Annual Meet- ing of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A survey of knowledge-enhanced text generation",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zaitang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qingyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2022,
"venue": "ACM Computing Survey (CSUR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, and Meng Jiang. 2022c. A survey of knowledge-enhanced text generation. In ACM Computing Survey (CSUR).",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Sentence-permuted paragraph generation",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhichun",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2021,
"venue": "Conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhao Yu, Chenguang Zhu, Tong Zhao, Zhichun Guo, and Meng Jiang. 2021. Sentence-permuted para- graph generation. In Conference on empirical meth- ods in natural language processing (EMNLP).",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Enhancing taxonomy completion with concept generation via fusing relational representations",
"authors": [
{
"first": "Qingkai",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Cleland-Huang",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingkai Zeng, Jinfeng Lin, Wenhao Yu, Jane Cleland- Huang, and Meng Jiang. 2021. Enhancing taxonomy completion with concept generation via fusing rela- tional representations. In ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Grounded conversation generation as guided traverses in commonsense knowledge graphs",
"authors": [
{
"first": "Houyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houyu Zhang, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. 2020a. Grounded conversation genera- tion as guided traverses in commonsense knowledge graphs. In Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference for Learning Representation (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Wein- berger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert. In International Confer- ence for Learning Representation (ICLR).",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Generating informative and diverse conversational responses via adversarial information maximization",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational re- sponses via adversarial information maximization. In Advances in Neural Information Processing Systems (NeurIPS).",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Commonsense knowledge aware conversation generation with graph attention",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In International Joint Conference on Artifi- cial Intelligence (IJCAI).",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Texygen: A benchmarking platform for text generation models",
"authors": [
{
"first": "Yaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In ACM SIGIR Conference on Research & Develop- ment in Information Retrieval (SIGIR).",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "An example of diverse commonsense explanation generation. It aims at generating multiple reasonable explanations given a counterfactual statement. Relevant concepts on the commonsense KG (in shade) can help to perform diverse knowledge reasoning.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "et al. (2020a); Ji et al. (2020); Liu et al. (2021). Different from aforementioned methods, our MoKGE can seek diverse reasoning on KG to encourage various generation outputs without any additional conditions.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "SB-3/4: Self-BLEU-3/4 (\u21d3), D-2: Distinct-2 (\u21d1), E-4: Entropy-4 (\u21d1), B-4: BLEU-4 (\u21d1), R-L: ROUGE-L (\u21d1)",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "His mother stopped by the store and bought him a stuffed elephant.",
"num": null,
"uris": null
},
"TABREF0": {
"content": "<table><tr><td/><td colspan=\"2\">ComVE \u03b1-NLG</td></tr><tr><td>Avg. # human references</td><td>3.00</td><td>4.20</td></tr><tr><td>Avg. # meanings (\u21d1)</td><td/><td/></tr><tr><td>Human references Nucleus sampling</td><td>2.60 2.15</td><td>3.79 3.35</td></tr><tr><td>MoKGE (our method)</td><td>2.63</td><td>3.72</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Under human evaluation, the performance of existing diversity promoting methods is still far from that of humans. Our method MoKGE can exceed the human performance on the ComVE task."
},
"TABREF1": {
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Concept Selection (S3)</td><td>Top-ranked concepts</td><td>Expert 1</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">entertainment</td><td>soccer</td></tr><tr><td/><td colspan=\"2\">source concepts</td><td/><td/><td colspan=\"2\">instrument</td><td/><td>exercise</td><td>Piano is \u2026 sport music press \u2026</td></tr><tr><td/><td>KG</td><td>locate subKG</td><td colspan=\"2\">(S1)</td><td/><td colspan=\"2\">piano play</td><td>You can produce music when pressing \u2026 Transformer (S4)</td></tr><tr><td>art</td><td colspan=\"2\">entertainment</td><td/><td>soccer</td><td/><td>music</td><td/><td>sport</td><td>Top-ranked concepts</td><td>Expert 2</td></tr><tr><td>music</td><td>piano</td><td>play</td><td/><td/><td/><td>pianist</td><td colspan=\"2\">occupation</td><td>action</td></tr><tr><td/><td/><td colspan=\"2\">action</td><td>sport</td><td/><td/><td/><td>Piano is \u2026 sport art form \u2026</td></tr><tr><td>song</td><td colspan=\"3\">GNN Encoder (S2) press instrument</td><td>kind</td><td>art</td><td>form</td><td>kind</td><td>press</td><td>Piano is a kind of art form . Transformer (S4)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": ""
},
"TABREF2": {
"content": "<table><tr><td>Methods</td><td>Model Variant</td><td colspan=\"5\">Concept diversity #Uni.C(\u21d1) Jaccard (\u21d3) SB-3 (\u21d3) SB-4 (\u21d3) D-2(\u21d1) E-4(\u21d1) B-4 (\u21d1) R-L (\u21d1) Pairwise diversity Corpus diversity Quality</td></tr><tr><td>CVAE</td><td>z = 16 z = 32 z = 64</td><td>4.56 0.1 5.03 0.3 4.67 0.0</td><td>64.74 0.3 47.27 0.8 54.69 0.8</td><td>66.66 0.4 59.20 1.3 55.02 0.8</td><td>62.83 0.5 54.30 1.5 49.58 1.0</td><td>33.75 0.5 9.13 0.1 16.67 0.3 41.52 0.3 32.86 1.1 9.07 0.5 17.04 0.2 42.17 0.5 32.55 0.5 9.07 0.2 15.54 0.4 41.03 0.3</td></tr><tr><td>Truncated sampling</td><td>k = 5 k = 20 k = 50</td><td>4.37 0.0 4.60 0.0 4.68 0.1</td><td>71.38 0.7 63.42 1.2 60.98 1.8</td><td>74.20 0.2 64.47 2.1 61.39 2.4</td><td>71.38 0.2 60.33 2.4 56.93 2.8</td><td>31.32 0.4 9.18 0.1 16.44 0.2 40.99 0.2 33.69 0.6 9.26 0.1 17.70 0.2 42.58 0.5 34.80 0.3 9.29 0.1 17.48 0.4 42.44 0.5</td></tr><tr><td>Nucleus sampling</td><td>p = .5 p = .75 p = .95</td><td>4.19 0.1 4.41 0.1 4.70 0.1</td><td>72.78 1.0 67.01 1.7 61.92 2.6</td><td>77.66 0.8 71.41 2.5 63.43 3.4</td><td>75.14 0.9 68.22 2.9 59.23 3.8</td><td>28.36 0.6 9.05 0.3 16.09 0.6 40.95 0.5 31.21 0.3 9.16 0.1 17.07 0.5 41.88 0.7 34.17 0.3 9.27 0.2 17.68 0.4 42.60 0.8</td></tr><tr><td>MoE</td><td>embed prompt</td><td>5.41 0.0 5.45 0.2</td><td>47.55 0.5 47.54 0.4</td><td>33.64 0.2 33.42 0.3</td><td>28.21 0.1 28.40 0.3</td><td>46.57 0.2 9.61 0.1 18.66 0.5 43.72 0.2 46.93 0.2 9.60 0.2 18.91 0.4 43.71 0.5</td></tr><tr><td>MoKGE (ours)</td><td>embed prompt</td><td>5.35 0.2 5.48 0.2</td><td>48.18 0.5 44.37 0.4</td><td>35.36 1.1 30.93 0.9</td><td>29.71 1.2 25.30 1.1</td><td>47.51 0.4 9.63 0.1 19.13 0.1 43.70 0.1 48.44 0.2 9.67 0.2 19.01 0.1 43.83 0.3</td></tr><tr><td>Human</td><td/><td>6.27 0.0</td><td>26.49 0.0</td><td>12.36 0.0</td><td>8.01 0.0</td><td>63.02 0.0 9.55 0.0 100.0 0.0 100.0 0.0</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Diversity and quality evaluation on the ComVE (upper part) and \u03b1-NLG (lower part) datasets. Each model is required to generate three outputs. All experiments are run three times with different random seeds, and the average results on the test set is calculated as the final performance, with standard deviations as subscripts."
},
"TABREF4": {
"content": "<table><tr><td/><td>ComVE (left part: diversity; right part: quality)</td><td>\u03b1-NLG (left part: diversity; right part: quality)</td></tr><tr><td>Methods</td><td colspan=\"2\">SB-4 (\u21d3) D-2 (\u21d1) E-4 (\u21d1) B-4 (\u21d1) R-L (\u21d1) SB-4 (\u21d3) D-2 (\u21d1) E-4 (\u21d1) B-4 (\u21d1) R-L (\u21d1)</td></tr><tr><td>\u22a2 w/o MoE</td><td colspan=\"2\">74.15 0.2 31.92 0.1 9.14 0.0 15.87 0.1 40.24 0.2 77.34 0.2 19.19 0.1 10.10 0.0 12.84 0.1 37.52 0.2</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Ablation studies. When not suing MoE (line -w/o MoE), we set beam as three to generate three outputs.MoKGE 25.30 1.1 48.44 0.2 9.67 0.2 19.01 0.1 43.83 0.3 22.43 2.4 38.01 0.6 10.88 0.2 14.17 0.2 38.82 0.7 \u22a2 w/o KG 28.40 0.3 46.93 0.2 9.60 0.2 18.91 0.4 43.71 0.5 23.18 1.9 36.71 0.1 10.85 0.0 14.26 0.3 38.78 0.4"
},
"TABREF5": {
"content": "<table><tr><td>Methods</td><td>Diversity</td><td>ComVE Quality</td><td>Flu. &amp; Gra.</td><td>Diversity</td><td>\u03b1-NLG Quality</td><td>Flu. &amp; Gra.</td></tr><tr><td>Truncated samp.</td><td>2.15\u00b10.76</td><td>2.22\u00b11.01</td><td>3.47\u00b10.75</td><td>2.31\u00b10.76</td><td>2.63\u00b10.77</td><td>3.89\u00b10.36</td></tr><tr><td>Nucleus samp. MoKGE (ours) Human Ref.</td><td>2.03\u00b10.73 2.63\u00b10.51* 2.60\u00b10.59</td><td>2.29\u00b11.03 2.10\u00b10.99 3.00</td><td>3.52\u00b10.70 3.46\u00b10.81 4.00</td><td>2.39\u00b10.73 2.66\u00b10.51* 2.71\u00b10.57</td><td>2.67\u00b10.72 2.57\u00b10.71 3.00</td><td>3.91\u00b10.28 3.87\u00b10.34 4.00</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Human evaluations by independent scoring based on diveristy, quality, flency and grammar. In addition, * indicates p-value < 0.05 under paired t-test between MoKGE and baseline methods."
},
"TABREF6": {
"content": "<table><tr><td>Against methods</td><td>Win (%)</td><td>ComVE Tie (%)</td><td>Lose (%)</td><td>Win (%)</td><td>\u03b1-NLG Tie (%)</td><td>Lose (%)</td></tr><tr><td>v.s.</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Human evaluations by pairwise comparison: MoKGE v.s. two baseline methods based on diversity. Truncated samp. 47.85\u00b15.94 37.09\u00b14.56 15.06\u00b13.31 45.35\u00b15.06 43.19\u00b12.78 11.46\u00b12.31 v.s. Nucleus samp. 54.30\u00b14.62 36.02\u00b12.74 9.68\u00b13.48 41.53\u00b11.55 46.99\u00b12.04 11.48\u00b12.36"
}
}
}
}