{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:00:27.100154Z"
},
"title": "Self-Supervised Knowledge Triplet Learning for Zero-Shot Question Answering",
"authors": [
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Arizona State University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Arizona State University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The aim of all Question Answering (QA) systems is to generalize to unseen questions. Current supervised methods are reliant on expensive data annotation. Moreover, such annotations can introduce unintended annotator bias, making systems focus more on the bias than the actual task. This work proposes Knowledge Triplet Learning (KTL), a self-supervised task over knowledge graphs. We propose heuristics to create synthetic graphs for commonsense and scientific knowledge. We propose using KTL to perform zero-shot question answering, and our experiments show considerable improvements over large pre-trained transformer language models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The aim of all Question Answering (QA) systems is to generalize to unseen questions. Current supervised methods are reliant on expensive data annotation. Moreover, such annotations can introduce unintended annotator bias, making systems focus more on the bias than the actual task. This work proposes Knowledge Triplet Learning (KTL), a self-supervised task over knowledge graphs. We propose heuristics to create synthetic graphs for commonsense and scientific knowledge. We propose using KTL to perform zero-shot question answering, and our experiments show considerable improvements over large pre-trained transformer language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ability to understand natural language and answer questions is one of the core focuses in the field of natural language processing. To measure and study the different aspects of question answering, several datasets are developed, such as SQuaD (Rajpurkar et al., 2018) , HotpotQA (Yang et al., 2018) , and Natural Questions (Kwiatkowski et al., 2019) which require systems to perform extractive question answering. On the other hand, datasets such as SocialIQA (Sap et al., 2019b) , Common-senseQA (Talmor et al., 2018) , Swag (Zellers et al., 2018) and Winogrande require systems to choose the correct answer from a given set. These multiple-choice question answering datasets are very challenging, but recent large pre-trained language models such as BERT (Devlin et al., 2018) , XLNET (Yang et al., 2019b) and RoBERTa (Liu et al., 2019b) have shown very strong performance on them. Moreover, as shown in Winogrande , acquiring unbiased labels requires a \"carefully designed crowdsourcing procedure\", which adds to the cost of data annotation. This is also quantified in other where given a triple (h, r, t) we learn to generate one of the inputs given the other two.",
"cite_spans": [
{
"start": 248,
"end": 272,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF50"
},
{
"start": 284,
"end": 303,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF69"
},
{
"start": 328,
"end": 354,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 465,
"end": 484,
"text": "(Sap et al., 2019b)",
"ref_id": "BIBREF53"
},
{
"start": 502,
"end": 523,
"text": "(Talmor et al., 2018)",
"ref_id": "BIBREF58"
},
{
"start": 531,
"end": 553,
"text": "(Zellers et al., 2018)",
"ref_id": "BIBREF72"
},
{
"start": 762,
"end": 783,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 792,
"end": 812,
"text": "(Yang et al., 2019b)",
"ref_id": "BIBREF67"
},
{
"start": 825,
"end": 844,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "natural language tasks such as Natural Language Inference (Gururangan et al., 2018) and Argument Reasoning Comprehension (Niven and Kao, 2019) , where such annotation artifacts lead to \"Clever Hans Effect\" in the models (Kaushik and Lipton, 2018; Poliak et al., 2018) . One way to resolve this is to design and create datasets in a clever way, such as in Winogrande , another way is to ignore the data annotations and to build systems to perform unsupervised question answering (Teney and Hengel, 2016; . In this paper, we focus on building unsupervised zero-shot multiple-choice QA systems.",
"cite_spans": [
{
"start": 58,
"end": 83,
"text": "(Gururangan et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 121,
"end": 142,
"text": "(Niven and Kao, 2019)",
"ref_id": "BIBREF43"
},
{
"start": 220,
"end": 246,
"text": "(Kaushik and Lipton, 2018;",
"ref_id": "BIBREF29"
},
{
"start": 247,
"end": 267,
"text": "Poliak et al., 2018)",
"ref_id": "BIBREF47"
},
{
"start": 478,
"end": 502,
"text": "(Teney and Hengel, 2016;",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work (Fabbri et al., 2020; try to generate a synthetic dataset using a text corpus such as Wikipedia, to solve extractive QA. Other works Shwartz et al., 2020) use large pre-trained generative language models such as GPT-2 (Radford et al., 2019) to generate knowledge, questions, and answers and compare against the given answer choices.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Fabbri et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 145,
"end": 166,
"text": "Shwartz et al., 2020)",
"ref_id": "BIBREF56"
},
{
"start": 230,
"end": 252,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we utilize the information present in Knowledge Graphs such as ATOMIC (Sap et al., 2019a) . We define a new task of Knowledge Triplet Learning (KTL) over these knowledge graphs. For tasks which do not have appropriate knowledge graphs, we propose heuristics to create synthetic knowledge graphs. Knowledge Triplet Learning is like Knowledge Representation Learning and Knowledge Graph Completion but not limited to it. Knowledge Representation Learning (Lin et al., 2018) learns the low-dimensional projected and distributed representations of entities and relations defined in a knowledge graph. Knowledge Graph Completion (Ji et al., 2020) aims to identify new relations and entities to expand an incomplete input knowledge graph.",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "(Sap et al., 2019a)",
"ref_id": "BIBREF52"
},
{
"start": 467,
"end": 485,
"text": "(Lin et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 638,
"end": 655,
"text": "(Ji et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In KTL, as shown in Figure 1 , we define a triplet (h, r, t), and given any two as input, we learn to generate the third. This tri-directional reasoning forces the system to learn all the possible relations between the three inputs. We map the question answering task to KTL, by mapping the context, question and answer to (h, r, t) respectively. We define two different ways to perform self-supervised KTL. This task can be designed as a representation generation task or a masked language modeling task. We compare both the strategies in this work. We show how to use models trained on this task to perform zero-shot question answering without any additional supervision. We also show how models pre-trained on this task perform considerably well compared to strong pre-trained language models on few-shot learning. We evaluate our approach on the three commonsense and three science multiplechoice QA datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We define the Knowledge Triplet Learning over Knowledge Graph and show how to use it for zero-shot question answering. \u2022 We compare two strategies for the above task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose heuristics to create synthetic knowledge graphs. \u2022 We perform extensive experiments of our framework on three commonsense and three science question-answering datasets. \u2022 We achieve state-of-the-art results for zeroshot and propose a strong baseline for the fewshot question answering task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define the task of Knowledge Triplet Learning (KTL) in this section. We define G = (V, E) as a Knowledge Graph, where V is the set of vertices, E is the set of edges. V consists of entities which can be phrases or named-entities depending on the given input Knowledge Graph. Let S be a set of fact triples, S \u2286 V \u00d7E\u00d7V with the format (h, r, t),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Triplet Learning",
"sec_num": "2"
},
{
"text": "where h and t belong to set of vertices V and r belongs to set of edges. The h and t indicates the head and tail entities, whereas r indicates the relation between them. For example, from the ATOMIC knowledge graph, (PersonX puts PersonX's trust in PersonY, How is PersonX seen as?, faithful) is one such triple. Here the head is PersonX puts PersonX's trust in PersonY, relation is How is PersonX seen as? and the tail is faithful. Do note V does not contain homogenous entities, i.e, both faithful and PersonX puts PersonX's trust in PersonY are in V .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Triplet Learning",
"sec_num": "2"
},
{
"text": "We define the task of KTL as follows: Given input a triple (h, r, t), we learn the following three functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Triplet Learning",
"sec_num": "2"
},
{
"text": "f t (h, r) \u21d2 t, f h (r, t) \u21d2 h, f r (h, t) \u21d2 r (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Triplet Learning",
"sec_num": "2"
},
{
"text": "That is, each function learns to generate one component of the triple given the other two. The intuition behind learning these three functions is as follows. Let us take the above example: (PersonX puts Per-sonX's trust in PersonY, How is PersonX seen as?, faithful). The first function f t (h, r) learns to generate the answer t given the context and the question. The second function f h (r, t) learns to generate one context where the question and the answer may be valid. The final function f r (h, t) is a Jeopardystyle generating the question which connects the context and the answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Triplet Learning",
"sec_num": "2"
},
{
"text": "In Multiple-choice QA, given the context, two choices may be true for two different questions. Similarly, given the question, two answer choices may be true for two different contexts. For example, given the context: PersonX puts PersonX's trust in PersonY, the answers PersonX is considered trustworthy by others and PersonX is polite are true for two different questions How does this affect others? and How is PersonX seen as?. Learning these three functions enables us to score these relations between the context, question, and answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Triplet Learning",
"sec_num": "2"
},
{
"text": "After learning this function in a self-supervised way, we can use them to perform question answering. Given a triple (h, r, t), we define the following scoring function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using KTL to perform QA",
"sec_num": "2.1"
},
{
"text": "Dt = D(t, ft(h, r)), D h = D(h, f h (r, t)), Dr = D(r, fr(h, t)) score(h, r, t) = Dt * D h * Dr (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using KTL to perform QA",
"sec_num": "2.1"
},
{
"text": "where h is the context, r is the question and t is one of the answer options. D is a distance function which measures the distance between the generated output and the ground-truth. The distance function varies depending on the instantiation of the framework, which we will study in the following sections. The final answer is selected as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using KTL to perform QA",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ans = arg min t (score(h, r, t))",
"eq_num": "(3)"
}
],
"section": "Using KTL to perform QA",
"sec_num": "2.1"
},
{
"text": "As the scores are the distance from the ground-truth we select the choice that has the minimum score. We define the different ways we can implement this framework in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using KTL to perform QA",
"sec_num": "2.1"
},
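The scoring rule in equations 2 and 3 can be sketched in a few lines of Python; the three generator functions and the distance below are hypothetical stand-ins for whichever KTL instantiation (KRL or SMLM) is plugged in:

```python
# Sketch of KTL scoring (equations 2 and 3). f_t, f_h, f_r and dist are
# placeholders for the learned generator functions and distance function.

def score(h, r, t, f_t, f_h, f_r, dist):
    """score(h, r, t) = D_t * D_h * D_r; lower means more consistent."""
    d_t = dist(t, f_t(h, r))  # distance of the generated answer from t
    d_h = dist(h, f_h(r, t))  # distance of the generated context from h
    d_r = dist(r, f_r(h, t))  # distance of the generated question from r
    return d_t * d_h * d_r

def answer(h, r, options, f_t, f_h, f_r, dist):
    """Equation 3: choose the option with the minimum score."""
    return min(options, key=lambda t: score(h, r, t, f_t, f_h, f_r, dist))
```

Because each D is a distance from the ground truth, the product is smallest for the option that is consistent with all three directions of reasoning at once.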
{
"text": "In this implementation, we use Knowledge representation learning to learn equation (1). In contrast to triplet classification and graph completion, where systems try to learn a score function f r (h, t), i.e, is the fact triple (h, r, t) true or false; in this method we learn to generate the inputs vector representations, i.e, f r (h, t) \u21d2 r. We can view equation 1 as generator functions, which given the two input vector encodings learns to generate a vector representation of the third. The vector encodings can be pre-computed sentence vector representations or contextual vector representations. As our triples (h, r, t) can have a many to many relations between each pair, we first project the two inputs from input vector encoding space to a different space similar to the work of TransD (Ji et al., 2015) . We use a Transformer encoder Enc to encode our triples to the vector encoding space. We learn two projection functions, M i1 and M i2 to project the two inputs, and a third projection function M o to project the entity to be generated. We combine the two projected inputs using a function C. These functions can be implemented using feedforward networks.",
"cite_spans": [
{
"start": 797,
"end": 814,
"text": "(Ji et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "2.2"
},
{
"text": "Ie1 = Enc(I1), Ie2 = Enc(I2), Oe = Enc(O) Ie1 = Mi1(Ie1), Ie2 = Mi2(Ie2), Op = Mo(Oe) O = C(Ie1, Ie2) loss = LossF (\u00d4, Op)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "2.2"
},
{
"text": "where I i is the input,\u00d4 is the generated output vector and O p is the projected vector. M and C functions are learned using fully connected networks. In our implementation, we use RoBERTa as the Enc transformer, with the output representation of the [cls] token as the phrase representation. We train this model using two types of loss functions, L2Loss where we try to minimize the L2 norm between the generated and the projected ground-truth, and Noise Contrastive Estimation (Gutmann and Hyv\u00e4rinen, 2010) where along with the ground-truth we have k noise-samples. These noise samples are selected from other (h, r, t) triples such that the target output is not another true fact triple, i.e, (h, r, t noise ) is false. The NCELoss is defined as:",
"cite_spans": [
{
"start": 479,
"end": 508,
"text": "(Gutmann and Hyv\u00e4rinen, 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "2.2"
},
{
"text": "N CELoss(\u00d4, Op, [N0...N k ]) = \u2212 log exp sim(\u00d4, Op) exp sim(\u00d4, Op) + k\u2208N exp (sim(\u00d4, N k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "2.2"
},
{
"text": "where N k are the projected noise samples, sim is the similarity function which can be the L2 norm or Cosine similarity,\u00d4 is the generated output vector and O p is the projected vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "2.2"
},
{
"text": "The D distance function (2) for such a model is defined by the distance function used in the loss function. For L2Loss, it is the L2 norm, and in the case of NCELoss, we use 1 \u2212 sim function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "2.2"
},
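A minimal numpy sketch of the NCELoss above; the vectors stand in for the projected outputs of the encoder and projection networks, which are hypothetical here rather than the trained RoBERTa-based modules:

```python
import numpy as np

def cosine(a, b):
    """Similarity used inside NCELoss; the paper also uses the L2 norm."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nce_loss(o_hat, o_p, noise, sim=cosine):
    """NCELoss = -log( exp(sim(O_hat, O_p)) /
                      (exp(sim(O_hat, O_p)) + sum_k exp(sim(O_hat, N_k))) )"""
    pos = np.exp(sim(o_hat, o_p))                      # ground-truth term
    neg = sum(np.exp(sim(o_hat, n)) for n in noise)    # k noise terms
    return float(-np.log(pos / (pos + neg)))
```

With k = 10 noise samples drawn from other triples (h, r, t_noise) that are not true facts, minimizing this loss pulls the generated vector toward the projected ground truth and away from the noise projections.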
{
"text": "In Span Masked Language Modeling (SMLM), we model the equation 1 as a masked language modeling task. We tokenize and concatenate the triple (h, r, t) with a separator token between them, i. . We feed these tokens to a Transformer encoder Enc and use a feed forward network to unmask the sequence of tokens. Similarly, we mask h to learn f h and t to learn f t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Masked Language Modeling",
"sec_num": "2.3"
},
{
"text": "We train the same Transformer encoder to perform all the three functions. We use the crossentropy loss to train the model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Masked Language Modeling",
"sec_num": "2.3"
},
{
"text": "CELoss(h, r, mask(t), t) = \u2212 1 n n i=1 log2PMLM (ti|h, r, t1..t i ..tn)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Masked Language Modeling",
"sec_num": "2.3"
},
{
"text": "where P M LM is the masked language modeling probability of the token t i , given the unmasked tokens h and r and other masked tokens in t. Do note we do not do progressive unmasking, i.e, all the masked tokens are jointly predicted. The D distance function (2) for this model is same as the loss function defined above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Masked Language Modeling",
"sec_num": "2.3"
},
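The SMLM distance is then just the average masked-token cross-entropy; the sketch below assumes a hypothetical `p_mlm` list holding the model's probability for each gold token of the masked span:

```python
import math

def smlm_distance(p_mlm):
    """Mean negative log-probability of the gold tokens at their masked
    positions; all positions are predicted jointly (no progressive
    unmasking). A lower distance means the span is more plausible."""
    return -sum(math.log(p) for p in p_mlm) / len(p_mlm)
```

The answer option whose tokens receive the highest joint probability under masking gets the smallest distance, and equation 3 then selects it.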
{
"text": "This section describes our method to create a synthetic knowledge graph from a text corpus containing sentences. Not all types of knowledge are present in a structured knowledge graph, such as ATOMIC, which might help answer questions. For example, the questions in QASC dataset (Khot et al., 2019) require knowledge about scientific concepts, such as, \"Clouds regulate the global engine of atmosphere and ocean.\". The QASC dataset contains a textual knowledge corpus containing science facts. Similarly, the Open Mind Commonsense (OMCS) knowledge corpus contains knowledge about different commonsense facts, such as, \"You are likely to find a jellyfish in a book\". Another kind of knowledge about social interactions and story progression is present in several story understanding datasets, such as RoCStories and the Story Cloze Test (Mostafazadeh et al., 2016) . To perform question answering using this knowledge and KTL, we create the following two graphs: the Common Concept Graph and the Directed Story Graph.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Khot et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 836,
"end": 863,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Graph Construction",
"sec_num": "3"
},
{
"text": "Common Concept Graph To create the Common Concept Graph, we extract noun-chunks and verb-chunks from each of the sentences using the Spacy Part-of-Speech tagger (Honnibal and Montani, 2017) . We assign all the extracted chunks as the graph's vertices and the sentences as the graph's edges. To generate training samples for KTL, we assign triples (h, R, t) as (e 1 , e 2 , v i ) where v i is the common concept present in both the sentences e 1 and e 2 . For example, in the sentence Clouds regulate the global engine of atmosphere and ocean., the extracted concepts are clouds, global engine, atmosphere, ocean and regulate. The triplet assignment will be, [Warm moist air from the Pacific Ocean brings fog and low stratus clouds to the maritime zone., Clouds regulate the global engine of atmosphere and ocean., clouds]. We create two such synthetic graphs using the QASC science corpus and the OMCS concept corpus. Our hypothesis is this graph, and the KTL framework will allow the model to understand the concepts common in two facts, which allows question answering.",
"cite_spans": [
{
"start": 161,
"end": 189,
"text": "(Honnibal and Montani, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Graph Construction",
"sec_num": "3"
},
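Triple generation for the Common Concept Graph can be sketched as below; for brevity, a precomputed set of chunks per sentence stands in for the spaCy noun/verb-chunk extraction used in the paper:

```python
from itertools import combinations

def common_concept_triples(sentence_concepts):
    """sentence_concepts maps each sentence (a graph edge) to its set of
    extracted noun/verb chunks (graph vertices). Yields one KTL triple
    (h, R, t) = (e1, e2, v) per concept v shared by a pair of sentences."""
    for e1, e2 in combinations(sentence_concepts, 2):
        for v in sentence_concepts[e1] & sentence_concepts[e2]:
            yield (e1, e2, v)
```

For the two cloud sentences in the running example, the shared chunk clouds yields exactly the triple shown above.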
{
"text": "Directed Story Graph This graph is created using short stories from the RoCStories and Story Cloze Test datasets. This graph is different from the above graph as this graph has a directional property, and each story graph is disconnected. To create this graph, we take each short story with k sentences, [s 1 , s 2 , s 3 .., s k ] and create a directed graph such that all sentences are vertices and each sentence is connected with a directed edge only to sentences that occur after it. For example, s 1 is connected to s 2 with a directed edge but not vice versa. We generate triples (h, R, t) by sampling vertices (s i , s j , s k ) such that there is a directed path between the sentences s i and s k through s j . This format captures a smaller story where the head is an event that occurs before the relation and the tail. This graph is designed for story understanding and abductive reasoning using the KTL framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Graph Construction",
"sec_num": "3"
},
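Since a directed path s_i -> s_j -> s_k exists exactly when the three sentences occur in story order, triple generation reduces to enumerating ordered index triples; a sketch:

```python
from itertools import combinations

def story_triples(sentences):
    """For a story [s_1, ..., s_k], yield (h, R, t) = (s_i, s_j, s_k) for
    all i < j < k: the head event precedes the relation, which precedes
    the tail, matching the Directed Story Graph construction."""
    yield from combinations(sentences, 3)
```

A five-sentence RoCStories story therefore yields C(5, 3) = 10 training triples.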
{
"text": "Random Sampling There are around 17M sentences in the QASC text corpus; similarly, there are 640K sentences in the OMCS text corpus. Our synthetic triple generation leads to a significantly large set of triples in order of 10 12 and more. To restrict the train dataset size for our KTL framework, we randomly sample triples and limit the train dataset size to be at max 1M samples; we refer to this as Random Sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Graph Construction",
"sec_num": "3"
},
{
"text": "Curriculum Filtering Here, we extract the noun and verb chunks from the context, question, and answer options present in the question answering datasets. We filter triples from the generated dataset and keep only those triples where at least one of the entities is present in the extracted noun and verb chunks set. This filtering is analogous to a reallife human examination setting where a teacher provides the set of concepts upon which questions would be asked, and the students can learn the concepts. We perform the sampling and filtering only on the huge Common Concept Graphs generated from QASC and OMCS corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Graph Construction",
"sec_num": "3"
},
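Both reduction steps can be sketched together; the QA concept set stands in for the noun and verb chunks extracted from the target dataset, and membership is checked per triple element as a simplification:

```python
import random

def curriculum_filter(triples, qa_concepts):
    """Keep only triples where at least one element is in the concept set
    extracted from the target QA dataset (Curriculum Filtering)."""
    return [t for t in triples if any(e in qa_concepts for e in t)]

def random_sample(triples, limit=1_000_000, seed=0):
    """Random Sampling: cap the KTL training set at `limit` triples."""
    triples = list(triples)
    if len(triples) <= limit:
        return triples
    return random.Random(seed).sample(triples, limit)
```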
{
"text": "We evaluate our framework on the following six datasets: SocialIQA (Sap et al., 2019b) , aNLI (Bhagavatula et al., 2019), CommonsenseQA (Talmor et al., 2018) , QASC (Khot et al., 2019) , Open-BookQA and ARC . SocialIQA, aNLI, and Common-senseQA require commonsense reasoning and external knowledge to answer the questions. Similarly, QASC, OpenBookQA, and ARC require scientific knowledge. Table 1 shows the dataset statistics and the corresponding knowledge graph used to train our KTL model. Table 2 shows the statistics for the triples extracted from the graphs. From the two tables we can observe our KTL triples have different number of words when compared to the target question answering tasks. Especially where the context is significantly larger and human anno- Train Size 2251 1119 8134 4957 9741 169654 33410 Val Size 570 299 926 500 1221 1532 1954 Test Size 2377 1172 920 500 1140 --C Length -----9 15 Q Length 19.4 22.3 13 12 14 9 6 A length 3.7 4.9 1.5 3 1.5 9 3 # of Option 4 4 8 4 5 2 3 KTL Graph QASC-CCG QASC-CCG QASC-CCG QASC-CCG OMCS-CCG DSG ATOMIC tated as in SocialIQA, increasing the challenge for unsupervised learning.",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "(Sap et al., 2019b)",
"ref_id": "BIBREF53"
},
{
"start": 136,
"end": 157,
"text": "(Talmor et al., 2018)",
"ref_id": "BIBREF58"
},
{
"start": 165,
"end": 184,
"text": "(Khot et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 494,
"end": 501,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4"
},
{
"text": "We can observe the triples in our synthetic graphs, QASC-CCG and OMCS-CCG contain factual statements, and our target question answering datasets have questions that contain wh words or fill-in-theblanks. We translate each question to a hypothesis using the question and each answer option. To create hypothesis statements for questions containing wh words, we use a rule-based model (Demszky et al., 2018) . For fill-in-the-blank and cloze style questions, we replace the blank or concat the question and the answer option. For questions that do not have a context, such as in QASC or CommonsenseQA, we retrieve the top five sentences using the question and answer options as query and perform retrieval from respective source knowledge sentence corpus. For each retrieved-context, we evaluate the answer option score using equation 2 and take the mean score.",
"cite_spans": [
{
"start": 383,
"end": 405,
"text": "(Demszky et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question to Hypothesis Conversion and Context Creation",
"sec_num": "4.1"
},
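The cloze-style conversion and the retrieval-averaged scoring can be sketched as follows; the blank marker, `retrieve`, and `score` are hypothetical stand-ins (the paper uses a rule-based converter for wh-questions, Elasticsearch for retrieval, and equation 2 for scoring):

```python
def cloze_to_hypothesis(question, option, blank="___"):
    """Substitute the option into a fill-in-the-blank question; otherwise
    concatenate question and option (wh-questions are instead converted
    with the rule-based model of Demszky et al., 2018)."""
    if blank in question:
        return question.replace(blank, option)
    return question + " " + option

def mean_option_score(question, option, retrieve, score, k=5):
    """For datasets without a context: retrieve the top-k sentences using
    question + option as the query, score the option against each
    retrieved context with equation 2, and average the scores."""
    contexts = retrieve(question + " " + option, k)
    return sum(score(c, question, option) for c in contexts) / len(contexts)
```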
{
"text": "We compare our models to the following baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.1"
},
{
"text": "entropy loss as the scoring function. We concatenate the context and question and find the cross-entropy loss for each answer choices and choose the answer with minimum loss. 2. Pre-trained RoBerta-large used as is, without any fine-tuning or further pre-training, with scoring the same as our defined SMLM model. We refer to it as Rob-MLM. 3. RoBerta-large model further fine-tuned using the original Masked Language Modeling task over our concatenated fact triples (h, r, t), with scoring same as SMLM. We refer to it as Rob-FMLM. 4. IR Solver described in ARC (Clark et al., 2016) , which sends the context, question, and answer option as a query to Elasticsearch. The top retrieved sentence, which has a non-stopword overlap with both the question and the answer, is used as a representative, and its corresponding IR ranking score is used as confidence for the answer. The option with the highest score is chosen as the answer.",
"cite_spans": [
{
"start": 563,
"end": 583,
"text": "(Clark et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 Large with language modeling cross-",
"sec_num": "1."
},
{
"text": "We train the Knowledge Representation Learning (KRL) model using both L2Loss and NCELoss. For NCELoss, we also train it with both L2 norm and Cosine similarity. Both the KRL model (365M) and the SMLM model (358M) uses RoBERTa-large (355M) as the encoder. We train the model for three epochs with the following hyper-parameters: batch sizes [512, 1024] for SMLM and [32,64] for KRL; learning rate in range: [1e-5,5e-5]; warm-up steps in range [0,0.1]; in 4 Nvidia V100s 16GB. We use the transformers package (Wolf et al., 2019) . All triplets from the training graphs are positive samples. We learn using these triplets. For NCE, we choose k equal to ten, i.e., ten negative samples. We perform three hyper-parameter trials using ten percent of the training data for each model, and train models with three different seeds. We report the mean accuracy of the three random seed runs for each of our experiments and report the standard deviation if space permits. Code is available here.",
"cite_spans": [
{
"start": 340,
"end": 345,
"text": "[512,",
"ref_id": null
},
{
"start": 346,
"end": 351,
"text": "1024]",
"ref_id": null
},
{
"start": 507,
"end": 526,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF64"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KTL Training",
"sec_num": "5.2"
},
{
"text": "6 Results and Discussion 6.1 Unsupervised Question Answering Table 3 compares our different KTL methods with our four baselines for the six question-answering datasets on the zero-shot question answering task. We use Hypothesis Conversion, Curriculum Filtering, and Context Creation for ARC, QASC, OBQA, and CommonsenseQA for both the baselines and our models. We compare the models on the Train, Dev and Test split if labels are available, to capture the statistical significance better. We can observe that our KTL trained models perform statistically significantly better than the baselines. When comparing the different KRL models, the NCELoss with Cosine similarity performs the best. This observation might be due to the additional supervision provided by the negative samples as the L2Loss model only tries to minimize the distance between the generated and the target projections. When comparing different KTL instantiations, we can see that the SMLM model performs the best overall. SMLM and KRL differ in their core approaches. We hypothesize that multi-layered attention in a transformer encoder enables the SMLM model to distinguish between a true and false statement. In KRL, we are learning from both positive and negative samples, but the model still under-performs. On analysis, we observe the random negative samples may make the training task biased for KRL. Our future work would be to utilize alternative negative sampling techniques, such as selecting samples closer in contextual vector space. The improvements in ARC-Challenge task are considerably less. It is observed that the fact corpus for QASC, although it contains a vast number of science facts, does not contain sufficient knowledge to answer ARC questions. There is a substantial improvement in SocialIQA, aNLI, QASC, and Com-monsenseQA as the respective KTL knowledge corpus contains sufficient knowledge to answer the questions. 
It is interesting to note that for QASC, we can reduce the problem from an eight-way to a four-way classification, as our top-4 accuracy on QASC is above 92%. Our unsupervised model outperforms previous approaches, such as Self-Talk (Shwartz et al., 2020) . It approaches prior supervised approaches like BIDAF (Seo et al., 2017) , and even surpasses it on two tasks. Table 4 compares our KTL pre-trained transformer encoder in the few-shot question answering task. We fine-tune the encoder with a simple feedforward ",
"cite_spans": [
{
"start": 2147,
"end": 2169,
"text": "(Shwartz et al., 2020)",
"ref_id": "BIBREF56"
},
{
"start": 2225,
"end": 2243,
"text": "(Seo et al., 2017)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 2282,
"end": 2289,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "KTL Training",
"sec_num": "5.2"
},
{
"text": "Model QASC \u2191 OBQA \u2191 ComQA \u2191 aNLI \u2191 SocIQA \u2191",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Few-Shot Question Answering",
"sec_num": "6.2"
},
{
"text": "SMLM -A 23.4 \u00b1 0.6 28.6 \u00b1 0.7 33.6 \u00b1 0.5 64.8 \u00b1 0.9 46.2 \u00b1 0.7 SMLM -Q 26.7 \u00b1 0.8 33.8 \u00b1 0.7 34.4 \u00b1 0.8 65.1 \u00b1 0.7 37.8 \u00b1 0.5 SMLM -C 22.8 \u00b1 1.1 29.8 \u00b1 1.3 31.9 \u00b1 0.9 64.9 \u00b1 0.8 47.1 \u00b1 0.8 SMLM -A*Q*C 27.2 \u00b1 0.6 34.6 \u00b1 0.8 38.8 \u00b1 0.6 65.3 \u00b1 0.7 48.5 \u00b1 0.6 Table 5 : Accuracy comparison of using only Answer (A), Question (Q) and Context (C) distance scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Few-Shot Question Answering",
"sec_num": "6.2"
},
{
"text": "network for a n-way classification task, the standard question-answering approach using RoBerta with n being the number of answer options during training with only 8% of the training data. We train on three randomly sampled splits of training data and report the mean. We can observe our KTL pretrained encoders perform significantly better than the baselines and approach the fully supervised model, with only 7.5% percent behind the fully supervised model on SocialIQA. We also observe that our pre-trained models have a lower deviation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Few-Shot Question Answering",
"sec_num": "6.2"
},
{
"text": "Effect of Context, Question, Answer Distance In Table 5 , we compare the effect of the three different distance scores. It is interesting to observe, in OpenBookQA, QASC, and CommonsenseQA, the three datasets which do not provide a context, the model is more perplexed to predict the question when given a wrong answer option, leading to higher accuracy for only Question distance score. On the other hand, in aNLI all three distance scores have nearly equal performance. In SocialIQA, the question has the least accuracy, whereas the model is more perplexed when predicting the context given a wrong answer option. This observation confirms our hypothesis that given a task predicting context and question can contain more information than discriminating between options alone.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation studies and Analysis",
"sec_num": "6.3"
},
{
"text": "Filtering and Context Retrieval In Table 6 , we observe the effect of hypothesis conversion, curriculum filtering, and our context creation. Converting the question to a hypothesis provides a slight improvement, but a significant improvement is observed when we filter our KTL training samples and keep only those concepts that are present in the target question answering task, compared to when the KTL model is trained with a random sample of 1M. Curriculum filtering is impactful because there are many concepts present in our source knowledge corpus, and the randomly sampled training corpus only contains 50% of the target question answering task concepts on an average. Another critical thing to note in Table 6 is our KTL models can strongly perform like supervised models, when the gold knowledge context is provided, which are available in QASC and OpenBookQA. This observation indicates a better retrieval system for context creation can further improve our models.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 710,
"end": 717,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Effect of Hypothesis Conversion, Curriculum",
"sec_num": null
},
{
"text": "Effect of Sythetic Triple corpus size Figure 2 compares our two modeling approaches when we train them with varying numbers of KTL training samples. NCE refers to our KRL model trained with NCELoss and Cosine similarity. We can observe that our KRL model learns faster due to additional supervision, but the SMLM model performs the best when trained with more samples. The performance tapers after 10 5 samples, indicating the models are overfitting to the synthetic data.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effect of Hypothesis Conversion, Curriculum",
"sec_num": null
},
{
"text": "Error Analysis We sampled 50 error cases from each of our question-answering tasks. Our KTL framework allows learning from knowledge graphs, that includes synthetic knowledge graphs. Both our instantiation, SMLM, and KRL function as a knowledge base score generator, were given the inputs, and a target, the generator yields a score, how improbable is the target to be present in the knowledge base. Most of our errors are when all context, question, and answer-option have a large distance score, and the model accuracy degenerates to that of a random model. This more considerable distance indicates the model is highly perplexed to see the input text. For aNLI and SocialIQA, we possess relevant context, and our performance is significantly better in these datasets, but for other tasks, we have another source of error, i.e., context creation. In several cases, the context is irrelevant and acts as a noise. Other errors include when the questions require complex reasoning such as understanding negation, conjunctions, and disjunctions; temporal reasoning such as \"6 am\" being before \"10 am\", and multi-hop reasoning. These complex reasoning tasks are required to answer a significant number of questions in the science and commonsense QA tasks. We also tried to utilize a text generation model, such as GPT-2, to generate and compare with ground truth text using our KTL framework, but preliminary results show the model is overfitting to the synthetic dataset and leads to significantly low performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Hypothesis Conversion, Curriculum",
"sec_num": null
},
{
"text": "Other Instantiations Our KTL framework can be implemented using other methods, such as using a Generator/Discriminator pre-training proposed in Electra , and sequence-tosequence methods. The distance functions for sequence-to-sequence models can be similar to our SMLM model, the cross-entropy loss for the expected generated sequence. Discriminator based methods can adapt to the negative class probabilities as the distance function. Studying different instantiations and their implications are some of the fascinating future works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Hypothesis Conversion, Curriculum",
"sec_num": null
},
{
"text": "7 Related Work 7.1 Unsupervised QA Recent work on unsupervised question answering approach the problem in two ways, a domain adaption or transfer learning problem (Chung et al., 2018) , or a data augmentation problem Dhingra et al., 2018; . The work of Fabbri et al., 2020; Puri et al., 2020) use style transfer or template-based question, context and answer triple generation, and learn using these to perform unsupervised extractive question answering. There is another approach to learning generative models, generating the answer given a question or clarifying explanations and questions, such as GPT-2 (Radford et al., 2019) to perform unsupervised question answering (Shwartz et al., 2020; . In the visual domain, zero-shot visual question answering is studied in (Teney and Hengel, 2016) , and a self-supervised learning method for logical compositions of visual questions is proposed in (Gokhale et al., 2020) . In contrast, our work focuses on learning from knowledge graphs and generate vector representations or sequences of tokens not restricted to the answer but including the context and the question using the masked language modeling objective.",
"cite_spans": [
{
"start": 163,
"end": 183,
"text": "(Chung et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 217,
"end": 238,
"text": "Dhingra et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 253,
"end": 273,
"text": "Fabbri et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 274,
"end": 292,
"text": "Puri et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 601,
"end": 629,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
},
{
"start": 673,
"end": 695,
"text": "(Shwartz et al., 2020;",
"ref_id": "BIBREF56"
},
{
"start": 770,
"end": 794,
"text": "(Teney and Hengel, 2016)",
"ref_id": "BIBREF59"
},
{
"start": 895,
"end": 917,
"text": "(Gokhale et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Hypothesis Conversion, Curriculum",
"sec_num": null
},
{
"text": "There are several approaches to add external knowledge into models to improve question answering. Broadly they can be classified into two, learning from unstructured knowledge and structured knowledge. In learning from unstructured knowledge, recent large pre-trained language models (Peters et al., 2018; Radford et al., 2019; Devlin et al., 2018; Liu et al., 2019b; Clark et al., 2020; Lan et al., 2019; Joshi et al., 2020; learn general-purpose text encoders from a huge text corpus. On the other hand, learning from structured knowledge includes learning from structured knowledge bases (Yang and Mitchell, 2017; Bauer et al., 2018; Mihaylov and Frank, 2018; Wang and Jiang, 2019; by learning knowledge enriched word embeddings. Using structured knowledge to refine pre-trained contextualized representations learned from unstructured knowledge is another approach (Peters et al., 2019; Yang et al., 2019a; Zhang et al., 2019; Liu et al., 2019a) .",
"cite_spans": [
{
"start": 284,
"end": 305,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF44"
},
{
"start": 306,
"end": 327,
"text": "Radford et al., 2019;",
"ref_id": "BIBREF49"
},
{
"start": 328,
"end": 348,
"text": "Devlin et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 349,
"end": 367,
"text": "Liu et al., 2019b;",
"ref_id": "BIBREF37"
},
{
"start": 368,
"end": 387,
"text": "Clark et al., 2020;",
"ref_id": "BIBREF13"
},
{
"start": 388,
"end": 405,
"text": "Lan et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 406,
"end": 425,
"text": "Joshi et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 591,
"end": 616,
"text": "(Yang and Mitchell, 2017;",
"ref_id": "BIBREF66"
},
{
"start": 617,
"end": 636,
"text": "Bauer et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 637,
"end": 662,
"text": "Mihaylov and Frank, 2018;",
"ref_id": "BIBREF39"
},
{
"start": 663,
"end": 684,
"text": "Wang and Jiang, 2019;",
"ref_id": "BIBREF60"
},
{
"start": 869,
"end": 890,
"text": "(Peters et al., 2019;",
"ref_id": "BIBREF45"
},
{
"start": 891,
"end": 910,
"text": "Yang et al., 2019a;",
"ref_id": "BIBREF65"
},
{
"start": 911,
"end": 930,
"text": "Zhang et al., 2019;",
"ref_id": "BIBREF73"
},
{
"start": 931,
"end": 949,
"text": "Liu et al., 2019a)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use of External Knowledge for QA",
"sec_num": "7.2"
},
{
"text": "Another approach of using external knowledge includes retrieval of knowledge sentences from a text corpora (Das et al., 2019; Chen et al., 2017; Banerjee, 2019) , or knowledge triples from knowledge bases (Min et al., 2019; Wang et al., 2020) that are useful to answer a specific question. Another recent approach uses language model as knowledge bases (Petroni et al., 2019) , where they query a language model to un-mask a token given an entity and a relation in a predefined template. We use knowledge graphs to learn a self-supervised generative task to perform zero-shot multiple-choice QA in our work.",
"cite_spans": [
{
"start": 107,
"end": 125,
"text": "(Das et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 126,
"end": 144,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 145,
"end": 160,
"text": "Banerjee, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 205,
"end": 223,
"text": "(Min et al., 2019;",
"ref_id": "BIBREF40"
},
{
"start": 224,
"end": 242,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 353,
"end": 375,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use of External Knowledge for QA",
"sec_num": "7.2"
},
{
"text": "Over the years there are several methods discovered to perform the task of knowledge representation learning. Few of them are: TransE (Bordes et al., 2013) that views relations as a translation vector between head and tail entities, TransH (Wang et al., 2014 ) that overcomes TransE's inability to model complex relations, and TransD (Ji et al., 2015 ) that aims to reduce the parameters by proposing two different mapping matrices for head and tail. KRL has been used in various ways to generate natural answers (Yin et al., 2016; He et al., 2017) and generate factoid questions (Serban et al., 2016) . The task of Knowledge Graph Completion (Yao et al., 2019) is to either predict unseen relations r between two existing entities: (h, ?, t) or predict the tail entity t given the head entity and the query relation: (h, r, ?). Whereas we are learning to predict including the head, (?, r, t). In KTL, head and tail are not similar text phrases (context and answer) unlike Graph completion. We further modify TransD and adapt it to our KTL framework to perform zero-shot QA.",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 240,
"end": 258,
"text": "(Wang et al., 2014",
"ref_id": "BIBREF63"
},
{
"start": 334,
"end": 350,
"text": "(Ji et al., 2015",
"ref_id": "BIBREF26"
},
{
"start": 513,
"end": 531,
"text": "(Yin et al., 2016;",
"ref_id": "BIBREF71"
},
{
"start": 532,
"end": 548,
"text": "He et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 580,
"end": 601,
"text": "(Serban et al., 2016)",
"ref_id": "BIBREF55"
},
{
"start": 643,
"end": 661,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF70"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Representation Learning",
"sec_num": "7.3"
},
{
"text": "This work proposes a new framework of Knowledge Triplet Learning over knowledge graph entities and relations. We show learning all three possible functions, f r , f h , and f t help the model perform zero-shot multiple-choice question answering, where we do not use question-answering annotations. We learn from both human-annotated and synthetic knowledge graphs and evaluate our framework on the six question-answering datasets. Our framework achieves state-of-the-art in the zero-shot question answering task achieving performance like prior supervised work and sets a strong baseline in the few-shot question answering task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "The authors acknowledge support from the DARPA SAIL-ON program, and ONR award N00014-20-1-2332. The authors will also like to thank the anonymous reviewers, Tejas Gokhale, Arindam Mitra, and Sandipan Choudhuri, for their feedback on earlier drafts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synthetic QA corpora generation with roundtrip consistency",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6168--6173",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1620"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Alberti, Daniel Andor, Emily Pitler, Jacob De- vlin, and Michael Collins. 2019. Synthetic QA cor- pora generation with roundtrip consistency. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6168- 6173, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Asu at textgraphs 2019 shared task: Explanation regeneration using language models and iterative re-ranking",
"authors": [
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pratyay Banerjee. 2019. Asu at textgraphs 2019 shared task: Explanation regeneration using language mod- els and iterative re-ranking. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 78-84.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Knowledge fusion and semantic knowledge ranking for open domain question answering",
"authors": [
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.03101"
]
},
"num": null,
"urls": [],
"raw_text": "Pratyay Banerjee and Chitta Baral. 2020. Knowl- edge fusion and semantic knowledge ranking for open domain question answering. arXiv preprint arXiv:2004.03101.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Careful selection of knowledge to solve open book question answering",
"authors": [
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Arindam",
"middle": [],
"last": "Kumar Pal",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baral",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6120--6129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pratyay Banerjee, Kuntal Kumar Pal, Arindam Mitra, and Chitta Baral. 2019. Careful selection of knowl- edge to solve open book question answering. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6120- 6129.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Commonsense for generative multi-hop question answering tasks",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4220--4230",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1454"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 4220-4230, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Abductive commonsense reasoning",
"authors": [
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Ronan Le Bras",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.05739"
]
},
"num": null,
"urls": [],
"raw_text": "Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reason- ing. arXiv preprint arXiv:1908.05739.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in neural information processing systems, pages 2787-2795.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dynamic knowledge graph construction for zero-shot commonsense question answering",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03876"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut and Yejin Choi. 2019. Dynamic knowledge graph construction for zero-shot com- monsense question answering. arXiv preprint arXiv:1911.03876.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Comet: Commonsense transformers for automatic knowledge graph construction",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.05317"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for auto- matic knowledge graph construction. arXiv preprint arXiv:1906.05317.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Reading Wikipedia to answer opendomain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1870--1879",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1171"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Supervised and unsupervised transfer learning for question answering",
"authors": [
{
"first": "Yu-An",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1143"
]
},
"num": null,
"urls": [],
"raw_text": "Yu-An Chung, Hung-Yi Lee, and James Glass. 2018. Supervised and unsupervised transfer learning for question answering. In Proceedings of the 2018",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1585--1594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1585-1594, New Orleans, Louisiana. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2019. Electra: Pre-training text encoders as discriminators rather than genera- tors. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Think you have solved question answering? try arc, the ai2 reasoning challenge",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Cowhey",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Carissa",
"middle": [],
"last": "Schoenick",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.05457"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Combining retrieval, statistics, and inference to answer elementary science questions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sab- harwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In Thirtieth AAAI Conference on Artificial Intelli- gence.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-step retrieverreader interaction for scalable open-domain question answering",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Shehzaad",
"middle": [],
"last": "Dhuliawala",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.05733"
]
},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever- reader interaction for scalable open-domain question answering. arXiv preprint arXiv:1905.05733.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Transforming question answering datasets into natural language inference datasets",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. ArXiv, abs/1809.02922.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Simple and effective semi-supervised question answering",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Danish",
"middle": [],
"last": "Danish",
"suffix": ""
},
{
"first": "Dheeraj",
"middle": [],
"last": "Rajagopal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "582--587",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2092"
]
},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Danish Danish, and Dheeraj Ra- jagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 582-587, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Templatebased question generation from retrieved sentences for improved unsupervised question answering",
"authors": [
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.11892"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander R Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Template- based question generation from retrieved sentences for improved unsupervised question answering. arXiv preprint arXiv:2004.11892.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Vqa-lol: Visual question answering under the lens of logic",
"authors": [
{
"first": "Tejas",
"middle": [],
"last": "Gokhale",
"suffix": ""
},
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": ""
},
{
"first": "Yezhou",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Vqa-lol: Visual question an- swering under the lens of logic. In European confer- ence on computer vision. Springer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "107--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Noisecontrastive estimation: A new estimation principle for unnormalized statistical models",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Gutmann",
"suffix": ""
},
{
"first": "Aapo",
"middle": [],
"last": "Hyv\u00e4rinen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "297--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Gutmann and Aapo Hyv\u00e4rinen. 2010. Noise- contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artifi- cial Intelligence and Statistics, pages 297-304.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generating natural answers by incorporating copying and retrieving mechanisms in sequence-tosequence learning",
"authors": [
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Cao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "199--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-to- sequence learning. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 199- 208.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Knowledge graph embedding via dynamic mapping matrix",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "687--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 687-696.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "2020. A survey on knowledge graphs: Representation, acquisition and applications",
"authors": [
{
"first": "Shaoxiong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shirui",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "Marttinen",
"suffix": ""
},
{
"first": "Philip S",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.00388"
]
},
"num": null,
"urls": [],
"raw_text": "Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Martti- nen, and Philip S Yu. 2020. A survey on knowledge graphs: Representation, acquisition and applications. arXiv preprint arXiv:2002.00388.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Contextualized representations using textual encyclopedic knowledge",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Kenton Lee, Yi Luan, and Kristina Toutanova. 2020. Contextualized representations us- ing textual encyclopedic knowledge.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How much reading does reading comprehension require? a critical investigation of popular benchmarks",
"authors": [
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zachary C Lipton",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5010--5015",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divyansh Kaushik and Zachary C Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010-5015.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Qasc: A dataset for question answering via sentence composition",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Guerquin",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.11473"
]
},
"num": null,
"urls": [],
"raw_text": "Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2019. Qasc: A dataset for question answering via sentence compo- sition. arXiv preprint arXiv:1910.11473.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6086--6096",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1612"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Unsupervised question answering by cloze translation",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4896--4910",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1484"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. 2019. Unsupervised question answering by cloze translation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4896-4910, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Knowledge representation learning: A quantitative review",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.10901"
]
},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Xu Han, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2018. Knowledge representation learning: A quantitative review. arXiv preprint arXiv:1812.10901.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph",
"authors": [
{
"first": "Weijie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiruo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Haotang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.07606"
]
},
"num": null,
"urls": [],
"raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph. arXiv preprint arXiv:1909.07606.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Can a suit of armor conduct electricity? a new dataset for open book question answering",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.02789"
]
},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question answer- ing. arXiv preprint arXiv:1809.02789.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "821--832",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov and Anette Frank. 2018. Knowledge- able reader: Enhancing cloze-style reading compre- hension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 821-832, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Knowledge guided text retrieval and reading for open domain question answering",
"authors": [
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03868"
]
},
"num": null,
"urls": [],
"raw_text": "Sewon Min, Danqi Chen, Luke Zettlemoyer, and Han- naneh Hajishirzi. 2019. Knowledge guided text re- trieval and reading for open domain question answer- ing. arXiv preprint arXiv:1911.03868.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Exploring ways to incorporate additional knowledge to improve natural language commonsense question answering",
"authors": [
{
"first": "Arindam",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Kuntal",
"middle": [
"Kumar"
],
"last": "Pal",
"suffix": ""
},
{
"first": "Swaroop",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.08855"
]
},
"num": null,
"urls": [],
"raw_text": "Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Mishra, and Chitta Baral. 2019. Explor- ing ways to incorporate additional knowledge to im- prove natural language commonsense question an- swering. arXiv preprint arXiv:1909.08855.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A corpus and evaluation framework for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.01696"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and evaluation framework for deeper under- standing of commonsense stories. arXiv preprint arXiv:1604.01696.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Probing neural network comprehension of natural language arguments",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Niven",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4658--4664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language ar- guments. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4658-4664.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Logan",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "43--54",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1005"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 43-54, Hong Kong, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Language models as knowledge bases? arXiv preprint",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.01066"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Hypothesis only baselines in natural language inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "180--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language in- ference. In Proceedings of the Seventh Joint Con- ference on Lexical and Computational Semantics, pages 180-191.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Training question answering models from synthetic data",
"authors": [
{
"first": "Raul",
"middle": [],
"last": "Puri",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Spring",
"suffix": ""
},
{
"first": "Mostofa",
"middle": [],
"last": "Patwary",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Shoeybi",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.09599"
]
},
"num": null,
"urls": [],
"raw_text": "Raul Puri, Ryan Spring, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro. 2020. Training ques- tion answering models from synthetic data. arXiv preprint arXiv:2002.09599.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.03822"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. arXiv preprint arXiv:1806.03822.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Winogrande: An adversarial winograd schema challenge at scale",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.10641"
]
},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhaga- vatula, and Yejin Choi. 2019. Winogrande: An ad- versarial winograd schema challenge at scale. arXiv preprint arXiv:1907.10641.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Atomic: An atlas of machine commonsense for if-then reasoning",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. Atomic: An atlas of machine commonsense for if-then reasoning. ArXiv, abs/1811.00146.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Socialiqa: Commonsense reasoning about social interactions",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Lebras",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09728"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019b. Socialiqa: Com- monsense reasoning about social interactions. arXiv preprint arXiv:1904.09728.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional atten- tion flow for machine comprehension. ArXiv, abs/1611.01603.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus",
"authors": [
{
"first": "Iulian",
"middle": [
"Vlad"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "588--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alberto Garcia-Duran, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 588-598.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Unsupervised commonsense question answering with self-talk",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Ronan",
"middle": [
"Le"
],
"last": "Bras",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05483"
]
},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Peter West, Ronan Le Bras, Chan- dra Bhagavatula, and Yejin Choi. 2020. Unsuper- vised commonsense question answering with self- talk. arXiv preprint arXiv:2004.05483.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "PullNet: Open domain question answering with iterative retrieval on knowledge bases and text",
"authors": [
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tania",
"middle": [],
"last": "Bedrax-Weiss",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2380--2390",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1242"
]
},
"num": null,
"urls": [],
"raw_text": "Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2380- 2390, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00937"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A ques- tion answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Zero-shot visual question answering",
"authors": [
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "van den Hengel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.05546"
]
},
"num": null,
"urls": [],
"raw_text": "Damien Teney and Anton van den Hengel. 2016. Zero- shot visual question answering. arXiv preprint arXiv:1611.05546.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Explicit utilization of general knowledge in machine reading comprehension",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2263--2272",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1219"
]
},
"num": null,
"urls": [],
"raw_text": "Chao Wang and Hui Jiang. 2019. Explicit utilization of general knowledge in machine reading compre- hension. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2263-2272, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Multiperspective context aggregation for semi-supervised cloze-style reading comprehension",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ruoyu",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Jingming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "857--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Wang, Sujian Li, Wei Zhao, Kewei Shen, Meng Sun, Ruoyu Jia, and Jingming Liu. 2018. Multi- perspective context aggregation for semi-supervised cloze-style reading comprehension. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 857-867, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Knowledge graph embedding by translating on hyperplanes",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Twenty-Eighth AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Twenty-Eighth AAAI con- ference on artificial intelligence.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Rémi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Enhancing pre-trained language representations with rich knowledge for machine reading comprehension",
"authors": [
{
"first": "An",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qiaoqiao",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2346--2357",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1226"
]
},
"num": null,
"urls": [],
"raw_text": "An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019a. En- hancing pre-trained language representations with rich knowledge for machine reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2346-2357, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Leveraging knowledge bases in LSTMs for improving machine reading",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1436--1446",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1132"
]
},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in LSTMs for improving machine reading. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436-1446, Van- couver, Canada. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019b. Xlnet: Generalized autoregressive pretrain- ing for language understanding. In Advances in neural information processing systems, pages 5754- 5764.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Semi-supervised QA with generative domain-adaptive nets",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1040--1050",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1096"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised QA with generative domain-adaptive nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1040-1050, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.09600"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answer- ing. arXiv preprint arXiv:1809.09600.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "KG-BERT: BERT for knowledge graph completion",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03193"
]
},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kg- bert: Bert for knowledge graph completion. arXiv preprint arXiv:1909.03193.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Neural generative question answering",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2972--2978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In Proceedings of the Twenty- Fifth International Joint Conference on Artificial In- telligence, pages 2972-2978.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "SWAG: A large-scale adversarial dataset for grounded commonsense inference",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.05326"
]
},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Knowledge Triplet Learning Framework.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Effect of Increasing KTL training samples on the target zero-shot question answering Train split accuracy.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "For the function f r (h, t) \u21d2 r, we mask all the tokens present in r, i.e, [cls][h][sep][mask][sep][t][sep]"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td/><td colspan=\"4\">ATOMIC QASC-CCG OMCS-CCG DSG</td></tr><tr><td colspan=\"2\">Train Size 893393</td><td>1662308</td><td>914442</td><td>1019030</td></tr><tr><td>Val Size</td><td>10000</td><td>10000</td><td>10000</td><td>10000</td></tr><tr><td colspan=\"2\">H Length 11.2</td><td>10.5</td><td>9.6</td><td>10.3</td></tr><tr><td colspan=\"2\">R Length 6.5</td><td>10.3</td><td>9.4</td><td>10.2</td></tr><tr><td>T Length</td><td>2</td><td>1.5</td><td>2</td><td>10.4</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Dataset Statistics for the seven QA tasks. Context is not present in five of the tasks. The KTL Graph refers to the graph over which we learn. CCG is the Common Concept Graph. DSG is the Directed Story Graph. C, Q, A is the average number of words in the context, question, and answer. aNLI and SocialIQA Test set size is hidden."
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": ""
},
"TABREF5": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Results for the Unsupervised QA task. Mean accuracy on Train, Dev and Test is reported. For Self-Talk and BIDAF Sup. we report the Dev and Test splits, for Roberta Sup. we report Test split. Test is reported if labels are present. Best scores, Second Best."
},
"TABREF7": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": ""
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>: Effect of Question to Hypothesis Conversion</td></tr><tr><td>(Hypo), Curriculum Filtering (CF) and providing the</td></tr><tr><td>Gold Fact context on the Validation split.</td></tr></table>",
"type_str": "table",
"num": null,
"text": ""
}
}
}
}