{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:39.691720Z"
},
"title": "Knowledge Informed Semantic Parsing for Conversational Question Answering",
"authors": [
{
"first": "Raghuveer",
"middle": [],
"last": "Thirukovalluru",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mukund",
"middle": [],
"last": "Sridhar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dung",
"middle": [],
"last": "Thai",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shruti",
"middle": [],
"last": "Chanumolu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nicholas",
"middle": [],
"last": "Monath",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shankar",
"middle": [],
"last": "Ananthakrishnan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Umass",
"middle": [],
"last": "Amherst",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Smart assistants are tasked to answer various questions regarding world knowledge. These questions range from the retrieval of simple facts to complex, multi-hop questions followed by various operators (e.g., filter, argmax). Semantic parsing has emerged as the state-of-the-art approach for answering these kinds of questions by forming queries to extract information from knowledge bases (KBs). Specifically, neural semantic parsers (NSPs) effectively translate natural questions to logical forms, which execute on the KB and yield the desired answers. Yet, NSPs suffer from non-executable logical forms for some instances, as elements referenced in the generated logical forms might be missing due to the incompleteness of KBs. Intuitively, knowing the KB structure informs the NSP of changes in the global logical form structure with respect to changes in KB instances. In this work, we propose a novel knowledge-informed decoder variant of NSP. We consider the conversational question answering setting, where a natural language query, its context and its final answers are available at training time. Experimental results show that our method outperforms strong baselines by 1.8 F1 points overall across 10 types of questions of the CSQA dataset. Especially for the \"Logical Reasoning\" category, our model improves by 7 F1 points. Furthermore, our results are achieved with 90.3% fewer parameters, allowing faster training on large-scale datasets.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Smart assistants are tasked to answer various questions regarding world knowledge. These questions range from the retrieval of simple facts to complex, multi-hop questions followed by various operators (e.g., filter, argmax). Semantic parsing has emerged as the state-of-the-art approach for answering these kinds of questions by forming queries to extract information from knowledge bases (KBs). Specifically, neural semantic parsers (NSPs) effectively translate natural questions to logical forms, which execute on the KB and yield the desired answers. Yet, NSPs suffer from non-executable logical forms for some instances, as elements referenced in the generated logical forms might be missing due to the incompleteness of KBs. Intuitively, knowing the KB structure informs the NSP of changes in the global logical form structure with respect to changes in KB instances. In this work, we propose a novel knowledge-informed decoder variant of NSP. We consider the conversational question answering setting, where a natural language query, its context and its final answers are available at training time. Experimental results show that our method outperforms strong baselines by 1.8 F1 points overall across 10 types of questions of the CSQA dataset. Especially for the \"Logical Reasoning\" category, our model improves by 7 F1 points. Furthermore, our results are achieved with 90.3% fewer parameters, allowing faster training on large-scale datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge base question answering (KBQA) has emerged as an important research topic over the past few years (Sun et al., 2018; Chakraborty et al., 2019; Sun et al., 2019), alongside question answering over text corpora. In KBQA, world knowledge is given in the form of multi-relational graph databases (Lehmann et al., 2015) with millions of entities and interrelations between them. When a natural language question arrives, KBQA systems analyse relevant facts in the knowledge bases and derive the answers. In the presence of knowledge bases, question answering results are often more interpretable and modifiable. For example, the question \"Who started his career at Manchester United in 1992?\" can be answered by fact triples such as (\"David Beckham\", member of sports team, \"Manchester United\"). This fact can be updated as the world knowledge changes, while it might be non-trivial to achieve the same effect on text corpora. Likewise, KBQA systems face their own challenges (Chakraborty et al., 2019), especially in real-world, conversational settings.",
"cite_spans": [
{
"start": 108,
"end": 126,
"text": "(Sun et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 127,
"end": 152,
"text": "Chakraborty et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 153,
"end": 170,
"text": "Sun et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 327,
"end": 348,
"text": "Lehmann et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 1009,
"end": 1035,
"text": "(Chakraborty et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In real-world settings, KBQA systems need to perform multi-hop reasoning over chains of supporting facts and carry out various operations within the context of a conversation. For instance, answering the follow-up question \"When did he win his first championship?\" might require identifying the player previously mentioned, all of his sports teams, and the dates those teams won their championships. Then, argmax and filter operators are applied on the returned dates, yielding answers, i.e., \"1999\" for \"David Beckham\". Semantic parsing provides a weak supervision framework to learn to perform all these reasoning steps from just the question-answer pairs. Semantic parsers define a set of rules (or grammar) for generating logical forms from natural language questions. Candidate logical forms are executable queries on the knowledge bases that yield the corresponding answers. Neural semantic parsers (NSPs) (Liang et al., 2016; Guo et al., 2018) employ a neural network to translate natural language questions into logical forms. NSPs have shown good performance on KBQA tasks (Liang et al., 2016) and have been further improved with reinforcement learning (Guo et al., 2018), multi-task learning, and most recently meta-learning (Hua et al., 2020). Most previous works place more emphasis on modeling the reasoning behavior given in the questions than on interactions with the KB. In this work, we propose a KB-aware NSP variant (KISP) to fill this gap.",
"cite_spans": [
{
"start": 911,
"end": 931,
"text": "(Liang et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 932,
"end": 949,
"text": "Guo et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 1081,
"end": 1101,
"text": "(Liang et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 1151,
"end": 1169,
"text": "(Guo et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 1225,
"end": 1243,
"text": "(Hua et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the main challenges in learning KBQA systems is to adapt to structural changes of the relevant sub-knowledge base. Different reasoning behaviors might apply to similar questions with respect to different sub-knowledge bases. For example, a similar question \"When did Tiger Woods win his first championship?\" would require a different reasoning chain since he did not participate in a sports team. Structural change of the sub-KB is a common phenomenon due to the incomplete nature of knowledge bases. In such cases, knowing the attributes and relations would inform NSPs of changes in logical forms with respect to specific relevant KB entities. To address this problem, we propose an NSP with a KB-informed decoder that utilizes local knowledge base structure encoded in pre-trained KB embeddings. Our model collects all relevant KB artifacts and integrates their embeddings into each decoding step, iteratively. We also introduce an attention layer on a set of associated KB random walks as a k-step look-ahead that prevents the decoder from going into KB regions where generated logical forms are not executable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Pre-trained KB embeddings were shown to improve multi-hop KBQA where answers are entities and no operations are involved (Saxena et al., 2020). In this paper, we demonstrate our work on the full KBQA setting with 10 question categories and no constraints on the answers (Saha et al., 2018). In contrast, (Saxena et al., 2020) evaluate only 2-hop questions (Yih et al., 2016) and 2- and 3-hop questions with limited relation types (Zhang et al., 2018). Our model is also the first NSP variant that utilizes pre-trained features for logical form generation. CARTON uses an updated action grammar with stacked pointer networks. LASAGNE is an extension of CARTON which further includes a graph attention network to exploit correlations between entities and predicates. Empirical results show that our model improves upon the MaSP model, a strong baseline for the CSQA dataset, by an absolute 1.8 F1 points and 1.5% accuracy on the two sets of questions, respectively.",
"cite_spans": [
{
"start": 121,
"end": 142,
"text": "(Saxena et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 273,
"end": 292,
"text": "(Saha et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 301,
"end": 322,
"text": "(Saxena et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 422,
"end": 442,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Further, we find that by incorporating knowledge-graph information we can match the performance of much larger pre-trained encoder models while using 90.3% fewer parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first formally describe our task and the Neural Semantic Parser (NSP) on which our work is based.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Knowledge Graph: Let E = {e_0, ..., e_N} be a set of given entities, and let R = {r_0, ..., r_M} be a set of relations. A knowledge graph G is a set of fact triples in E \u00d7 R \u00d7 E. A triple is represented as (h, r, t) where h, t \u2208 E and r \u2208 R. There is an extensive literature on knowledge graph representations (Ji et al., 2020; Dai et al., 2020) that encode its semantics and structure. In this work, we use the pre-trained knowledge graph embeddings from Pytorch-BigGraph (Lerer et al., 2019).",
"cite_spans": [
{
"start": 307,
"end": 324,
"text": "(Ji et al., 2020;",
"ref_id": null
},
{
"start": 325,
"end": 341,
"text": "Dai et al., 2020",
"ref_id": null
},
{
"start": 472,
"end": 492,
"text": "(Lerer et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In conversational question answering (CQA), the goal is to answer a question q within the context of the conversation history C. The question q and the history C are usually concatenated for handling ellipsis and coreference, forming the input X as [C; q]. At training time, a set of answering entities A is also given. The set A comprises entities that resolve to the answer depending on the answer's type. For example, the answer to a \"Simple Question\" is a list of entities, while the answer to a \"Verification Question\" is Yes/No, depending on whether the set A is empty or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversational Question Answering:",
"sec_num": null
},
{
"text": "The semantic parsing approach to CQA produces the answer set A by first generating a logical form Y. Formally, a logical form Y is a sequence of actions (y_1, y_2, ..., y_n) where the arguments of these actions can be constants (i.e., numbers, dates) or KG instances (i.e., entities, relations, types). The set of actions is defined by a grammar S. We consider the weak-supervision setting where the ground-truth logical form Y is not available. Instead, we generate candidates for Y by performing breadth-first search (BFS) based on grammar S over the knowledge graph G and keeping the candidate logical forms that yield the answer set A (Guo et al., 2018). Given the input X and the labeled logical form Y, we train an encoder-decoder neural network to generate logical forms given the question and its conversational context.",
"cite_spans": [
{
"start": 618,
"end": 636,
"text": "(Guo et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Parser",
"sec_num": "2.1"
},
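The BFS-based candidate search described above can be sketched as follows. The toy grammar, action names, and length cutoff are hypothetical stand-ins; a real system would additionally execute each completed candidate on the KB and keep only those yielding the answer set A.

```python
from collections import deque

# Hypothetical toy grammar S: each action maps to its allowed successors.
# An empty successor list marks a complete (terminal) action sequence.
GRAMMAR = {
    "START": ["find", "count"],
    "find": ["ENTITY"],
    "count": ["find"],
    "ENTITY": [],
}

def bfs_candidates(max_len=3):
    """Enumerate candidate action sequences up to max_len by BFS over S."""
    out, queue = [], deque([["START"]])
    while queue:
        seq = queue.popleft()
        successors = GRAMMAR[seq[-1]]
        if not successors:          # terminal: a complete candidate
            out.append(seq)
        elif len(seq) < max_len:    # otherwise expand, bounded by max_len
            for action in successors:
                queue.append(seq + [action])
    return out

cands = bfs_candidates()
assert ["START", "find", "ENTITY"] in cands
```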
{
"text": "Encoder: The input X is formatted in BERT style. Then, it is fed into a Transformer-based encoder network ENC, producing a sequence of encoded states H = ENC(X) = (h_{[CLS]}, h_0, ...).",
"cite_spans": [
{
"start": 168,
"end": 173,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Parser",
"sec_num": "2.1"
},
{
"text": "The decoder is a Transformer-based model with attention. It takes the input representation from the encoder h_{[CLS]} and the previous decoding state s_{i-1} to produce the target action y_i.",
"cite_spans": [
{
"start": 110,
"end": 115,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder:",
"sec_num": null
},
{
"text": "\\Pr_{Y \\sim S}(Y \\mid X) = \\prod_{y_i \\in S} \\Pr(y_i \\mid s_{i-1}, H) \\qquad (1) \\qquad \\Pr(y_i \\mid s_{i-1}, H) = \\mathrm{softmax}(\\mathrm{ATTN}([s_{i-1}; h_{[CLS]}], H))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder:",
"sec_num": null
},
{
"text": "Classifiers: The decoder is accompanied by a set of classifiers that predict the arguments for the decoder's actions at each decoding step. Our base NSP employs FFNNs for relations and entity types classifiers; and pointer networks for entities and constants mentioned in the question. At each decoding step, these classifiers produce an entity e i , an entity type t i , a relation r i , and a constant c i . The logical form action at time step i is a tuple consists of y i and its arguments within {e i , t i , r i , c i } defined by the grammar S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder:",
"sec_num": null
},
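A minimal sketch of the decoding step in Eq. 1 follows. `W_q`, `W_a`, and all dimensions are hypothetical stand-ins for the model's learned parameters, and the attention is reduced to a single-head dot-product form rather than the full Transformer attention.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_actions, n_tokens = 8, 5, 6

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Hypothetical learned parameters: W_q projects [s_{i-1}; h_[CLS]] into the
# query space; W_a maps the attended context to logits over grammar actions.
W_q = rng.normal(size=(d, 2 * d))
W_a = rng.normal(size=(n_actions, d))

def decode_step(s_prev, h_cls, H):
    """One step of Eq. 1: Pr(y_i | s_{i-1}, H) via attention over H."""
    query = W_q @ np.concatenate([s_prev, h_cls])  # [s_{i-1}; h_[CLS]]
    weights = softmax(H @ query)                   # attention over encoder states
    context = weights @ H                          # attended summary of H
    return softmax(W_a @ context)                  # distribution over actions

H = rng.normal(size=(n_tokens, d))                 # toy encoder states ENC(X)
p = decode_step(rng.normal(size=d), H[0], H)       # h_[CLS] taken as H[0] here
assert p.shape == (n_actions,)
```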
{
"text": "In this section, we introduce a knowledge-informed decoder that utilizes KG information to generate logical forms. We propose a knowledge injection layer that incorporates KG embeddings into the decoder state at each decoding step. To further inform the decoder with information about the expected structure of the KG, we propose an attention layer on random, k-hops knowledge walks from entities we encounter at each decoding step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge-Informed Decoder",
"sec_num": "3"
},
{
"text": "NSP decoders only look at the encoded question and the previous state of decoding to decide the next action. Information of the KB instances (i.e., entities, types, or relations) being considered so far could improve this decision making process. Therefore, at the decoding step i where the action involves a KB instance, we propose a Knowledge Injection Layer (KIL) to propagate KB information to the sub-sequence steps. KIL takes in the KB classifiers predictions, incorporates their embeddings into the current encoding state and forwards it to the next decoding step. Eq. 1 becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Layer(KIL)",
"sec_num": "3.1"
},
{
"text": "\\Pr_{Y \\sim S}(Y \\mid X) = \\prod_{y_i \\in S} \\Pr(y_i \\mid s^{*}_{i-1}, H) \\qquad (2) \\qquad s^{*}_{i-1} = \\mathrm{KIL}(s_{i-1}) = \\mathrm{FFN}([s_{i-1}; \\mathrm{EMB}(v_{i-1})])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Layer(KIL)",
"sec_num": "3.1"
},
{
"text": "where v_{i-1} is the corresponding argument of y_{i-1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Layer(KIL)",
"sec_num": "3.1"
},
{
"text": "and v_{i-1} \u2208 E \u222a R, i.e., v_i \u2208 {e_i, t_i, r_i, c_i}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Layer(KIL)",
"sec_num": "3.1"
},
{
"text": "At step j where j > i, the decoder is informed of preceding KB instances, and is able to adapt to the specific sub-KB structure. We find that in cases where there are multiple entities in context, having the right entity embedding at time step j helps logical form generation in the upcoming steps. The entity embedding carries information about the type of the entity, which allows our model to use more appropriate predicates for ambiguous mentions. We empirically show that KIL improves the exact match accuracy of the logical form attributes (logical form without KB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Layer(KIL)",
"sec_num": "3.1"
},
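The Knowledge Injection Layer of Eq. 2 can be sketched as follows. The KG embedding table stands in for the pre-trained PyTorch-BigGraph embeddings used in the paper, the entity/predicate IDs are made up, and a single tanh layer stands in for the FFN.

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_kg = 8, 4

# Hypothetical pre-trained KG embedding table (a stand-in for the
# PyTorch-BigGraph embeddings); the Wikidata-style IDs are made up.
kg_emb = {"Q1001": rng.normal(size=d_kg), "P106": rng.normal(size=d_kg)}

# Weight of the injection FFN (single layer here for brevity).
W = rng.normal(size=(d, d + d_kg))

def kil(s_prev, v_prev):
    """s*_{i-1} = FFN([s_{i-1}; EMB(v_{i-1})]): inject the KG embedding of
    the previous step's argument into the decoder state."""
    x = np.concatenate([s_prev, kg_emb[v_prev]])
    return np.tanh(W @ x)

s_star = kil(rng.normal(size=d), "Q1001")
assert s_star.shape == (d,)
```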
{
"text": "Now that the decoder is aware of the previous KB instances, it is also useful to peek at the possible reasoning chains coming out of the current decoding state. We do this to avoid reasoning paths that lead to a non-executable region where the logical form is invalid with respect to the KB. Therefore, we propose an attention look-ahead layer to inspect the upcoming KB structures before making the action prediction. We first generate a set of random walks on the KG from the entities and relations predicted at the current decoding step. We then apply the attention look-ahead layer on these KG walks to obtain a representation of the expected KG structures. This representation is then fed back to the decoder to predict the action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention on KG Walks (AKW)",
"sec_num": "3.2"
},
{
"text": "\\Pr_{Y \\sim S}(Y \\mid X) = \\prod_{y_i \\in S} \\Pr(y_i \\mid s^{*}_{i-1}, H, \\mathrm{RANDWALK}(v)) \\qquad \\mathrm{RANDWALK}(v) = \\mathrm{ATTN}(\\{\\mathrm{EMB}(p_j \\sim G(v))\\}_{j=0..k})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention on KG Walks (AKW)",
"sec_num": "3.2"
},
{
"text": "where v is one of the entities in the question and p_j is a random walk path on the KB starting from v, denoted as G(v). Here we use one-hop random walks from predicates found in the input, though any type of random walk could be used. With the two proposed layers, our NSP decoder is now fully informed of the past and the future KB structures. We demonstrate that our decoder variant achieves better performance on various question categories. Furthermore, we show that the pre-trained KG embeddings do significant heavy lifting in representing KB information within the decoder states, resulting in fewer model parameters and less required training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention on KG Walks (AKW)",
"sec_num": "3.2"
},
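The random-walk attention above can be sketched as follows. The toy graph, entity/relation IDs, and the choice to embed a path as the mean of its parts are all illustrative assumptions; the attention is again a single-head dot-product stand-in for ATTN.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# Toy KG adjacency: entity -> [(relation, neighbor)]; all IDs hypothetical.
graph = {"Q_beckham": [("P54", "Q_manu"), ("P27", "Q_uk")],
         "Q_manu": [("P286", "Q_ferguson")]}
emb = {k: rng.normal(size=d)
       for k in ["Q_beckham", "Q_manu", "Q_uk", "Q_ferguson",
                 "P54", "P27", "P286"]}

def sample_walk(v, hops=1):
    """Sample one random walk p_j ~ G(v); embed the path as the mean of
    the embeddings of the entities and relations along it (an assumption)."""
    parts = [emb[v]]
    for _ in range(hops):
        if v not in graph:
            break
        r, v = graph[v][rng.integers(len(graph[v]))]
        parts += [emb[r], emb[v]]
    return np.mean(parts, axis=0)

def randwalk(v, query, k=4):
    """RANDWALK(v) = ATTN({EMB(p_j ~ G(v))}_{j=0..k}): attention-pool
    k sampled walk embeddings against a decoder-state query."""
    P = np.stack([sample_walk(v) for _ in range(k)])
    scores = P @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ P

out = randwalk("Q_beckham", rng.normal(size=d))
assert out.shape == (d,)
```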
{
"text": "Dataset and Evaluation We evaluate our approach on the Complex Sequential Question Answering (CSQA) dataset. CSQA consists of 1.6M question-answer pairs spread across 200K dialogues. It has a 152K/16K/28K train/val/test split. More details on the dataset and evaluation metrics used are presented in Section A.1 of the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Our model (code: https://github.com/raghavlite/kisp) outperforms the MaSP model by 1.8 absolute points in F1-score for entity-answer questions and 1.5 absolute points in accuracy for the boolean/counting categories. KISP shows significant improvements in Table 1 compared to MaSP. In more complex question types such as 'Logical Reasoning' and 'Verification', which require reasoning over multiple tuples in the KG, and in questions requiring operations like counting, our model outperforms the baseline by more than 10 points. Table 1 compares our results with MaSP; the Appendix contains additional analysis. Our model also beats CARTON on the entity-answer questions despite its updated action grammar. For boolean and count type questions, the additional action vocabulary helps CARTON outperform our system. We will extend KISP to use this additional action vocabulary in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 525,
"end": 532,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.1"
},
{
"text": "KG informed decoding with small models. A significant performance gain is expected in smaller models from the use of knowledge graph information. We test this hypothesis by drastically reducing the size of the KISP encoder. This small version of KISP, with only 9.7% of the baseline parameters, slightly outperforms the baseline BERT model on overall F1-score. The gain comes from the fact that our models receive a significant signal from KIL to make a more informed decision about valid actions/types in the next step, even without much knowledge from the encoder attention. Low resource settings. A semantic parsing system as described above typically requires annotated golden logical forms for training. Logical form annotation is a resource-intensive process (Berant et al., 2013; Zhong et al., 2017).",
"cite_spans": [
{
"start": 763,
"end": 784,
"text": "(Berant et al., 2013;",
"ref_id": "BIBREF0"
},
{
"start": 785,
"end": 804,
"text": "Zhong et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2"
},
{
"text": "Using brute-force computation to find these logical forms is also difficult, and this process often results in spurious logical forms. This calls for models which can work with very few training examples. Hence we evaluate the effectiveness of KISP in low resource settings where only a fraction of the data is used for training. Table 3 shows that KISP is able to outperform MaSP in these data-constrained cases. The gap between MaSP and KISP widens in these low resource settings, further justifying our model. Impact of KIL and AKW To further understand how each classifier on the decoder benefits from the knowledge graph, we look at the accuracies of these classifiers on the evaluation set. Table 4 displays accuracies of the five classifiers from Eq. 1 around logical form generation of different models. KISP does a better job at predicting the overall skeleton of the logical form (all actions other than e_i, t_i, r_i, c_i). We observe that attending to the knowledge graph improves the logical form skeleton by up to 2.3 points. As shown in Examples 3 and 4 of the Appendix, the count and filter actions within the logical form are better predicted by KISP. KIL provides the entity embedding for the entity of interest at the current time step, which helps the model pick the right predicates in the following steps in ambiguous cases. Cases requiring reasoning benefit from seeing random walks around entities in context, provided by AKW. These lead to better overall sketch accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 350,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 714,
"end": 829,
"text": "Table 4 displays accuracies of the five classifiers from Eq. 1 around logical form generation of different models.",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2"
},
{
"text": "KISP also points to the correct entity more accurately. Pointing to the right entity can have cascading effects on logical form prediction, as shown by the numbers in Table 4. KISP does a better job with the entity pointer, improving by almost 4 points. We attribute this to the KIL system of KISP, which provides the KG embedding for the entity of interest at a given time step; this helps the decoder's entity pointer mechanism.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2"
},
{
"text": "Entity Linking Errors We follow Sheang (2019) in using a joint mention and type classifier followed by an inverse-index entity linker on the input using the encoder representations. The entity pointer classifier described in earlier sections looks at these entities in a sentence and points to one among them. We found that a large number of errors arose from this inverse index. Recent work also points this out and uses a better entity linker. Improving this module should add significantly to final performance and hence is a very interesting direction for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2"
},
{
"text": "We introduced a neural semantic parsing decoder that uses additional knowledge graph information for conversational QA. Results show that KISP can significantly boost performance on complex multi-hop question types like logical reasoning questions. Our method improves over strong baseline methods like MaSP. Finally, we presented a smaller version of our model that is approximately 10x smaller without any performance degradation compared to a system that does not use KG-informed decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We evaluate our approach on the Complex Sequential Question Answering (CSQA) dataset. CSQA consists of 1.6M question-answer pairs spread across 200K dialogues. It has a 152K/16K/28K train/val/test split. The dataset's knowledge graph is built on Wikidata and represented with triples. The KB consists of 21.2M triples over 12.8M entities, 3054 distinct entity types, and 567 distinct predicates. There are 10 different question categories split into two groups. Answers to the first group of questions are a list of entities. Question categories of this group are evaluated by the macro F1 score between predicted entities and golden entities. Answers to question categories in the second group are either counts or boolean. This group is evaluated by accuracy. Overall scores for each group are the weighted average metrics of all the categories in the group. We refer the reader to Saha et al. (2018) for a more detailed understanding of the different categories of questions. The following sections contain training/eval specifics.",
"cite_spans": [
{
"start": 885,
"end": 903,
"text": "Saha et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Dataset and Evaluation",
"sec_num": null
},
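As a rough sketch of the per-question score underlying the macro F1 used for the first group (the macro score then averages this per-question F1 over all questions in a category); the entity IDs are hypothetical.

```python
def f1_entities(pred, gold):
    """F1 between predicted and golden entity sets for one question."""
    tp = len(pred & gold)                      # correctly predicted entities
    p = tp / len(pred) if pred else 0.0        # precision
    r = tp / len(gold) if gold else 0.0        # recall
    return 2 * p * r / (p + r) if p + r else 0.0

# One shared entity out of two predicted and two golden -> P = R = F1 = 0.5.
assert f1_entities({"Q1", "Q2"}, {"Q2", "Q3"}) == 0.5
assert f1_entities(set(), {"Q1"}) == 0.0
```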
{
"text": "We followed prior work to search for logical forms and create the training data. Exact hyperparameters used in the experiments are mentioned below. We followed Saha et al. (2018) for evaluation metrics. Macro precision and macro recall were used when the answer was a list of entities.",
"cite_spans": [
{
"start": 149,
"end": 167,
"text": "Saha et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Training details & Evaluation Metrics",
"sec_num": null
},
{
"text": "For questions with answer type boolean/number, we use accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Training details & Evaluation Metrics",
"sec_num": null
},
{
"text": "Training times of different models are shown in the Table. There are some known inefficiencies in the code, some from design and others conceptual. We intend to improve training time in future work by incorporating more end-to-end methods that will reduce GPU-to-CPU and CPU-to-GPU communication, and also through some design changes in the short term.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 48,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Training time Analysis",
"sec_num": null
},
{
"text": "We identify examples to show the performance improvement of KISP models in predicting the correct answer and logical form. As shown in Table 6 below, KISP models for these examples do a better job at sketch, entity, num, type, and predicate classification compared to MaSP. The coloured images in Figures 2-6 show the differences between MaSP and KISP models. For each example we show the golden logical form tree (also predicted by one of the KISP models), MaSP's logical form, and the mistakes made by the baseline in red.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 6",
"ref_id": null
},
{
"start": 290,
"end": 298,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.4 Logical form Analysis",
"sec_num": null
},
{
"text": "\u2022 Utterance Q: Which works of art stars Ji\u0159\u00ed R\u016f\u017ei\u010dka as actor and originated in Germany ? A: Three Nuts for Cinderella Q: Who was that work of art composed by ? A: Karel Svoboda",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example1",
"sec_num": null
},
{
"text": "\u2022 Logical form @ Gold find({Three Nuts for Cinderella}, Composer)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example1",
"sec_num": null
},
{
"text": "\u2022 Utterance Q: What is the job of Joe Falcon ? A: musician Q: What can be considered as category for Joe Falcon ? Example2 (Figure 3) and Example5 (Figure 6) show improvement in logical form entity and predicate instantiation compared to MaSP. We notice improvement in the logical form skeleton for the KISP(SKI+AKW)BERT model in Example4 (Figure 5). Example 3 (Figure 4) is a case where MaSP gets the right answer despite the incorrect logical form.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 132,
"text": "(Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 150,
"end": 158,
"text": "Figure 6",
"ref_id": null
},
{
"start": 334,
"end": 344,
"text": "(Figure 5)",
"ref_id": null
},
{
"start": 359,
"end": 367,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example2",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic parsing on freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural lan- guage processing, pages 1533-1544.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to neural network based approaches for question answering over knowledge graphs",
"authors": [
{
"first": "Nilesh",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Lukovnikov",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Maheshwari",
"suffix": ""
},
{
"first": "Priyansh",
"middle": [],
"last": "Trivedi",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Asja",
"middle": [],
"last": "Fischer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.09361"
]
},
"num": null,
"urls": [],
"raw_text": "Nilesh Chakraborty, Denis Lukovnikov, Gaurav Ma- heshwari, Priyansh Trivedi, Jens Lehmann, and Asja Fischer. 2019. Introduction to neural network based approaches for question answering over knowledge graphs. arXiv preprint arXiv:1907.09361.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "2020. A survey on knowledge graph embedding: Approaches, applications and benchmarks. Electronics",
"authors": [
{
"first": "Yuanfei",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Shiping",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Neal",
"middle": [
"N"
],
"last": "Xiong",
"suffix": ""
},
{
"first": "Wenzhong",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanfei Dai, Shiping Wang, Neal N Xiong, and Wen- zhong Guo. 2020. A survey on knowledge graph em- bedding: Approaches, applications and benchmarks. Electronics, 9(5):750.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Dialog-to-action: Conversational question answering over a large-scale knowledge base",
"authors": [
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2942--2951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2018. Dialog-to-action: Conversational ques- tion answering over a large-scale knowledge base. In Advances in Neural Information Processing Sys- tems, pages 2942-2951.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Coupling retrieval and meta-learning for context-dependent semantic parsing",
"authors": [
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2019. Coupling retrieval and meta-learning for context-dependent semantic parsing. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Retrieve, program, repeat: Complex knowledge base question answering via alternate meta-learning",
"authors": [
{
"first": "Yuncheng",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Yuan-Fang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Guilin",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.15875"
]
},
"num": null,
"urls": [],
"raw_text": "Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, and Wei Wu. 2020. Retrieve, program, repeat: Complex knowledge base question answer- ing via alternate meta-learning. arXiv preprint arXiv:2010.15875.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "2020. A survey on knowledge graphs: Representation, acquisition and applications",
"authors": [
{
"first": "Shaoxiong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shirui",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "Marttinen",
"suffix": ""
},
{
"first": "Philip S",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.00388"
]
},
"num": null,
"urls": [],
"raw_text": "Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Martti- nen, and Philip S Yu. 2020. A survey on knowledge graphs: Representation, acquisition and applications. arXiv preprint arXiv:2002.00388.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Conversational question answering over knowledge graphs with transformer and graph attention networks",
"authors": [
{
"first": "Endri",
"middle": [],
"last": "Kacupaj",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Plepi",
"suffix": ""
},
{
"first": "Kuldeep",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Thakkar",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Maleshkova",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.01569"
]
},
"num": null,
"urls": [],
"raw_text": "Endri Kacupaj, Joan Plepi, Kuldeep Singh, Harsh Thakkar, Jens Lehmann, and Maria Maleshkova. 2021. Conversational question answering over knowledge graphs with transformer and graph atten- tion networks. arXiv preprint arXiv:2104.01569.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia",
"authors": [
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Isele",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Jentzsch",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Kontokostas",
"suffix": ""
},
{
"first": "Pablo",
"middle": [
"N"
],
"last": "Mendes",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Hellmann",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Morsey",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Van Kleef",
"suffix": ""
},
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "6",
"issue": "",
"pages": "167--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, S\u00f6ren Auer, et al. 2015. Dbpedia-a large-scale, mul- tilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167-195.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pytorch-biggraph: A largescale graph embedding system",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Timothee",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Wehrstedt",
"suffix": ""
},
{
"first": "Abhijit",
"middle": [],
"last": "Bose",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Peysakhovich",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.12287"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, and Alex Peysakhovich. 2019. Pytorch-biggraph: A large- scale graph embedding system. arXiv preprint arXiv:1903.12287.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural symbolic machines: Learning semantic parsers on freebase with weak supervision",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"D"
],
"last": "Forbus",
"suffix": ""
},
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.00020"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Liang, Jonathan Berant, Quoc Le, Kenneth D For- bus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak su- pervision. arXiv preprint arXiv:1611.00020.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "2",
"pages": "389--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional seman- tics. Computational Linguistics, 39(2):389-446.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Context transformer with stacked pointer networks for conversational question answering over knowledge graphs",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Plepi",
"suffix": ""
},
{
"first": "Endri",
"middle": [],
"last": "Kacupaj",
"suffix": ""
},
{
"first": "Kuldeep",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Thakkar",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.07766"
]
},
"num": null,
"urls": [],
"raw_text": "Joan Plepi, Endri Kacupaj, Kuldeep Singh, Harsh Thakkar, and Jens Lehmann. 2021. Context trans- former with stacked pointer networks for conversa- tional question answering over knowledge graphs. arXiv preprint arXiv:2103.07766.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph",
"authors": [
{
"first": "Amrita",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Vardaan",
"middle": [],
"last": "Pahuja",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amrita Saha, Vardaan Pahuja, Mitesh M Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: To- wards learning to converse over linked question an- swer pairs with a knowledge graph. In Thirty- Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving multi-hop question answering over knowledge graphs using knowledge base embeddings",
"authors": [
{
"first": "Apoorv",
"middle": [],
"last": "Saxena",
"suffix": ""
},
{
"first": "Aditay",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embed- dings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual complex word identification: Convolutional neural networks with morphological and linguistic features",
"authors": [
{
"first": "Kim Cheng",
"middle": [],
"last": "Sheang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Student Research Workshop Associated with RANLP 2019",
"volume": "",
"issue": "",
"pages": "83--89",
"other_ids": {
"DOI": [
"10.26615/issn.2603-2821.2019_013"
]
},
"num": null,
"urls": [],
"raw_text": "Kim Cheng Sheang. 2019. Multilingual complex word identification: Convolutional neural networks with morphological and linguistic features. In Proceed- ings of the Student Research Workshop Associated with RANLP 2019, pages 83-89, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-task learning for conversational question answering over a large-scale knowledge base",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiubo",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Shen, Xiubo Geng, Tao Qin, Daya Guo, Duyu Tang, Nan Duan, Guodong Long, and Daxin Jiang. 2019. Multi-task learning for conversational ques- tion answering over a large-scale knowledge base. EMNLP-IJCNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text",
"authors": [
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tania",
"middle": [],
"last": "Bedrax-Weiss",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09537"
]
},
"num": null,
"urls": [],
"raw_text": "Haitian Sun, Tania Bedrax-Weiss, and William W Co- hen. 2019. Pullnet: Open domain question answer- ing with iterative retrieval on knowledge bases and text. arXiv preprint arXiv:1904.09537.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Open domain question answering using early fusion of knowledge bases and text",
"authors": [
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.00782"
]
},
"num": null,
"urls": [],
"raw_text": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William W Co- hen. 2018. Open domain question answering using early fusion of knowledge bases and text. arXiv preprint arXiv:1809.00782.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Wikidata: a free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Communications of the ACM",
"volume": "57",
"issue": "10",
"pages": "78--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Communi- cations of the ACM, 57(10):78-85.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Wikidata: A free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Commun. ACM",
"volume": "57",
"issue": "10",
"pages": "78--85",
"other_ids": {
"DOI": [
"10.1145/2629489"
]
},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: A free collaborative knowledgebase. Commun. ACM, 57(10):78-85.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The value of semantic parse labeling for knowledge base question answering",
"authors": [
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jina",
"middle": [],
"last": "Suh",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "201--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-206.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Variational reasoning for question answering with knowledge graph",
"authors": [
{
"first": "Yuyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hanjun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexan- der Smola, and Le Song. 2018. Variational reason- ing for question answering with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Overall Architecture and the different sources of knowledge used in KISP.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Figure 2: Example1 logical form KISP(SKI+AKW)BERT",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Example2 logical form KISP(SKI) vs MaSP",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Example3 logical form KISP(SKI+AKW) vs MaSP Figure 5: Example4 logical form KISP(SKI+AKW) vs MaSP Example5 logical form KISP(SKI+AKW)BERT vs MaSP",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "CSQA w/ Large Models. CARTON is CTN, KISP(KIL) is KISP , KISP(KIL+AKW) is KISP3.",
"content": "<table><tr><td/><td>Methods</td><td colspan=\"2\">MaSP CTN KISP KISP</td></tr><tr><td/><td>w\\BERT</td><td/><td>3</td></tr><tr><td/><td># train param</td><td>155M</td><td>157M 160M</td></tr><tr><td/><td>Overall</td><td colspan=\"2\">81.20 81.35 82.56 83.01</td></tr><tr><td/><td>Clarification</td><td colspan=\"2\">80.10 47.31 76.29 76.33</td></tr><tr><td/><td>Comparative</td><td colspan=\"2\">68.19 62.0 68.15 67.83</td></tr><tr><td>F1</td><td>Logical Quantitative</td><td colspan=\"2\">76.40 80.80 87.41 87.14 77.31 80.62 77.76 77.52</td></tr><tr><td/><td>Simple (Coref.)</td><td colspan=\"2\">78.33 87.09 78.78 79.66</td></tr><tr><td/><td>Simple (Direct)</td><td colspan=\"2\">86.57 85.92 87.03 87.68</td></tr><tr><td/><td>Simple (Ellipsis)</td><td colspan=\"2\">85.57 85.07 85.86 86.06</td></tr><tr><td/><td>Overall</td><td colspan=\"2\">44.73 61.28 46.22 46.22</td></tr><tr><td>Acc.</td><td>Compart.(Count) Quant.(Count)</td><td colspan=\"2\">28.71 38.31 27.65 27.32 50.07 57.04 50.82 50.92</td></tr><tr><td/><td colspan=\"3\">Verification(Bool) 65.00 77.82 72.29 72.72</td></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Comparison of KISP3=KISP(KIL+AKW)-</td></tr><tr><td>Small with different sized baseline models.</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"text": "Comparison of small KISP(KIL+AKW) and MaSP models. KISP =KISP(KIL+AKW)",
"content": "<table><tr><td colspan=\"2\">Met.\\Acc. Sket. Ent. Pred. Type Num</td></tr><tr><td>MaSP (S)</td><td>80.55 87.39 97.11 90.62 96.30</td></tr><tr><td colspan=\"2\">KISP3 (S) 82.32 95.30 98.83 90.73 100</td></tr><tr><td colspan=\"2\">KISP (S) 83.33 95.37 98.83 90.66 100</td></tr><tr><td colspan=\"2\">MASP (B) 83.63 91.90 97.67 93.11 100</td></tr><tr><td colspan=\"2\">KISP3 (B) 84.47 96.25 99.40 92.25 100</td></tr><tr><td colspan=\"2\">KISP (B) 85.92 95.85 99.25 92.25 100</td></tr></table>"
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>Fine grained metrics. KISP3=KISP(KIL),</td></tr><tr><td>KISP =KISP(KIL+AKW). (S)-Small, (B)-Bert.</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"text": "Running times of different models",
"content": "<table/>"
}
}
}
}