|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:35:15.095495Z" |
|
}, |
|
"title": "Identifying relevant common sense information in knowledge graphs", |
|
"authors": [ |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Aglionby", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Cambridge United Kingdom", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [], |
|
"last": "Teufel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Cambridge United Kingdom", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Knowledge graphs are often used to store common sense information that is useful for various tasks. However, the extraction of contextually-relevant knowledge is an unsolved problem, and current approaches are relatively simple. Here we introduce a triple selection method based on a ranking model and find that it improves question answering accuracy over existing methods. We additionally investigate methods to ensure that extracted triples form a connected graph. Graph connectivity is important for model interpretability, as paths are frequently used as explanations for the reasoning that connects question and answer.",
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Knowledge graphs are often used to store common sense information that is useful for various tasks. However, the extraction of contextually-relevant knowledge is an unsolved problem, and current approaches are relatively simple. Here we introduce a triple selection method based on a ranking model and find that it improves question answering accuracy over existing methods. We additionally investigate methods to ensure that extracted triples form a connected graph. Graph connectivity is important for model interpretability, as paths are frequently used as explanations for the reasoning that connects question and answer.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "For models to be able to reason about situations that arise in everyday life, they must have access to contextually appropriate common sense information. This information is commonly stored as a large set of facts from which the model must identify a relevant subset. One approach to structuring these facts is as a knowledge graph. Here, nodes represent high-level concepts, and typed edges represent different kinds of relationship between concepts. In practice, a subset of facts that are thought to be contextually relevant are extracted from the graph, as using all facts in each instance is unnecessary, noisy, and computationally expensive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Prior work has focused on different ways to encode these facts, including by inputting them into a graph neural network (GNN) or into a transformer (Feng et al., 2020; Yasunaga et al., 2021) . However, the question of how to identify useful information has been under-explored, particularly in work that uses GNN encoders. If contextually important information is not retrieved, performance could be dramatically reduced; this is a potential consequence of overly simplistic retrieval methods.",
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 167, |
|
"text": "(Feng et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 190, |
|
"text": "Yasunaga et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we explore methods to extract high-quality subgraphs containing contextually relevant Figure 1: The triple scoring process for a question answering task, and two methods that use the scores to extract relevant subgraphs for a question and candidate answer.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 120, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "information. 1 We approach this as a ranking task across triples in a knowledge graph, and propose two methods that use the scores to extract a subgraph. The first is a weighted pathfinding approach which extends prior work (Lin et al., 2019) , while the second builds a minimum spanning tree that includes the highest-ranked triples (figure 1). Both approaches ensure that all or most nodes in the subgraph are reachable from each other, which is important for two reasons. First, it means that the GNN can update node embeddings with information from most other nodes, which would not be possible if the graph were disconnected. Second, it allows paths of reasoning to be extracted from the subgraph, which are often used as explanations for model behaviour (Feng et al., 2020; Yasunaga et al., 2021) . There are also situations when specific concepts need to be included in order for a subgraph to be of high enough quality. For example, in question answering, a full explanation must include one or more concepts mentioned both in the question and in a candidate answer. This requires robustness in how concepts are identified, because the knowledge repository might express the concept in a slightly different lexical form from the question and/or answer. We therefore experiment with an embedding-based method to identify these concepts, and compare it with existing lexical methods.",
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 14, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 242, |
|
"text": "(Lin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 779, |
|
"text": "(Feng et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 802, |
|
"text": "Yasunaga et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions are as follows 2 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Apply a ranking model to identify common sense triples that are relevant to some context. \u2022 Identify and thoroughly investigate methods to ensure that the extracted contextuallyrelevant subgraphs are (almost) connected. \u2022 Compare existing lexical approaches to entity linking to a simple embedding-based method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many prior approaches to retrieving relevant common sense triples from a knowledge graph start by identifying relevant nodes. Simple lexical overlap between a concept and the context (e.g. question text) is often used for this (Kundu et al., 2019; . However, this entity linking approach is likely to retrieve only simple concepts, as the idiosyncratically phrased node names in knowledge graphs like ConceptNet (Speer et al., 2017) are unlikely to show up in text. Becker et al. (2021) investigate this in detail and propose a series of pre-processing steps that allow lexically-based linking without exact phrase matches. For the same reason, the heuristics used by Lin et al. (2019) for lexical matching are employed by a series of later works (Feng et al., 2020; Yasunaga et al., 2021; . Although lexical matching is a frequent approach with common sense knowledge graphs, in other domains embedding-based approaches are more popular (Gillick et al., 2019) . These approaches work by embedding the candidate text and finding the nearest neighbour in the space of entity embeddings. In question answering, Lin et al. (2019) split these concepts into those identified in the question and in the answer, and find additional concepts for the relevant subgraph by iteratively finding shortest paths between the two sets. This process continues until a maximum number of concepts is collected, or the path lengths exceed a threshold. The final subgraph used as input to models is constructed from this set with all valid edges added.",
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 247, |
|
"text": "(Kundu et al., 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 437, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 491, |
|
"text": "Becker et al. (2021)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 771, |
|
"text": "(Feng et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 772, |
|
"end": 794, |
|
"text": "Yasunaga et al., 2021;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 943, |
|
"end": 965, |
|
"text": "(Gillick et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Some approaches score nodes and triples that have been identified. Kundu et al. (2019) score multiple paths for each question and answer and choose the answer with the highest mean path score. Yasunaga et al. (2021) extract a subgraph following Lin et al. (2019), and additionally score each node for relevance to a question using RoBERTa (Liu et al., 2019) . Ranking is also common with prose facts, particularly when they are input into transformer-based models that have limits on input size (Wang et al., 2021) .",
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 86, |
|
"text": "Kundu et al. (2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 352, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 509, |
|
"text": "(Wang et al., 2021)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section we introduce our methods for extracting a contextually-relevant subgraph G for a question answering task. The graph should contain triples that are useful in distinguishing the correct answer from a set of distractors. For each instance, we represent the question text as q and the i-th candidate answer as a_i, and the set of concepts extracted from each as C_q and C_{a_i} respectively.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We cast the task of identifying relevant triples in the knowledge graph as a ranking problem, where the highest-ranked triples are those most relevant to q; a_i. We use an existing model that is trained to rank facts highly if they constitute part of an explanation for why a_i is the correct answer to q (Pan et al., 2021) . This was developed for the TextGraphs 2021 shared task on explanation regeneration for science questions (Thayaparan et al., 2021) and achieved the highest performance. Facts that are used in an explanation are likely to be useful when choosing between answers, making the model a natural choice for identifying relevant triples.",
|
"cite_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 324, |
|
"text": "(Pan et al., 2021)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 457, |
|
"text": "(Thayaparan et al., 2021)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Triple scoring", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The model consists of two parts: a fact retriever and a re-ranker. We follow the training procedure in Pan et al. (2021) and use one model based on RoBERTa-Large (Liu et al., 2019) for each stage. At inference time we use only the re-ranker to score each triple 3 in relation to q; a_i. To speed this up we pre-compute embeddings for each q; a_i and each triple.",
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 120, |
|
"text": "Pan et al. (2021)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 180, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Triple scoring", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The most straightforward way to construct G is to use the most relevant triples identified in \u00a73.1 and the grounded nodes C_q \u222a C_{a_i}. To do this, we select a subset of the top e ranked triples according to limits on the total number of edges and nodes that would be added to G. Iterating in rank order, we add the triple (s, r, o) to G only if adding s and o does not increase the total number of nodes to above n. If n < 2e then some of the top edges will be excluded; this limits the number of nodes in the graph while allowing highly-ranked edges to be present if they share nodes with other edges. We set n = 50 and e = 40 following initial experiments.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing G", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A shortcoming of this method is that the selected triples are not likely to connect with C_q or C_{a_i}. Indeed, there is no guarantee that the triples are connected to each other. This is problematic in cases where paths in the extracted subgraph are to be used in an explanation (Feng et al., 2020; Yasunaga et al., 2021) .",
|
"cite_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 299, |
|
"text": "(Feng et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 322, |
|
"text": "Yasunaga et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing G", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To rectify this we find the minimum spanning tree (MST) that spans all nodes in G, taking into account the edges added in the previous step. This is the Steiner tree problem, which is NP-hard; we apply an approximation algorithm (Wu et al., 1986) to find solutions in a reasonable amount of time. We experiment with two variants: one where edges are uniformly weighted, and another where the triple scores are used as weights.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 246, |
|
"text": "(Wu et al., 1986)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing G", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We further use the triple scores with the pathfinding method used in previous work (Lin et al., 2019) , transforming this into a weighted shortest path search. We iteratively find the shortest path between any pair of concepts in C_q and C_{a_i}, adding nodes on the paths to a set until a maximum size is reached. G is then formed from these nodes, as well as all valid edges between pairs from this set. We set the maximum number of nodes to be 50.",
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 101, |
|
"text": "(Lin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing G", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "It is important that C_q and C_{a_i} accurately reflect concepts mentioned in q and a_i, primarily to aid with explanations. A full explanation for a question must include at least one concept from C_q and from C_{a_i}; if these concepts are nonsensical then the explanation is invalid. Additionally, the pathfinding method for relevant subgraph extraction relies on the quality of this grounding.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identifying relevant concepts", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We use two methods for entity linking. The first is from prior work, and is based on lexical matching with heuristics (Lin et al., 2019) . These include lemmatising words if an exact match is not found, and a method to avoid selecting nodes with lexical overlap. Despite this, lexical methods cannot identify relevant concepts whose lexical form is unlikely to be seen in any context; this occurs often with more specific concepts. To account for this, our second method is based on embeddings from RoBERTa. We embed each concept, and for each q and a_i find the 10 most similar concepts via Euclidean distance. Embeddings are constructed in each case by mean-pooling across all tokens.",
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 137, |
|
"text": "(Lin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identifying relevant concepts", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We evaluate the quality of the extracted subgraphs by comparing accuracy on a question answering task when using them versus using a baseline. These graphs are used as input to two models, MHGRN (Feng et al., 2020) and QA-GNN (Yasunaga et al., 2021) , which are both designed for question answering with knowledge graphs. The baseline subgraph is extracted using the unweighted pathfinding method from prior work (Lin et al., 2019); for the fairest comparison we run five baselines which extract subgraphs of different sizes and report the best result from these (see appendix C for full details). We also compare to a baseline that uses only RoBERTa-large with no additional facts.",
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 215, |
|
"text": "(Feng et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "(Yasunaga et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We report accuracy on two datasets, OpenbookQA (Mihaylov et al., 2018) and CommonsenseQA (Talmor et al., 2019) . OpenbookQA is a collection of science questions, and so is in-domain with respect to the data used to train the fact scorer. CommonsenseQA targets more general common sense; performance here is a reflection of how transferable the fact scorer is to other domains. This dataset has no public test set labels, so we report results on the 'in house' test split defined by Lin et al. (2019) . Each model is run three times with different random seeds and the mean accuracy reported. Model hyperparameters are reported in appendix A.",
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 71, |
|
"text": "(Mihaylov et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 91, |
|
"end": 112, |
|
"text": "(Talmor et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 501, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Our base knowledge graph is ConceptNet (Speer et al., 2017) . Following previous work (Lin et al., 2019), we merge similar relations and add reverse relations to the extracted graph. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 59, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Our results on OpenbookQA are presented in table 1 and on CommonsenseQA in table 2. On CommonsenseQA, our best method significantly 4 outperforms the baseline method. This suggests that, in this case, the ranker is able to identify facts which are relevant to the question, and that the models are subsequently able to use them successfully.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The tuned baseline for OpenbookQA beats the proposed methods in all cases, although there is noticeable variation in accuracy between the baselines of different sizes (see table 6 ). However, in all but two cases the methods for ensuring graph connectivity outperform the method that only uses the highest-ranked triples.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 179, |
|
"text": "(see table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We observe that, in the majority of cases, using methods to increase connectivity within the extracted subgraph improves performance over simply including the top-rated facts. The minimum spanning tree (MST) approach has the advantage of including these facts, unlike the weighted path method, which may not. However, to ensure that the graph is connected, the MST approach may have to include nodes and edges that are less relevant to the context. One might expect a weighted approach to counterbalance this; however, it also results in a larger subgraph being constructed, which may be detrimental (see appendix B). Indeed, with lexical grounding the weighted approach adds an average of 37 nodes and 83 edges to the extracted subgraph, compared with 26 nodes and 71 edges in the unweighted case. The weighted pathfinding approach has the advantage of avoiding edges which are not relevant to the query. Additionally, the subgraph is extracted in a way that is closer to C_q and C_{a_i} than the MST approach, which considers these nodes only after selecting the top-ranked triples. As a result, the question and answer nodes are connected in a larger variety of ways, which may help increase performance.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For OpenbookQA, the increase in score between lexical and embedding-based entity linking with an unweighted MST suggests that the concepts identified by the latter method are particularly useful. The same magnitude of increase is not seen in CommonsenseQA. One possible reason for this is that CommonsenseQA was constructed directly using ConceptNet, which may increase the relevance of concepts obtained with lexical methods.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As with lexical grounding, the weighted MST with embedding grounding adds more nodes and edges on average (153 nodes, 217 edges) than the unweighted one (112 nodes, 172 edges). In both cases, the resulting subgraph is substantially larger than the equivalent ones built from lexically-linked entities. This is likely due to the kinds of nodes identified by entity linking: we observe that concepts identified by the embedding-based method are more specific, and so are less connected within the overall graph. Conversely, concepts that are identified lexically are likely to be simpler and more general, and so better connected within the graph, meaning fewer additional nodes and edges are required to build the MST.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We present a method for extracting relevant information from a common sense knowledge graph, casting it as a ranking problem. We show that scores obtained from a ranking model can be used to select triples containing useful information for a question answering task, improving performance over a commonly-used approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As it is undesirable for extracted subgraphs to have low connectivity, particularly when using paths within them for model interpretation, we use an algorithm for calculating minimum spanning trees over a supplied set of nodes and edges to ensure the graph is connected. We find that this helps performance; in particular, the models with the highest accuracy on CommonsenseQA use a weighted version of this. We additionally find that an entity linking approach based on embeddings rather than lexical matching improves performance in some cases. We distribute the contextually-relevant subgraphs to facilitate future work; these can be dropped into existing models with no further processing required.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Future work might investigate the influence of the fact ranker, as our results suggest that it can transfer successfully from the science domain to the general common sense domain. Further training of the ranker using higher-quality negative samples from e-QASC (Jhamtani and Clark, 2020) may yield better performance, as noted by Pan et al. (2021) .",
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 337, |
|
"text": "Pan et al. (2021)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We use the same hyperparameters for MHGRN and QA-GNN as used in the papers which respectively introduced them (Feng et al., 2020; Yasunaga et al., 2021) . We optimise both models using RAdam and a learning rate of 1e\u22123 for the text encoder and 1e\u22125 for the graph encoder. A maximum of 128 tokens are input to the text encoder, which is initialised as RoBERTa-large. An L2 weight decay of 0.01 is used.",
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 129, |
|
"text": "(Feng et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 152, |
|
"text": "Yasunaga et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hyperparameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For MHGRN, batch size is 32 and the text encoder is frozen for the first 3 epochs. A 1-layer 100-dimensional GNN is used with 3-hop message passing at each layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hyperparameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For QA-GNN, batch size is 128 and the text encoder is frozen for the first 4 epochs. A 5-layer 200-dimensional GNN is used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hyperparameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all cases, the GNN is initialised with node embeddings derived from BERT, which are made available by Feng et al. (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 123, |
|
"text": "Feng et al. (2020)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hyperparameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each type of extracted subgraph, we report the mean and standard deviation of the number of edges in table 3 and the number of nodes in table 4. We report results for the baselines in table 5. Table 7: Accuracy on CommonsenseQA when using the baseline subgraph extraction method with five different target edge counts.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 200, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Extracted subgraph size", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Subgraph size is a confounding factor when comparing performance between our extraction methods and the baseline (Lin et al., 2019) . To control for this, we extract baseline subgraphs of five different sizes by expanding them until they reach a certain number of edges. In tables 1 and 2 we report only the highest-scoring baseline; full baseline results are presented in tables 6 and 7.",
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 131, |
|
"text": "(Lin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Baseline models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We call these \"relevant subgraphs\" or \"extracted subgraphs\", noting that others use \"schema graphs\"(Lin et al., 2019).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We make our code and data available at https://github.com/GuyAglionby/kg-common-sense-extraction.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We linearize triples using the templates from https://github.com/commonsense/conceptnet5/wiki/Relations.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the Almost Stochastic Dominance test(Dror et al., 2019) and only claim a significant difference if \u03f5 \u2264 0.05.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "* denotes significantly better than baseline subgraph at p < 0.001.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "COCO-EX: A tool for linking concepts from texts to ConceptNet", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Becker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Korfhage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "119--126", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.eacl-demos.15" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Becker, Katharina Korfhage, and Anette Frank. 2021. COCO-EX: A tool for linking concepts from texts to ConceptNet. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: System Demonstra- tions, pages 119-126, Online. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Deep dominance -how to properly compare deep neural models", |
|
"authors": [ |
|
{ |
|
"first": "Rotem", |
|
"middle": [], |
|
"last": "Dror", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Segev", |
|
"middle": [], |
|
"last": "Shlomov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2773--2785", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1266" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rotem Dror, Segev Shlomov, and Roi Reichart. 2019. Deep dominance -how to properly compare deep neural models. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2773-2785, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Scalable multihop relational reasoning for knowledge-aware question answering", |
|
"authors": [ |
|
{ |
|
"first": "Yanlin", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyue", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill Yuchen", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1295--1309", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.99" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi- hop relational reasoning for knowledge-aware ques- tion answering. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1295-1309, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning dense representations for entity retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gillick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sayali", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Larry", |
|
"middle": [], |
|
"last": "Lansing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Presta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Ie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Garcia-Olano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "528--537", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K19-1049" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Gillick, Sayali Kulkarni, Larry Lansing, Alessan- dro Presta, Jason Baldridge, Eugene Ie, and Diego Garcia-Olano. 2019. Learning dense representations for entity retrieval. In Proceedings of the 23rd Con- ference on Computational Natural Language Learn- ing (CoNLL), pages 528-537, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering", |
|
"authors": [ |
|
{ |
|
"first": "Harsh", |
|
"middle": [], |
|
"last": "Jhamtani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "137--150", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.10" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harsh Jhamtani and Peter Clark. 2020. Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 137-150, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "What's missing: A knowledge gap guided approach for multi-hop question answering", |
|
"authors": [ |
|
{ |
|
"first": "Tushar", |
|
"middle": [], |
|
"last": "Khot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2814--2828", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1281" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019. What's missing: A knowledge gap guided approach for multi-hop question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2814-2828, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Exploiting explicit paths for multihop reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Souvik", |
|
"middle": [], |
|
"last": "Kundu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tushar", |
|
"middle": [], |
|
"last": "Khot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2737--2747", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1263" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Souvik Kundu, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019. Exploiting explicit paths for multi- hop reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2737-2747, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "KagNet: Knowledge-aware graph networks for commonsense reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Bill Yuchen", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyue", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2829--2839", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1282" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph net- works for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2829-2839, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "On the Variance of the Adaptive Learning Rate and Beyond", |
|
"authors": [ |
|
{ |
|
"first": "Liyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haoming", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.03265" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2021. On the Variance of the Adaptive Learning Rate and Beyond. arXiv:1908.03265 [cs, stat].", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Can a suit of armor conduct electricity? a new dataset for open book question answering", |
|
"authors": [ |
|
{ |
|
"first": "Todor", |
|
"middle": [], |
|
"last": "Mihaylov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tushar", |
|
"middle": [], |
|
"last": "Khot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2381--2391", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1260" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "DeepBlueAI at TextGraphs 2021 shared task: Treating multi-hop inference explanation regeneration as a ranking problem", |
|
"authors": [ |
|
{ |
|
"first": "Chunguang", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bingyan", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhipeng", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "166--170", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.textgraphs-1.18" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunguang Pan, Bingyan Song, and Zhipeng Luo. 2021. DeepBlueAI at TextGraphs 2021 shared task: Treat- ing multi-hop inference explanation regeneration as a ranking problem. In Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Lan- guage Processing (TextGraphs-15), pages 166-170, 5", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Robyn", |
|
"middle": [], |
|
"last": "Speer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Chin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Havasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Thirty-First AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge. In Thirty-First AAAI Confer- ence on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Talmor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Herzig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Lourie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4149--4158", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1421" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "TextGraphs 2021 shared task on multi-hop inference for explanation regeneration", |
|
"authors": [ |
|
{ |
|
"first": "Mokanarangan", |
|
"middle": [], |
|
"last": "Thayaparan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Valentino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Jansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "156--165", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.textgraphs-1.17" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mokanarangan Thayaparan, Marco Valentino, Peter Jansen, and Dmitry Ustalov. 2021. TextGraphs 2021 shared task on multi-hop inference for explanation regeneration. In Proceedings of the Fifteenth Work- shop on Graph-Based Methods for Natural Language Processing (TextGraphs-15), pages 156-165, Mexico City, Mexico. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Retrieval enhanced model for commonsense generation", |
|
"authors": [ |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenguang", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linjun", |
|
"middle": [], |
|
"last": "Shou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichong", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3056--3062", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.findings-acl.269" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Han Wang, Yang Liu, Chenguang Zhu, Linjun Shou, Ming Gong, Yichong Xu, and Michael Zeng. 2021. Retrieval enhanced model for commonsense gener- ation. In Findings of the Association for Computa- tional Linguistics: ACL-IJCNLP 2021, pages 3056- 3062, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Connecting the dots: A knowledgeable path generator for commonsense question answering", |
|
"authors": [ |
|
{ |
|
"first": "Peifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ilievski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Szekely", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4129--4140", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.369" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, and Xiang Ren. 2020. Connecting the dots: A knowledgeable path generator for commonsense question answering. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4129-4140, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A faster approximation algorithm for the steiner problem in graphs", |
|
"authors": [ |
|
{ |
|
"first": "Ying Fung", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Widmayer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chak Kuen", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Acta Informatica", |
|
"volume": "23", |
|
"issue": "2", |
|
"pages": "223--229", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/BF00289500" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying Fung Wu, Peter Widmayer, and Chak Kuen Wong. 1986. A faster approximation algorithm for the steiner problem in graphs. Acta Informatica, 23(2):223-229.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "QA-GNN: Reasoning with language models and knowledge graphs for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Michihiro", |
|
"middle": [], |
|
"last": "Yasunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyu", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bosselut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jure", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "535--546", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.45" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 535-546, Online. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"text": "Accuracy on OpenbookQA with different subgraph extraction methods.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Accuracy on CommonsenseQA with different subgraph extraction methods. 5", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Average number of edges in extracted subgraphs for OpenbookQA and CommonsenseQA.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Grounding</td><td>Subgraph type</td><td>OBQA</td><td>CSQA</td></tr><tr><td>Lexical</td><td>Only top rated MST Weighted MST Weighted path</td><td>49\u00b12 78\u00b122 89\u00b123 53\u00b15</td><td>50 77\u00b121 89\u00b123 54\u00b14</td></tr><tr><td>Embedding</td><td colspan=\"2\">MST Weighted MST 207\u00b153 167\u00b141 Weighted path 59\u00b13</td><td>162\u00b135 206\u00b145 58\u00b12</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "Average number of nodes in extracted subgraphs for OpenbookQA and CommonsenseQA.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Nodes/edges Model</td><td>OBQA</td><td>CSQA</td></tr><tr><td>Nodes</td><td>MHGRN QA-GNN</td><td>50\u00b110 63\u00b112</td><td>36\u00b17 63\u00b112</td></tr><tr><td>Edges</td><td colspan=\"2\">MHGRN QA-GNN 190\u00b133 128\u00b123</td><td>64\u00b113 188\u00b136</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Average number of nodes and edges in baseline subgraphs for OpenbookQA and CommonsenseQA.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>6</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"text": "Accuracy on OpenbookQA when using the baseline subgraph extraction method with five different target edge counts.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">Target edge count MHGRN QA-GNN</td></tr><tr><td>50 100 150 200 250</td><td>69.48 68.60 69.11 68.95 69.46</td><td>70.08 69.83 70.32 69.54 69.33</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |