|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:21:57.384282Z" |
|
}, |
|
"title": "On Masked Language Models for Contextual Link Prediction", |
|
"authors": [ |
|
{ |
|
"first": "Angus", |
|
"middle": [], |
|
"last": "Brayne", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Maciej", |
|
"middle": [], |
|
"last": "Wiatrak", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dane", |
|
"middle": [], |
|
"last": "Corneil Benevolentai", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In the real world, many relational facts require context; for instance, a politician holds a given elected position only for a particular timespan. This context (the timespan) is typically ignored in knowledge graph link prediction tasks, or is leveraged by models designed specifically to make use of it (i.e. n-ary link prediction models). Here, we show that the task of n-ary link prediction is easily performed using language models, applied with a basic method for constructing cloze-style query sentences. We introduce a pre-training methodology based around an auxiliary entity-linked corpus that outperforms other popular pre-trained models like BERT, even with a smaller model. This methodology also enables n-ary link prediction without access to any n-ary training set, which can be invaluable in circumstances where expensive and time-consuming curation of n-ary knowledge graphs is not feasible. We achieve state-ofthe-art performance on the primary n-ary link prediction dataset WD50K and on WikiPeople facts that include literals-typically ignored by knowledge graph embedding methods.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In the real world, many relational facts require context; for instance, a politician holds a given elected position only for a particular timespan. This context (the timespan) is typically ignored in knowledge graph link prediction tasks, or is leveraged by models designed specifically to make use of it (i.e. n-ary link prediction models). Here, we show that the task of n-ary link prediction is easily performed using language models, applied with a basic method for constructing cloze-style query sentences. We introduce a pre-training methodology based around an auxiliary entity-linked corpus that outperforms other popular pre-trained models like BERT, even with a smaller model. This methodology also enables n-ary link prediction without access to any n-ary training set, which can be invaluable in circumstances where expensive and time-consuming curation of n-ary knowledge graphs is not feasible. We achieve state-ofthe-art performance on the primary n-ary link prediction dataset WD50K and on WikiPeople facts that include literals-typically ignored by knowledge graph embedding methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Large-scale knowledge graphs (KGs) have gained prominence over the past several decades as a means for representing complex structured data at scale, leading to the development of machine learning models designed to predict new or unknown information from a KG (Ji et al., 2021) . A subclass of such models deals with link prediction, i.e. inferring new facts from a given KG consisting of (subject, relation, object) triples. For instance, a link prediction model might reason from a KG containing the triple (USA, ElectedPresident, JFK) to infer that the triple (JFK, BornInCountry, USA) also likely exists (i.e. JFK was born in the country USA).", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 278, |
|
"text": "(Ji et al., 2021)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 538, |
|
"text": "(USA, ElectedPresident, JFK)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The triple format is often too restrictive to represent a query effectively. For instance, the query is augmented with an auxiliary link for qualifer information (InYear, 1960) . Each entity or relationship is represented by a unique identifier. Qualifiers require the use of specialised encoder architectures; literal qualifiers like 1960 typically cannot be used at all. (b) We instead represent the query in a templated language model, where the qualifier detail can be directly appended.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 176, |
|
"text": "(InYear, 1960)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Who was elected President of the United States in 1960? permits multiple correct answers when simplified to the triple format (USA, ElectedPresident, [MASK] ), in the absence of the context 1960 (also referred to as a qualifier (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) ). Recently, several KG completion models have been developed aimed specifically at link prediction in the presence of qualifiers, collectively referred to as hyper-relational or n-ary link prediction models (Wen et al., 2016; Zhang et al., 2018; Guan et al., 2019; Liu et al., 2020; Rosso et al., 2020; Galkin et al., 2020; Yu and Yang, 2021; Wang et al., 2021b) . Usage of qualifiers becomes particularly difficult when they include literals, i.e. values that cannot be efficiently represented as discrete graph entities. Examples of literals include years (like 1960) , times, or numerals. Existing KG completion algorithms typically remove literals (Rosso et al., 2020; Galkin et al., 2020) or use specialised techniques to leverage them (Kristiadi et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 156, |
|
"text": "[MASK]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 258, |
|
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 485, |
|
"text": "(Wen et al., 2016;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 505, |
|
"text": "Zhang et al., 2018;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 524, |
|
"text": "Guan et al., 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 542, |
|
"text": "Liu et al., 2020;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 562, |
|
"text": "Rosso et al., 2020;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 583, |
|
"text": "Galkin et al., 2020;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 584, |
|
"end": 602, |
|
"text": "Yu and Yang, 2021;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 622, |
|
"text": "Wang et al., 2021b)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 829, |
|
"text": "(like 1960)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 932, |
|
"text": "(Rosso et al., 2020;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 933, |
|
"end": 953, |
|
"text": "Galkin et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1001, |
|
"end": 1025, |
|
"text": "(Kristiadi et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The need for new models to leverage qualifiers and literals reveals some fundamental weaknesses in discrete, triple-based knowledge graph represen-tations. Unlike graphs, written languages clearly permit the use of qualifiers and literals to represent facts and queries. Pre-trained language models like BERT (Devlin et al., 2019) have already shown competitive performance compared to existing KG link prediction approaches on triple-based KGs (Clou\u00e2tre et al., 2021; Yao et al., 2019) . As such, it is natural to ask whether Language Models (LMs) present a better alternative for inferring facts with qualifiers and literals compared to n-ary KG inference models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 330, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 468, |
|
"text": "(Clou\u00e2tre et al., 2021;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 486, |
|
"text": "Yao et al., 2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Apart from their ability to represent qualifiers and literals, using LMs with novel pre-training methodologies on vast corpora also presents opportunities to enable n-ary link prediction without access to any n-ary training set. The need to construct large, partially complete n-ary knowledge graphs in new domains is an expensive and timeconsuming requirement of link prediction (Nicholson and Greene, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 380, |
|
"end": 408, |
|
"text": "(Nicholson and Greene, 2020)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here, we present Hyper Relational Link Prediction using an auxiliary Entity Linked Corpus (Hyper-ELC), the first fully natural-language-based approach applied to KG link prediction benchmarks containing qualifiers and literals. We make use of model pre-training to leverage the large corpora directly available to language models, applying a simple entity-linking approach to prime the model for later inference on named KG entities and to enable link prediction without access to any nary training set. To our knowledge, this is the first approach to link prediction without KG supervision. We also use fine-tuning to specifically focus Hyper-ELC on the types of queries represented in the training set. By using KG link prediction datasets, we can directly compare language models to KG models specifically designed to take advantage of additional context in form of qualifiers and literals. Our results show competitive performance compared to these link prediction models, suggesting that language models provide a performant and practical alternative to KG models for link prediction beyond triple-based datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several models have been developed over the past decade to learn from and infer on n-ary relationships. This has been driven by the recognition that knowledge bases like Freebase (Bollacker et al., 2008 ) contain a sizeable number of relationships involving more than two named entities. Wen et al. (2016) generalized the triple-based translational embedding model TransH (Wang et al., 2014) to hyper-relational facts. Zhang et al. (2018) extended this approach using a binary loss learned from the probability that any two entities participate in the same n-ary fact.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 202, |
|
"text": "(Bollacker et al., 2008", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 305, |
|
"text": "Wen et al. (2016)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 391, |
|
"text": "(Wang et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 438, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "N-Ary Link Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Unlike these earlier embedding-based models, NaLP (Guan et al., 2019) addressed the n-ary link prediction problem with a neural network, representing n-ary facts as permutation-invariant sets of role-value pairs. Liu et al. (2020) developed the first tensor decomposition-based approach to the problem, adapting earlier tensor decomposition methods applied to link prediction in triple-based KGs. HINGE (Rosso et al., 2020 ) applied a convolutional network to the underlying triples and qualifiers in an n-ary fact.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 230, |
|
"text": "Liu et al. (2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 422, |
|
"text": "(Rosso et al., 2020", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "N-Ary Link Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "More recently, several specialised n-ary prediction models have been developed by combining knowledge graph embeddings with attentionbased transformer architectures (Vaswani et al., 2017) ; namely StarE (Galkin et al., 2020) , Hy-Transformer (Yu and Yang, 2021) and GRAN (Wang et al., 2021b) . In the StarE model, embeddings are fed through a graph neural network before entering the transformer layer. Hy-Transformer and GRAN instead feed the processed embeddings into the transformer directly. Hy-Transformer also adds a qualifier prediction-based auxiliary task, while GRAN modifies the transformer attention model to represent the link structure of the n-ary input. Together, these three transformer-based models have achieved state-of-the-art performance on the n-ary link prediction task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 187, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 224, |
|
"text": "(Galkin et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 261, |
|
"text": "(Yu and Yang, 2021)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 291, |
|
"text": "(Wang et al., 2021b)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "N-Ary Link Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Hyper-ELC differs from other n-ary link prediction models in that it represents facts in natural language, eliminating the need for specialised encoders or graph-based methods and introducing the ability to pre-train on massive natural language corpora. By representing facts as token sequences, earlier modelling constraints can be avoided; e.g. multiple arities can be supported with the same model (unlike Liu et al. (2020)), and structural information can be retained in token positional encodings, unlike Wen et al. (2016) and Guan et al. (2019) . The pre-training introduced here also enables prediction on the downstream task without access to any n-ary training set. Nonetheless, like the most recent approaches, we also use a transformer architecture. In particular, Hyper-ELC is most similar to Hy-Transformer and GRAN, with named graph entities exchanged for word tokens with positional embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 510, |
|
"end": 527, |
|
"text": "Wen et al. (2016)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 550, |
|
"text": "Guan et al. (2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "N-Ary Link Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Parallel to research on incorporating qualifiers, several groups have investigated leveraging numerical attributes of entities in triple-based KG completion tasks (Garc\u00eda-Dur\u00e1n and Niepert, 2017; Tay et al., 2017; Wu and Wang, 2018; Kristiadi et al., 2019) . In these models, the numerical literals are general attributes associated with one of the entities involved in the triple (e.g. the latitude of a city entity); conversely, in the tasks we consider here, literals directly participate in n-ary facts. Nonetheless, we note that our approach could be straightforwardly applied to numerical attributes as well, by inserting them into the textual templates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 195, |
|
"text": "(Garc\u00eda-Dur\u00e1n and Niepert, 2017;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 213, |
|
"text": "Tay et al., 2017;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 232, |
|
"text": "Wu and Wang, 2018;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 256, |
|
"text": "Kristiadi et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literals in Link Prediction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Hyper-ELC also differs from previous models by using a standard word-piece tokenisation approach to efficiently parse the literal data. While some literals, like 1962, are single tokens in the BERT base uncased vocabulary, less commonly discussed dates are split into multiple tokens -for example 1706 becomes 170 and ##6. Additionally, pre-training gives the model additional context to learn the relationships between dates -e.g. that similar people and events are discussed in sentences containing 1961 and sentences containing 1962, revealing a similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literals in Link Prediction", |
|
"sec_num": "2.2" |
|
}, |
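
{

"text": "This word-piece behaviour can be checked directly with the Hugging Face tokenizer (a small sketch; the exact splits are determined by the bert-base-uncased vocabulary):\n\nfrom transformers import BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nprint(tokenizer.tokenize('1962'))  # ['1962'], a single-token year\nprint(tokenizer.tokenize('1706'))  # ['170', '##6'], a multi-token year",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Literals in Link Prediction",

"sec_num": "2.2"

},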
|
{ |
|
"text": "Notably, literal attributes composed of textual descriptions have also been investigated in KG completion, e.g. Xie et al. (2016) ; Xu et al. (2016) . While we focus on numerical literals here, our natural language-based approach could also be extended to general textual attributes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 129, |
|
"text": "Xie et al. (2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 148, |
|
"text": "Xu et al. (2016)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literals in Link Prediction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The success of large pre-trained language models has motivated multiple investigations into whether they can be used as knowledge bases. Petroni et al. (2019) proposed a benchmark for evaluating factual knowledge present in LMs with cloze-style queries. Their work has been further extended to probing areas including semantic (Ettinger, 2020; Wallace et al., 2019) , commonsense (Tamborrino et al., 2020; Forbes et al., 2019; Roberts et al., 2020) , and linguistic (Lin et al., 2019; Tenney et al., 2019) knowledge. Furthermore, in order to improve the performance of LMs in extracting factual knowledge, Jiang et al. (2020) and Shin et al. (2020) proposed methods for automatic discovery and cre-ation of cloze-style queries. This body of work focuses mainly on predicting tokens for filling in blanks, rather than ranking unique entity IDs, as we do here, and therefore requires an entity disambiguation post-processing step. It also focuses on comparison to open-domain question answering or relation extraction approaches rather than link prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 158, |
|
"text": "Petroni et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 343, |
|
"text": "(Ettinger, 2020;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 365, |
|
"text": "Wallace et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 405, |
|
"text": "(Tamborrino et al., 2020;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 426, |
|
"text": "Forbes et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 448, |
|
"text": "Roberts et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 484, |
|
"text": "(Lin et al., 2019;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 505, |
|
"text": "Tenney et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 606, |
|
"end": 625, |
|
"text": "Jiang et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 648, |
|
"text": "Shin et al. (2020)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Models for Link Prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Several groups have proposed using LMs for triple-based link prediction. Yao et al. (2019) proposed KG-BERT, which encodes a triple as a sequence, where the entities and relation are separated by a [SEP] token and represented by their textual descriptions. They train to classify whether an individual triplet is correct or not, scoring every (h, r, ?) and (?, r, t) triplet to be ranked. This approach can involve millions of inference steps for a single completion. This work was extended for improved efficiency and performance in Kim et al. (2020) ; Wang et al. (2021a) . This methodology, including entity separation and precise entity descriptions, diverges from plain masked text and is therefore incompatible with our simple pre-training approach that enables n-ary link prediction without access to a training knowledge graph.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 90, |
|
"text": "Yao et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 551, |
|
"text": "Kim et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 573, |
|
"text": "Wang et al. (2021a)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Models for Link Prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "An alternative approach to triple-based link prediction is MLMLM (Clou\u00e2tre et al., 2021) , which also improves on KG-BERT's inference complexity with respect to the number of entities in the KG. They instead use the MLM setup to generate the logits for the tokens required to rebuild all of the entities. These logits are used alongside mean likelihood sampling to rank all entities. The head entity prediction input includes the head entity mask, relation, tail entity and tail entity definition. The tail entity prediction input is analogous. Unlike KG-BERT and its extensions, this method shares the MLM setup with our approach, however they predict tokens rather than unique entity ids. The maximum number of tokens of all of the entities is predicted for each example -predicting the pad token if necessary. This has the benefit that they can predict previously unseen entities (as long as they have fewer than the maximum number of tokens). However, again, this work requires entity disambiguation to go from tokens to a unique entity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 88, |
|
"text": "(Clou\u00e2tre et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Models for Link Prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Finally, none of the language model approaches discussed above have been adapted to higher order link prediction with qualifiers and literals. Hyper-ELC additionally extends upon these approaches with a task-specific pre-training approach that en- Figure 2 : Overview of the training procedure. The names in brackets below the labels are purely informative; as in the typical link prediction setup, we rank the unique identifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 256, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Models for Link Prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "[Left] Entities of interest in the pre-training corpus are linked and replaced with mask tokens; the model is trained to predict the corresponding named entity of interest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Models for Link Prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "[Right] The finetuning task is the same, but performed on automatically generated sentences from the train set. Surface forms are used for the other entities in each fact.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Models for Link Prediction", |
|
"sec_num": "2.3" |
|
}, |
|
|
{ |
|
"text": "A hyper-relational (n-ary) graph, made up of hyperrelational facts, can be defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "G = (V, R, E),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where V is the set of vertices (entities), R is the set of relations, and E is a set (e 1 , . . . , e n ) of edges with e j \u2208 V \u00d7 R \u00d7 V \u00d7 P(R \u00d7 V) for 1 \u2264 j \u2264 n. Here, P denotes the power set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A hyper-relational fact e j \u2208 E is written as a tuple (s, r, o, Q), with s, o \u2208 V and r \u2208 R. Here, Q is the set of qualifier pairs (qr i , qv i ) with qualifier relations qr i \u2208 R and qualifier values qv i \u2208 V. An example of a fact in this representation would be (StephenHawking (s), AwardReceived (r), Edding-tonMedal (o), (PointInTime (qr 1 ), 1975 (qv 1 ))).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "3" |
|
}, |
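
{

"text": "To make this definition concrete, the following is a minimal Python sketch (illustrative only; the class and field names are ours, not part of any released codebase) of how a hyper-relational fact (s, r, o, Q) might be represented:\n\nfrom dataclasses import dataclass, field\nfrom typing import List, Tuple\n\n# A qualifier pair (qr_i, qv_i): a qualifier relation and its value.\nQualifier = Tuple[str, str]\n\n@dataclass\nclass HyperRelationalFact:\n    subject: str  # s in V\n    relation: str  # r in R\n    obj: str  # o in V\n    qualifiers: List[Qualifier] = field(default_factory=list)  # Q, a subset of R x V\n\n# The Stephen Hawking example from the text:\nfact = HyperRelationalFact(subject='StephenHawking', relation='AwardReceived', obj='EddingtonMedal', qualifiers=[('PointInTime', '1975')])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Definitions",

"sec_num": "3"

},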
|
{ |
|
"text": "Our approach consists of three stages:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. Pre-training to predict the unique identifier of a masked entity in the sentences of an auxiliary entity linked corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2. Finetuning on sentence-like natural language templates created from the training set of the n-ary link prediction dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3. Evaluation on the test set of the n-ary link prediction dataset using the same format of natural language templates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For a visual representation of the process, see Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 56, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our method may use any corpus that references the entities of interest and any entity linking methodology for recognising them within the corpus. As we use the entity linked corpus only in pre-training and not for evaluation, we do not require it to be gold standard. However, increased coverage and precision of the linking may result in better downstream performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-Training", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Each pre-training example is a tuple consisting of a unique entity ID and a masked sentence in which that entity occurs. In the sentence, the span of every occurrence of the entity of interest is replaced by a \"[MASK]\" token. A single unique entity is masked in each example while all other entities are left as plain text. For example, the label for the entity StephenHawking is Q17714 and a masked sentence would be: \"[MASK] (8 January 1942 -14 March 2018) was an English theoretical physicist, cosmologist, and author.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-Training", |
|
"sec_num": "4.1" |
|
}, |
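
{

"text": "A minimal sketch of how such a pre-training example could be constructed (illustrative code under the assumption that the character spans of the target entity are already known from entity linking; this is not the authors' released implementation):\n\ndef make_pretraining_example(sentence, spans, entity_id):\n    # spans: (start, end) character offsets of every occurrence of the\n    # target entity in the sentence; each span is replaced by '[MASK]'.\n    parts, prev = [], 0\n    for start, end in sorted(spans):\n        parts.append(sentence[prev:start])\n        parts.append('[MASK]')\n        prev = end\n    parts.append(sentence[prev:])\n    # The label is the unique entity ID, not the surface form.\n    return entity_id, ''.join(parts)\n\nlabel, masked = make_pretraining_example(\n    'Stephen Hawking was an English theoretical physicist, cosmologist, and author.',\n    [(0, 15)],\n    'Q17714',\n)\n# ('Q17714', '[MASK] was an English theoretical physicist, cosmologist, and author.')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pre-Training",

"sec_num": "4.1"

},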
|
{ |
|
"text": "In order to use our pre-trained language model for the n-ary link prediction task, we must format the query in natural language as a cloze-style sentence. This may be done in any way that represents the query, but linguistic alignment with the pre-training corpus may benefit performance (Jiang et al., 2020; Shin et al., 2020 : Statistics of the datasets used in the experiments. The \"Pre\" and \"Lit\" labels on the datasets indicate pre-training and literal datasets, respectively. \"M\" indicates million. Validation set statistics have been left out for brevity, but they follow a similar pattern to the test set statistics. In the original WikiPeople source data, 10.9% of statements have literals in the qualifiers. The source data also includes 12,363 (3.3%) statements with a literal in the tail position, which are removed from all datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 308, |
|
"text": "(Jiang et al., 2020;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 326, |
|
"text": "Shin et al., 2020", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finetuning and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "One simple approach is to space separate the entities, relationships and roles in the (s, r, o, Q) order ( Figure 2 ) described in Section 3. This requires that each of the entities have associated textual names, which is usually the case in knowledge graphs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 115, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finetuning and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
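
{

"text": "As a sketch, this space-separated template construction could look as follows (hypothetical helper; the ID-to-name dictionary below is a toy example, assumed to map entity and relation IDs to their English Wikidata names):\n\ndef build_cloze_query(names, s, r, o, qualifiers, mask_position='object'):\n    subject = '[MASK]' if mask_position == 'subject' else names[s]\n    obj = '[MASK]' if mask_position == 'object' else names[o]\n    parts = [subject, names[r], obj]\n    # Qualifier pairs are appended directly; literals such as years\n    # have no ID entry and are inserted as-is.\n    for qr, qv in qualifiers:\n        parts.extend([names[qr], names.get(qv, str(qv))])\n    return ' '.join(parts)\n\nnames = {'Q17714': 'stephen hawking', 'P166': 'award received',\n         'P585': 'point in time'}\n# The object is masked, so its ID is not needed here:\nprint(build_cloze_query(names, 'Q17714', 'P166', None, [('P585', '1975')]))\n# stephen hawking award received [MASK] point in time 1975",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Finetuning and Evaluation",

"sec_num": "4.2"

},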
|
{ |
|
"text": "Our models are all based on the Transformer architecture (Vaswani et al., 2017) , more specifically BERT (Devlin et al., 2019) . However, we found a smaller version of the BERT architecture to be more stable during pre-training, which enabled a higher learning rate and larger batch size (see Table 5 in the Appendix). We use the BERT base uncased word-piece tokenisation for all text-based models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 79, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 126, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 301, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We use a single linear layer as a decoder, followed by a softmax. For optimisation, we leverage a standard categorical cross-entropy loss. All of our models are trained with the Adam optimiser, and are regularised via dropout and gradient clipping. We follow the same setup during pre-training and finetuning. We believe that this alignment between pre-training and the downstream task is part of what makes this approach so powerful. Note that the pre-trained model can also be applied on the downstream task even without additional finetuning on a training graph (Section 6.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.3" |
|
}, |
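
{

"text": "A minimal PyTorch-style sketch of this setup (illustrative only; the layer sizes and hyperparameter values below are placeholders, not the exact configuration reported in Table 5):\n\nimport torch\nimport torch.nn as nn\n\nclass EntityPredictionHead(nn.Module):\n    # Maps the transformer encoding of the [MASK] token to one score per\n    # unique entity ID; the softmax is folded into the cross-entropy loss.\n    def __init__(self, hidden_size, num_entities, dropout=0.1):\n        super().__init__()\n        self.dropout = nn.Dropout(dropout)\n        self.decoder = nn.Linear(hidden_size, num_entities)\n        self.loss_fn = nn.CrossEntropyLoss()\n\n    def forward(self, mask_encoding, labels=None):\n        logits = self.decoder(self.dropout(mask_encoding))  # (batch, num_entities)\n        if labels is None:\n            return logits\n        return logits, self.loss_fn(logits, labels)\n\nhead = EntityPredictionHead(hidden_size=256, num_entities=50000)\noptimizer = torch.optim.Adam(head.parameters(), lr=1e-4)\n# Gradient clipping, applied after loss.backward() in the training loop:\ntorch.nn.utils.clip_grad_norm_(head.parameters(), max_norm=1.0)\n\nThe same head and loss are used during pre-training and finetuning, matching the alignment between the two stages described above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "4.3"

},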
|
{ |
|
"text": "For finetuning and evaluation we use two n-ary link prediction datasets: WikiPeople 1 (Guan et al., 2019) and WD50K 2 (Galkin et al., 2020) . Both 1 Downloaded from:", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 139, |
|
"text": "(Galkin et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WikiPeople and WD50K", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "https://github.com/ gsp2014/NaLP/tree/master/data/WikiPeople 2 Downloaded from: https://zenodo.org/ record/4036498", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WikiPeople and WD50K", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "WikiPeople and WD50K are extracted from Wikidata and contain a mixture of binary and higherorder facts. WikiPeople is a commonly used benchmark containing facts related to entities representing humans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WikiPeople and WD50K", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "WD50K was created by Galkin et al. (2020) from the 2019/08/01 Wikidata dump 3 . It was developed with the goal of containing a higher proportion of non-literal higher-order relationships. It is based on the entities from FB15K-237 (Bordes et al., 2013) that have a direct mapping in Wikidata.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WikiPeople and WD50K", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to transform the facts in these datasets into natural language queries, we use the English Wikidata names for each of the entity and relationship/role IDs 4 . We then create templates in the simple manner described in Section 4.2. We find that while the queries are not particularly natural in their structure and vocabulary, their meaning remains largely the same (an example template is shown in Figure 2 , right).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 415, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "WikiPeople and WD50K", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Galkin et al. (2020) noted that most of the qualifier values in WikiPeople are literals, in this case datetime instances. Literals appear in approximately 13% of the statements in the WikiPeople dataset, but they are typically ignored in knowledge graph embedding approaches (Rosso et al., 2020) . If the literals are ignored, only 2.6% of statements in WikiPeople are higher-order. None of the previous approaches to this dataset encode literals.", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 295, |
|
"text": "(Rosso et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Named Entity Qualifiers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Note that, for evaluation purposes, alternative correct entities are filtered from the ranking at evaluation time when assessing a given potential answer (Bordes et al., 2013) . This has implications for treating literals. Consider the case where literals are ignored: when evaluating whether the model correctly predicted EddingtonMedal as a completion for the fact (StephenHawking, AwardReceived, [MASK] , (PointInTime, 1975) ), the entity Cop-leyMedal would be filtered out of the ranking if the fact (StephenHawking, AwardReceived, Cop-leyMedal, (PointInTime, 2006) ) also exists in the dataset. This occurs because the PointInTime qualifier is ignored, so that the subject and relation of the facts are identical (and both medals are equally valid completions). When literal-containing qualifiers are not ignored, the facts are distinct, with only one correct answer for each.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 175, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 405, |
|
"text": "[MASK]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 427, |
|
"text": "(PointInTime, 1975)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 504, |
|
"end": 569, |
|
"text": "(StephenHawking, AwardReceived, Cop-leyMedal, (PointInTime, 2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Named Entity Qualifiers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The primary WikiPeople dataset used here was adapted by Rosso et al. (2020) from the original WikiPeople (Guan et al., 2019) . To investigate whether this literal data can be leveraged by our model, we generated a new dataset from a subset of WikiPeople that we call WikiPeople Literal. Unlike in Rosso et al. (2020) and Galkin et al. (2020) , where literal qualifier terms are ignored when filtering the rankings for evaluation, we include the literal terms during filtering in WikiPeople Literal. Additionally, we evaluate only on facts that include at least one literal. This focus enables us to probe the model's ability to interpret literal qualifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 75, |
|
"text": "Rosso et al. (2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 124, |
|
"text": "WikiPeople (Guan et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 316, |
|
"text": "Rosso et al. (2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 341, |
|
"text": "Galkin et al. (2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Named Entity Qualifiers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Following Rosso et al. (2020) , we drop all statements that contain literals in the main triple.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "Rosso et al. (2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Named Entity Qualifiers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For pre-training we create an entity linked corpus based on the 2019/08/01 English Wikipedia 5 dump used in BLINK (Ledell Wu, 2020) . We process the XML with Gensim 6 , which we adapt to leave article hyperlinks in the text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 131, |
|
"text": "Wu, 2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linked Corpus", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For simplicity, we use a regex to find occurrences of the entities of interest in the large hyperlinked Wikipedia corpus. For each article we extract the title entity and all of the hyperlinked entities, along with their surface forms in the text and their title name in the hyperlink. We find the wikidata IDs for each of these entities 7 and we retain those entities that are in our downstream n-ary dataset. We then split the article into sentences and run a case insensitive regex over each sentence to find the spans of these entities and link them to their Wikidata IDs, using the ID to surface form/title name dictionaries. Given this collection of entity linked sentences, we create the pre-training examples as described in Section 4.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linked Corpus", |
|
"sec_num": "5.3" |
|
}, |
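
{

"text": "A simplified sketch of this regex-based linking step (illustrative; the real pipeline also extracts hyperlinks and resolves article titles to Wikidata IDs, and the dictionary below is a toy example):\n\nimport re\n\ndef link_entities(sentence, surface_form_to_id):\n    # Case-insensitive search for each known surface form; returns\n    # (start, end, wikidata_id) spans found in the sentence.\n    spans = []\n    for surface, wikidata_id in surface_form_to_id.items():\n        pattern = re.compile(r'\\b' + re.escape(surface) + r'\\b', re.IGNORECASE)\n        for match in pattern.finditer(sentence):\n            spans.append((match.start(), match.end(), wikidata_id))\n    return sorted(spans)\n\nforms = {'Stephen Hawking': 'Q17714'}\nprint(link_entities('Stephen Hawking received the Eddington Medal in 1975.', forms))\n# [(0, 15, 'Q17714')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Entity Linked Corpus",

"sec_num": "5.3"

},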
|
{ |
|
"text": "Throughout this section we compare to the following external baselines developed for n-ary link prediction: (i) NaLP-Fix (Rosso et al., 2020) , (ii) HINGE (Rosso et al., 2020) , (iii) StarE (Galkin et al., 2020) , (iv) Hy-Transformer (Yu and Yang, 2021) , and (v) GRAN (Wang et al., 2021b) . NaLP-Fix is an improved version of the original NaLP model (Guan et al., 2019) . None of these methods make predictions over natural language and none of them encode literals.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 141, |
|
"text": "(Rosso et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 175, |
|
"text": "(Rosso et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 211, |
|
"text": "(Galkin et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 253, |
|
"text": "(Yu and Yang, 2021)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 289, |
|
"text": "(Wang et al., 2021b)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 370, |
|
"text": "(Guan et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The metrics that we use are based on predicting only the subject and object of the hyper-relational facts. We follow the filtered setting introduced by Bordes et al. (2013) as discussed in Section 5.2 to ensure that corrupted facts are not valid facts from the rest of the dataset. For each test example, we filter from the model's predicted ranking all of the entities that appear in the same position in otherwise identical examples in either the training, validation or test set (except the test entity of interest). We consider mean reciprocal rank (MRR) and hits at 1 and 10 (H@1 and H@10 respectively).", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 172, |
|
"text": "Bordes et al. (2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
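
{

"text": "As a sketch, the filtered metrics can be computed as follows (a standard computation of the filtered setting, not the authors' exact evaluation code): given the model's ranked entity list for one query, all other known-correct answers are removed before the rank of the test entity is read off.\n\ndef filtered_rank(ranked_entities, target, other_valid):\n    # Remove every other known-correct completion, then read off the\n    # 1-based rank of the test entity (Bordes et al., 2013).\n    filtered = [e for e in ranked_entities if e == target or e not in other_valid]\n    return filtered.index(target) + 1\n\ndef metrics(ranks):\n    n = len(ranks)\n    return {\n        'MRR': sum(1.0 / r for r in ranks) / n,\n        'H@1': sum(r <= 1 for r in ranks) / n,\n        'H@10': sum(r <= 10 for r in ranks) / n,\n    }\n\nranks = [\n    filtered_rank(['Q2', 'Q1', 'Q3'], 'Q1', {'Q2'}),  # rank 1 after filtering\n    filtered_rank(['Q5', 'Q4', 'Q1'], 'Q1', set()),   # rank 3\n]\nprint(metrics(ranks))  # {'MRR': 0.666..., 'H@1': 0.5, 'H@10': 1.0}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "6"

},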
|
{ |
|
"text": "In order to showcase the expressive power of natural language as a representation, we employ an experiment that involves making predictions with non-named entity qualifier terms (i.e. literals). We use an evaluation dataset (described in Section 5.2) that contains only the examples in the WikiPeople dataset that have at least one literal qualifier. Additionally, we consider these qualifiers when filtering the ranking at evaluation time, unlike the typical WikiPeople evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Literals", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "To the best of our knowledge, no existing works leverage literals in qualifiers, so no strong baselines exist. We therefore use two baselines that cannot leverage literals as comparison points. The first, Hyper-ELC [UNK] , is an ablated version of our model that replaces any literal entity with the [UNK] token. We also used the publicly-available on literal-containing qualifiers after adding them back into the dataset (note that StarE achieves stateof-the-art on the full dataset on Hits@10). On the WikiPeople Literal dataset, Hyper-ELC significantly outperformed both StarE and Hyper-ELC [UNK] (Table 2, first three columns). In particular, the performance boost over the [UNK] ablation illustrates that our model specifically makes use of the information represented in literal qualifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 220, |
|
"text": "[UNK]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 678, |
|
"end": 683, |
|
"text": "[UNK]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Literals", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Hyper-ELC also performed reasonably well on the standard WikiPeople dataset (Table 2, last three columns), outperforming NaLP-Fix, but with lower overall performance than the most recent baselines (StarE, Hy-Transformer and GRAN).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Literals", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "To investigate the differences between Hyper-ELC and the other state-of-the-art baselines on WikiPeople, we examined the MRR performance ratio of StarE compared to Hyper-ELC for the relationship-entity position (i.e. head or tail) pairs that occur more than 500 times in the evaluation set (see Appendix, Table 6 in the appendix). Notably, Hyper-ELC displayed the most pronounced performance deficit compared to StarE on inferring correct entities in one-to-many relationships with many possible answers. In Section 7, we discuss potential reasons for this deficit and possible future improvements.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 312, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Literals", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Next, we evaluated Hyper-ELC on the WD50K datasets (Table 3) , which do not contain any literal entities. WD50K (100) has been created by filtering WD50K to have 100% higher order relationships.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 60, |
|
"text": "(Table 3)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In order to understand the value of the pretraining and finetuning steps, we consider multiple ablation models:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Hyper-ELC (only P): a pre-trained version of Hyper-ELC without any exposure to the templated finetuning data (the train set).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Hyper-ELC (only F): a randomly initialised (i.e. only finetuned) version of Hyper-ELC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "BERT (only F): a BERT model (base uncased) with its own initialisation followed by a randomly initialised classification layer, finetuned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "On the full WD50K dataset, Hyper-ELC achieved an MRR of 0.354, nearly identical to the state-of-the-art Hy-Transformer with 0.356. While Hy-Transformer achieved the best performance on Hits@1, Hyper-ELC achieved state-of-the-art on Hits@10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "On the smaller, purely hyper-relational WD50K (100) dataset, Hyper-ELC performed comparably to StarE but was outperformed by Hy-Transformer (see discussion in Section 7).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Link Prediction with Named Entities Only", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Finally, we focus specifically on hyper-relational link prediction with the ablated version of Hyper-ELC exposed only to the pre-training data (Table 3 , last row, and Table 4 ). Hyper-ELC (only P) has some ability to perform inference, without any access to the training knowledge graph; it achieves an MRR of 0.087 and 0.207 on WD50K and WD50K (100) respectively, compared to 0.0003 and 0.0006 for the random model and 0.356 and 0.699 for the state-of-the-art Hy-Transformer. This approach could be very powerful in domains where expensive and time consuming curation of hyperrelational knowledge graphs is not feasible.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 152, |
|
"text": "(Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 176, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Link Prediction without a Training Graph", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "The significant performance difference between Hyper-ELC and Hyper-ELC (only P) can likely be partially attributed to the distributional shift in the language from pre-training to the templated format used in finetuning and evaluation on the \"Ba- Table 3 : Performance comparison on the WD50K datasets. We train and test on the dataset indicated following the approach used by the baselines. Model names \"only P\" and \"only F\" indicate that only pre-training or finetuning was performed respectively. Methods above the line use the n-ary training graph, while those below do not. Table 4 : With some minor adjustments to the wording of some of the most frequent relationships/roles, to move from the \"Basic\" to the \"Clean\" dataset, we can boost performance for the model that doesn't have access to graph based training data. Here, \"only P\" indicates only pre-training, without finetuning. sic\" dataset, where the templates are often stilted and ungrammatical. To test the hypothesis that improved templates could drive improved performance, we considered 37 of the roles/relationships that occur most frequently in the WD50K (100) training dataset and altered some to make the templates for the \"Clean\" dataset to be more similar to the natural language occurring in the Wikipedia pre-training corpus; for instance, we improved the grammar with stop words like \"the\". Table 7 in the appendix shows the 37 roles/relationships that we considered and the changes that we made. In Table 4 we can see a performance increase from 0.207 MRR to 0.232 for Hyper-ELC (only P) with these simple template changes. However, we saw only a minimal improvement when finetuning was introduced, from 0.642 MRR to 0.645, suggesting that the model adapts effectively to the templated linguistic style with finetuning.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 254, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 586, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1368, |
|
"end": 1375, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1477, |
|
"end": 1484, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Link Prediction without a Training Graph", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Here, we presented Hyper-ELC, the first purely natural language-based approach to n-ary link prediction and the first model to leverage literals in n-ary qualifiers. The natural language-based approach allows us to take advantage of pre-training on massive entity-linked corpora and easily leverage the detail present in hyper-relational facts. Hyper-ELC matched state-of-the-art performance on WD50K and established state-of-the-art on a version of WikiPeople containing only literal qualifiers. However, it did not reach the performance of existing KG models on the full WikiPeople dataset. As shown in Table 6 , Hyper-ELC tends to perform significantly worse than StarE on one-tomany relationships; e.g. ([MASK] , SexOrGender, Male). One hypothesis for this result is that the softmax loss function used in training the model assumes a single correct answer out of all entities for a given masked template; for each unique training example, all competing entities (including valid ones) are treated as false. The objective function and negative sampling approach are therefore potential areas for investigation in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 714, |
|
"text": "([MASK]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 605, |
|
"end": 612, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In addition, we expect performance improvements by increasing coverage of relevant information for the entities of interest in the pre-training dataset. The WD50K and WikiPeople pre-training datasets only have 88.2% and 85.3% coverage of the WD50K and WikiPeoople entities, respectively. This could be achieved by improving the quality of the entity linking methodology used. Simple improvements could be made to our regex method, such as including the WikiData surface forms in the regex dictionaries. Even greater improvements could likely be made with feature based or neural entity linking methodologies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, we found that Hy-Transformer had the best performance on WD50K (100), though Hyper-ELC performed similarly to or better than the other KG baselines. Yu and Yang (2021) propose that Hy-Transformer's auxiliary masked qualifier prediction task allows it to better leverage the train set, which could explain why Hy-Transformer performs well on the smaller train set in WD50K (100). A similar qualifier prediction task could also be investigated in the context of a language model, which we leave for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 176, |
|
"text": "Yu and Yang (2021)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Overall, our results show how a language model can leverage weakly relevant data (an entity-linked corpus) to reach strong performance on a complex link prediction task. In particular, we note that many practical relational inference problems do not exist in isolated domains where only a structured KG model is available; rather, they are loosely informed by massive, readily available unstructured natural language datasets. In these cases, the sheer quantity and variety of data available to language models, combined with their inherent flexibility in representing context, may swing the balance in their favour. Table 7 : 37 of the roles/relationships that occur most frequently in the WD50K (100) train dataset were considered and some were altered to make templates more similar to natural language -for example improving grammar.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 617, |
|
"end": 624, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://dumps.wikimedia.org/wikidatawiki/20190801/ 4 https://www.wikidata.org/wiki/Special:EntityData", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://dl.fbaipublicfiles.com/BLINK/enwiki-pagesarticles.xml.bz26 https://github.com/RaRe-Technologies/gensim/ blob/develop/gensim/corpora/wikicorpus.py 7 https://dumps.wikimedia.org/wikidatawiki/latest/ wikidatawiki-latest-wb_items_per_site.sql.gz", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hy-Transformer did not have a published codebase, and we were unable to successfully run the published GRAN code.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": " Table 5 : Hyperparameters used for pre-training and finetuning models. During pre-training the model was trained with early stopping and a maximum number of epochs, but for finetuning only early stopping was used. Only learning rate (lr) was tuned. [0.00001, 0.0001, 0.001] were experimented with and the maximum learning rate that led to convergence was used. FF indicates the feed-foward layer and ES indicates early stopping. Limited to the relationship head/tail pairs that occur more than 500 times in the evaluation set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 8, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Freebase: a collaboratively created graph database for structuring human knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Bollacker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Praveen", |
|
"middle": [], |
|
"last": "Paritosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Sturge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 ACM SIG-MOD international conference on Management of data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1247--1250", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collabo- ratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIG- MOD international conference on Management of data, pages 1247-1250.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Translating embeddings for modeling multirelational data", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garcia-Duran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oksana", |
|
"middle": [], |
|
"last": "Yakhnenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Mlmlm: Link prediction with mean likelihood masked language model", |
|
"authors": [ |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Clou\u00e2tre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Trempe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amal", |
|
"middle": [], |
|
"last": "Zouaq", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Sarath Chandar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "FINDINGS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis Clou\u00e2tre, Philippe Trempe, Amal Zouaq, and A. P. Sarath Chandar. 2021. Mlmlm: Link prediction with mean likelihood masked language model. In FINDINGS.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Allyson", |
|
"middle": [], |
|
"last": "Ettinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "34--48", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00298" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allyson Ettinger. 2020. What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Transactions of the Association for Computational Linguistics, 8:34-48.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Do neural language representations learn physical commonsense? CoRR", |
|
"authors": [ |
|
{ |
|
"first": "Maxwell", |
|
"middle": [], |
|
"last": "Forbes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do neural language representations learn physical commonsense? CoRR, abs/1908.02899.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Message passing for hyper-relational knowledge graphs", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Galkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Priyansh", |
|
"middle": [], |
|
"last": "Trivedi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gaurav", |
|
"middle": [], |
|
"last": "Maheshwari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Usbeck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2009.10847" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, and Jens Lehmann. 2020. Message passing for hyper-relational knowledge graphs. arXiv preprint arXiv:2009.10847.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Kblrn: End-to-end learning of knowledge base representations with latent, relational, and numerical features", |
|
"authors": [ |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garc\u00eda-Dur\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Niepert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1709.04676" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alberto Garc\u00eda-Dur\u00e1n and Mathias Niepert. 2017. Kblrn: End-to-end learning of knowledge base rep- resentations with latent, relational, and numerical features. arXiv preprint arXiv:1709.04676.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Link prediction on n-ary relational data", |
|
"authors": [ |
|
{ |
|
"first": "Xiaolong", |
|
"middle": [], |
|
"last": "Saiping Guan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanzhuo", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "The World Wide Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "583--593", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saiping Guan, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2019. Link prediction on n-ary re- lational data. In The World Wide Web Conference, pages 583-593.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A survey on knowledge graphs: Representation, acquisition, and applications", |
|
"authors": [ |
|
{ |
|
"first": "Shaoxiong", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shirui", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Cambria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pekka", |
|
"middle": [], |
|
"last": "Marttinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S Yu", |
|
"middle": [], |
|
"last": "Philip", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "IEEE Transactions on Neural Networks and Learning Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Martti- nen, and S Yu Philip. 2021. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "How can we know what language models know?", |
|
"authors": [ |
|
{ |
|
"first": "Zhengbao", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
},

{

"first": "Jun",

"middle": [],

"last": "Araki",

"suffix": ""

},

{

"first": "Graham",

"middle": [],

"last": "Neubig",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "423--438", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00324" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Multi-task learning for knowledge graph completion with pre-trained language models", |
|
"authors": [ |
|
{ |
|
"first": "Bosung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taesuk", |
|
"middle": [], |
|
"last": "Hong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Youngjoong", |
|
"middle": [], |
|
"last": "Ko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jungyun", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1737--1743", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.coling-main.153" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. 2020. Multi-task learning for knowl- edge graph completion with pre-trained language models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1737-1743, Barcelona, Spain (Online). International Committee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Incorporating literals into knowledge graph embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Agustinus", |
|
"middle": [], |
|
"last": "Kristiadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [ |
|
"Asif" |
|
], |
|
"last": "Khan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Lukovnikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asja", |
|
"middle": [], |
|
"last": "Fischer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "347--363", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agustinus Kristiadi, Mohammad Asif Khan, Denis Lukovnikov, Jens Lehmann, and Asja Fischer. 2019. Incorporating literals into knowledge graph embed- dings. In International Semantic Web Conference, pages 347-363. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Zero-shot entity linking with dense entity retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Martin Josifoski Sebastian Riedel Luke Zettlemoyer Ledell", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Petroni", |
|
"suffix": "" |
|
},

{

"first": "Martin",

"middle": [],

"last": "Josifoski",

"suffix": ""

},

{

"first": "Sebastian",

"middle": [],

"last": "Riedel",

"suffix": ""

},

{

"first": "Luke",

"middle": [],

"last": "Zettlemoyer",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Josifoski Sebastian Riedel Luke Zettlemoyer Ledell Wu, Fabio Petroni. 2020. Zero-shot entity linking with dense entity retrieval. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Open sesame: Getting inside bert's linguistic knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Yongjie", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chern Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberta", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yongjie Lin, Yi Chern Tan, and Roberta Frank. 2019. Open sesame: Getting inside bert's linguistic knowl- edge. ArXiv, abs/1906.01698.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Generalizing tensor decomposition for n-ary relational knowledge bases", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quanming", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The Web Conference 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1104--1114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Liu, Quanming Yao, and Yong Li. 2020. Generaliz- ing tensor decomposition for n-ary relational knowl- edge bases. In Proceedings of The Web Conference 2020, pages 1104-1114.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Constructing knowledge graphs and their biomedical applications", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Casey", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Nicholson", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2020, |
|
"venue": "Computational and Structural Biotechnology Journal", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "1414--1428", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.csbj.2020.05.017" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David N. Nicholson and Casey S. Greene. 2020. Con- structing knowledge graphs and their biomedical ap- plications. Computational and Structural Biotechnol- ogy Journal, 18:1414-1428.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Language models as knowledge bases? arXiv preprint", |
|
"authors": [ |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Petroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Bakhtin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxiang", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.01066" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Patrick Lewis, An- ton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "How much knowledge can you pack into the parameters of a language model", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5418--5426", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.437" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the param- eters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Beyond triplets: hyper-relational knowledge graph embedding for link prediction", |
|
"authors": [ |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dingqi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Cudr\u00e9-Mauroux", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The Web Conference 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1885--1896", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paolo Rosso, Dingqi Yang, and Philippe Cudr\u00e9- Mauroux. 2020. Beyond triplets: hyper-relational knowledge graph embedding for link prediction. In Proceedings of The Web Conference 2020, pages 1885-1896.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts", |
|
"authors": [ |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasaman", |
|
"middle": [], |
|
"last": "Razeghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Logan", |
|
"suffix": "" |
|
},
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4222--4235", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.346" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Elic- iting Knowledge from Language Models with Auto- matically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Pretraining is (almost) all you need: An application to commonsense reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Tamborrino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Pellican\u00f2", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baptiste", |
|
"middle": [], |
|
"last": "Pannier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Voitot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louise", |
|
"middle": [], |
|
"last": "Naudin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Tamborrino, Nicola Pellican\u00f2, Baptiste Pan- nier, Pascal Voitot, and Louise Naudin. 2020. Pre- training is (almost) all you need: An application to commonsense reasoning. ArXiv, abs/2004.14074.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Multi-task neural network for nondiscrete attribute prediction in knowledge graphs", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Tay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anh", |
|
"middle": [], |
|
"last": "Luu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Tuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siu Cheung", |
|
"middle": [], |
|
"last": "Phan", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1029--1038", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Tay, Luu Anh Tuan, Minh C Phan, and Siu Che- ung Hui. 2017. Multi-task neural network for non- discrete attribute prediction in knowledge graphs. In Proceedings of the 2017 ACM on Conference on In- formation and Knowledge Management, pages 1029- 1038.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "What do you learn from context? probing for sentence structure in contextual", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Berlin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Poliak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Thomas" |
|
], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Najoung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextual- ized word representations. ArXiv, abs/1905.06316.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Wikidata: a free collaborative knowledgebase", |
|
"authors": [ |
|
{ |
|
"first": "Denny", |
|
"middle": [], |
|
"last": "Vrande\u010di\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Kr\u00f6tzsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Communications of the ACM", |
|
"volume": "57", |
|
"issue": "10", |
|
"pages": "78--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Communi- cations of the ACM, 57(10):78-85.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Do NLP models know numbers? probing numeracy in embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5307--5315", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1534" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know num- bers? probing numeracy in embeddings. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Structure-augmented text representation learning for efficient knowledge graph completion", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Web Conference 2021, WWW '21", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1737--1748", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3442381.3450043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. 2021a. Structure-augmented text representation learning for efficient knowledge graph completion. In Proceedings of the Web Confer- ence 2021, WWW '21, page 1737-1748, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Link prediction on n-ary relational facts: A graph-based approach", |
|
"authors": [ |
|
{ |
|
"first": "Quan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yajuan", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2105.08476" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quan Wang, Haifeng Wang, Yajuan Lyu, and Yong Zhu. 2021b. Link prediction on n-ary relational facts: A graph-based approach. arXiv preprint arXiv:2105.08476.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Knowledge graph embedding by translating on hyperplanes", |
|
"authors": [ |
|
{ |
|
"first": "Zhen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianwen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianlin", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "On the representation and embedding of knowledge bases beyond binary relations", |
|
"authors": [ |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianxin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongyi", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shini", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1604.08642" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, and Richong Zhang. 2016. On the representation and embedding of knowledge bases beyond binary relations. arXiv preprint arXiv:1604.08642.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Knowledge graph embedding with numeric attributes of entities", |
|
"authors": [ |
|
{ |
|
"first": "Yanrong", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhichun", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanrong Wu and Zhichun Wang. 2018. Knowledge graph embedding with numeric attributes of entities. In Proceedings of The Third Workshop on Represen- tation Learning for NLP, pages 132-136.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Representation learning of knowledge graphs with entity descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Ruobing", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huanbo", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Pro- ceedings of the AAAI Conference on Artificial Intelli- gence, volume 30.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Xipeng Qiu, and Xuanjing Huang. 2016. Knowledge graph representation with jointly structural and textual encoding", |
|
"authors": [ |
|
{ |
|
"first": "Jiacheng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
},

{

"first": "Xipeng",

"middle": [],

"last": "Qiu",

"suffix": ""

},

{

"first": "Xuanjing",

"middle": [],

"last": "Huang",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.08661" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiacheng Xu, Kan Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Knowledge graph representation with jointly structural and textual encoding. arXiv preprint arXiv:1611.08661.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Kgbert: Bert for knowledge graph completion", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengsheng", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kg- bert: Bert for knowledge graph completion. ArXiv, abs/1909.03193.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Improving hyper-relational knowledge graph completion", |
|
"authors": [ |
|
{ |
|
"first": "Donghan", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2104.08167" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donghan Yu and Yiming Yang. 2021. Improving hyper-relational knowledge graph completion. arXiv preprint arXiv:2104.08167.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Scalable instance reconstruction in knowledge bases via relatedness affiliated embedding", |
|
"authors": [ |
|
{ |
|
"first": "Richong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junpeng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiajie", |
|
"middle": [], |
|
"last": "Mei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongyi", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 World Wide Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1185--1194", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richong Zhang, Junpeng Li, Jiajie Mei, and Yongyi Mao. 2018. Scalable instance reconstruction in knowledge bases via relatedness affiliated embed- ding. In Proceedings of the 2018 World Wide Web Conference, pages 1185-1194.", |
|
"links": null |
|
}
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "N-ary query representation in KG vs. natural language frameworks. (a) In a knowledge graph, the primary triple query (USA, ElectedPresident, [MASK])", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Statements Train Test</td><td colspan=\"4\">Statements w/ Qualifiers (%) Statements w/ Literals (%) Train Test Train Test</td><td>Entities</td></tr><tr><td>WikiPeople Pre WikiPeople WikiPeople Lit</td><td>37.4M 294,439 294,439</td><td>380,396 37,712 3,906</td><td>-2.6 12.1</td><td>-2.6 100</td><td>-0 10.9</td><td>-0 100</td><td>29,720 34,839 34,839</td></tr><tr><td>WD50K Pre WD50K WD50K (100)</td><td>48.6M 166,435 22,738</td><td>494,881 46,159 5,297</td><td>-13.8 100</td><td>-13.1 100</td><td>-0 0</td><td>-0 0</td><td>42,800 47,155 18,791</td></tr></table>", |
|
"text": ").", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Performance comparison on the two WikiPeople-derived datasets. WikiPeople Literal evaluates only on examples with literal qualifiers (about 10.9% of the full test set) and filters ranking for evaluation with literals included. Methods above the line can encode literal terms, while methods below can't.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |