{
"paper_id": "D16-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:37:42.025870Z"
},
"title": "Jointly Embedding Knowledge Graphs and Logical Rules",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100093",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100093",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Lihong",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "National Computer Network Emergency Response Technical Team Coordination Center of China",
"institution": "",
"location": {
"postCode": "100029",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100093",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Embedding knowledge graphs into continuous vector spaces has recently attracted increasing interest. Most existing methods perform the embedding task using only fact triples. Logical rules, although containing rich background information, have not been well studied in this task. This paper proposes a novel method of jointly embedding knowledge graphs and logical rules. The key idea is to represent and model triples and rules in a unified framework. Specifically, triples are represented as atomic formulae and modeled by the translation assumption, while rules represented as complex formulae and modeled by t-norm fuzzy logics. Embedding then amounts to minimizing a global loss over both atomic and complex formulae. In this manner, we learn embeddings compatible not only with triples but also with rules, which will certainly be more predictive for knowledge acquisition and inference. We evaluate our method with link prediction and triple classification tasks. Experimental results show that joint embedding brings significant and consistent improvements over stateof-the-art methods. Particularly, it enhances the prediction of new facts which cannot even be directly inferred by pure logical inference, demonstrating the capability of our method to learn more predictive embeddings.",
"pdf_parse": {
"paper_id": "D16-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Embedding knowledge graphs into continuous vector spaces has recently attracted increasing interest. Most existing methods perform the embedding task using only fact triples. Logical rules, although containing rich background information, have not been well studied in this task. This paper proposes a novel method of jointly embedding knowledge graphs and logical rules. The key idea is to represent and model triples and rules in a unified framework. Specifically, triples are represented as atomic formulae and modeled by the translation assumption, while rules represented as complex formulae and modeled by t-norm fuzzy logics. Embedding then amounts to minimizing a global loss over both atomic and complex formulae. In this manner, we learn embeddings compatible not only with triples but also with rules, which will certainly be more predictive for knowledge acquisition and inference. We evaluate our method with link prediction and triple classification tasks. Experimental results show that joint embedding brings significant and consistent improvements over stateof-the-art methods. Particularly, it enhances the prediction of new facts which cannot even be directly inferred by pure logical inference, demonstrating the capability of our method to learn more predictive embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge graphs (KGs) provide rich structured information and have become extremely useful resources for many NLP related applications like word sense disambiguation (Wasserman-Pritsker et al., 2015) and information extraction (Hoffmann et al., 2011) . A typical KG represents knowledge as multi-relational data, stored in triples of the form (head entity, relation, tail entity), e.g., (Paris, Capital-Of, France) . Although powerful in representing structured data, the symbolic nature of such triples makes KGs, especially large-scale KGs, hard to manipulate.",
"cite_spans": [
{
"start": 167,
"end": 200,
"text": "(Wasserman-Pritsker et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 228,
"end": 251,
"text": "(Hoffmann et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 388,
"end": 415,
"text": "(Paris, Capital-Of, France)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, a promising approach, namely knowledge graph embedding, has been proposed and successfully applied to various KGs (Nickel et al., 2012; Socher et al., 2013; Bordes et al., 2014) . The key idea is to embed components of a KG including entities and relations into a continuous vector space, so as to simplify the manipulation while preserving the inherent structure of the KG. The embeddings contain rich semantic information about entities and relations, and can significantly enhance knowledge acquisition and inference .",
"cite_spans": [
{
"start": 124,
"end": 145,
"text": "(Nickel et al., 2012;",
"ref_id": "BIBREF22"
},
{
"start": 146,
"end": 166,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF29"
},
{
"start": 167,
"end": 187,
"text": "Bordes et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most existing methods perform the embedding task based solely on fact triples Wang et al., 2014; Nickel et al., 2016) . The only requirement is that the learned embeddings should be compatible with those facts. While logical rules contain rich background information and are extremely useful for knowledge acquisition and inference (Jiang et al., 2012; Pujara et al., 2013) , they have not been well studied in this task. and Wei et al. (2015) tried to leverage both embedding methods and logical rules for KG completion. In their work, however, rules are modeled separately from embedding methods, serving as postprocessing steps, and thus will not help to obtain better embeddings. Rockt\u00e4schel et al. (2015) recently proposed a joint model which injects first-order logic into embeddings. But it focuses on the relation extraction task, and creates vector embeddings for entity pairs rather than individual entities. Since entities do not have their own embeddings, relations between unpaired entities cannot be effectively discovered (Chang et al., 2014) .",
"cite_spans": [
{
"start": 78,
"end": 96,
"text": "Wang et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 97,
"end": 117,
"text": "Nickel et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 332,
"end": 352,
"text": "(Jiang et al., 2012;",
"ref_id": "BIBREF12"
},
{
"start": 353,
"end": 373,
"text": "Pujara et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 426,
"end": 443,
"text": "Wei et al. (2015)",
"ref_id": "BIBREF35"
},
{
"start": 684,
"end": 709,
"text": "Rockt\u00e4schel et al. (2015)",
"ref_id": "BIBREF28"
},
{
"start": 1037,
"end": 1057,
"text": "(Chang et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we introduce KALE, a new approach that learns entity and relation Embeddings by jointly modeling Knowledge And Logic. Knowledge triples are taken as atoms and modeled by the translation assumption, i.e., relations act as translations between head and tail entities . A triple (e i , r k , e j ) is scored by \u2225e i + r k \u2212 e j \u2225 1 , where e i , r k , and e j are the vector embeddings for entities and relations. The score is then mapped to the unit interval [0, 1] to indicate the truth value of that triple. Logical rules are taken as complex formulae constructed by combining atoms with logical connectives (e.g., \u2227 and \u21d2), and modeled by t-norm fuzzy logics (H\u00e1jek, 1998) . The truth value of a rule is a composition of the truth values of the constituent atoms, defined by specific logical connectives. In this way, KALE represents triples and rules in a unified framework, as atomic and complex formulae respectively. Figure 1 gives a simple illustration of the framework. After unifying triples and rules, KALE minimizes a global loss involving both of them to obtain entity and relation embeddings. The learned embeddings are therefore compatible not only with triples but also with rules, which will definitely be more predictive for knowledge acquisition and inference.",
"cite_spans": [
{
"start": 674,
"end": 687,
"text": "(H\u00e1jek, 1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 936,
"end": 944,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are summarized as follows. (i) We devise a unified framework that jointly models triples and rules to obtain more predictive entity and relation embeddings. The new framework KALE is general enough to handle any type of rules that can be represented as first-order logic formulae. (ii) We evaluate KALE with link prediction and triple classification tasks on WordNet (Miller, 1995) and Freebase (Bollacker et al., 2008) . Experimental results show significant and consistent improvements over state-of-the-art methods. Particularly, joint embedding enhances the prediction of new facts which cannot even be directly inferred by pure logical inference, demonstrating the capability of KALE to learn more predictive embeddings.",
"cite_spans": [
{
"start": 404,
"end": 418,
"text": "(Miller, 1995)",
"ref_id": "BIBREF19"
},
{
"start": 432,
"end": 456,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent years have seen rapid growth in KG embedding methods. Given a KG, such methods aim to encode its entities and relations into a continuous vector space, by using neural network architectures (Socher et al., 2013; Bordes et al., 2014) , matrix/tensor factorization techniques (Nickel et al., 2011; Riedel et al., 2013; Chang et al., 2014) , or Bayesian clustering strategies (Kemp et al., 2006; Xu et al., 2006; Sutskever et al., 2009) . Among these methods, TransE , which models relations as translating operations, achieves a good trade-off between prediction accuracy and computational efficiency. Various extensions like TransH (Wang et al., 2014) and Tran-sR (Lin et al., 2015b) are later proposed to further enhance the prediction accuracy of TransE. Most existing methods perform the embedding task based solely on triples contained in a KG. Some recent work tries to further incorporate other types of information available, e.g., relation paths (Neelakantan et al., 2015; Lin et al., 2015a; Luo et al., 2015) , relation type-constraints (Krompa\u00dfet al., 2015), entity types , and entity descriptions (Zhong et al., 2015) , to learn better embeddings.",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF29"
},
{
"start": 219,
"end": 239,
"text": "Bordes et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 281,
"end": 302,
"text": "(Nickel et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 303,
"end": 323,
"text": "Riedel et al., 2013;",
"ref_id": "BIBREF26"
},
{
"start": 324,
"end": 343,
"text": "Chang et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 380,
"end": 399,
"text": "(Kemp et al., 2006;",
"ref_id": "BIBREF13"
},
{
"start": 400,
"end": 416,
"text": "Xu et al., 2006;",
"ref_id": "BIBREF37"
},
{
"start": 417,
"end": 440,
"text": "Sutskever et al., 2009)",
"ref_id": "BIBREF31"
},
{
"start": 638,
"end": 657,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 670,
"end": 689,
"text": "(Lin et al., 2015b)",
"ref_id": "BIBREF16"
},
{
"start": 960,
"end": 986,
"text": "(Neelakantan et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 987,
"end": 1005,
"text": "Lin et al., 2015a;",
"ref_id": "BIBREF15"
},
{
"start": 1006,
"end": 1023,
"text": "Luo et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 1114,
"end": 1134,
"text": "(Zhong et al., 2015)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Logical rules have been widely studied in knowledge acquisition and inference, usually on the basis of Markov logic networks (Richardson and Domingos, 2006; Br\u00f6cheler et al., 2010; Pujara et al., 2013; Beltagy and Mooney, 2014) . Recently, there has been growing interest in combining logical rules and embedding models. and Wei et al. (2015) tried to utilize rules to refine predictions made by embedding models, via integer linear programming or Markov logic networks. In their work, however, rules are modeled separately from embedding models, and will not help obtain better embeddings. Rockt\u00e4schel et al. (2015) proposed a joint model that injects first-order logic into embeddings. But their work focuses on relation extraction, creating vector embeddings for entity pairs, and hence fails to discover relations between unpaired entities. This paper, in contrast, aims at learning more predictive embeddings by jointly modeling knowledge and logic. Since each entity has its own embedding, our approach can successfully make predictions between unpaired entities, providing greater flexibility for knowledge acquisition and inference.",
"cite_spans": [
{
"start": 125,
"end": 156,
"text": "(Richardson and Domingos, 2006;",
"ref_id": "BIBREF25"
},
{
"start": 157,
"end": 180,
"text": "Br\u00f6cheler et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 181,
"end": 201,
"text": "Pujara et al., 2013;",
"ref_id": "BIBREF24"
},
{
"start": 202,
"end": 227,
"text": "Beltagy and Mooney, 2014)",
"ref_id": "BIBREF0"
},
{
"start": 325,
"end": 342,
"text": "Wei et al. (2015)",
"ref_id": "BIBREF35"
},
{
"start": 591,
"end": 616,
"text": "Rockt\u00e4schel et al. (2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first describe the formulation of joint embedding. We are given a KG containing a set of triples K = {(e i , r k , e j )}, with each triple composed of two entities e i , e j \u2208 E and their relation r k \u2208 R. Here E is the entity vocabulary and R the relation set. Besides the triples, we are given a set of logical rules L, either specified manually or extracted automatically. A logical rule is encoded, for example, in the form of \u2200x, y : (x, r s , y) \u21d2 (x, r t , y), stating that any two entities linked by relation r s should also be linked by relation r t . Entities and relations are associated with vector embeddings, denoted by e, r \u2208 R d , representing their latent semantics. The proposed method, KALE, aims to learn these embeddings by jointly modeling knowledge triples K and logical rules L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Embedding Knowledge and Logic",
"sec_num": "3"
},
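As a concrete reference for the formulation above, here is a minimal Python sketch of the setup; the toy entities, relations, triples, and the single rule are hypothetical illustrations, not the paper's datasets.

```python
import numpy as np

# Toy KG in the form K = {(e_i, r_k, e_j)} (hypothetical data for illustration).
entities = ["Paris", "France", "Berlin", "Germany"]
relations = ["Capital-Of", "Located-In"]
triples = [("Paris", "Capital-Of", "France"),
           ("Berlin", "Capital-Of", "Germany")]
# One rule of the form forall x, y: (x, r_s, y) => (x, r_t, y).
rules = [("Capital-Of", "Located-In")]

d = 50  # embedding dimension
rng = np.random.default_rng(0)

def project_unit(v):
    """Project onto the unit L2 ball, the constraint used throughout Section 3."""
    n = np.linalg.norm(v)
    return v / n if n > 1.0 else v

e_emb = {e: project_unit(rng.normal(size=d)) for e in entities}
r_emb = {r: project_unit(rng.normal(size=d)) for r in relations}
```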
{
"text": "To enable joint embedding, a key ingredient of KALE is to unify triples and rules, in terms of firstorder logic (Rockt\u00e4schel et al., 2014; Rockt\u00e4schel et al., 2015) . A triple (e i , r k , e j ) is taken as a ground atom which applies a relation r k to a pair of entities e i and e j . Given a logical rule, it is first instantiated with concrete entities in the vocabulary E, resulting in a set of ground rules. For example, a universally quantified rule \u2200x, y : (x, Capital-Of, y) \u21d2 (x, Located-In, y) might be instantiated with the concrete entities of Paris and France, giving the ground rule (Paris, Capital-Of, France) \u21d2 (Paris, Located-In, France). 1 A ground rule can then be interpreted as a complex formula, constructed by combining ground atoms with logical connectives (e.g. \u2227 and \u21d2).",
"cite_spans": [
{
"start": 112,
"end": 138,
"text": "(Rockt\u00e4schel et al., 2014;",
"ref_id": "BIBREF27"
},
{
"start": 139,
"end": 164,
"text": "Rockt\u00e4schel et al., 2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
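The grounding step can be sketched as follows; this assumes, as one way to satisfy the "at least one constituent triple is observed" criterion used later, that a ground rule is kept when its antecedent triple is observed, and it reuses the hypothetical toy structures from the sketch above.

```python
def ground_type1_rules(triples, rules):
    """Instantiate forall x, y: (x, r_s, y) => (x, r_t, y) with observed triples."""
    ground = []
    for r_s, r_t in rules:
        for h, r, t in triples:
            if r == r_s:  # antecedent observed, so this ground rule is kept
                ground.append(((h, r_s, t), (h, r_t, t)))
    return ground

# e.g. [(('Paris', 'Capital-Of', 'France'), ('Paris', 'Located-In', 'France')), ...]
```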
{
"text": "Let F denote the set of training formulae, both atomic (triples) and complex (ground rules). KALE further employs a truth function I : F \u2192 [0, 1] to assign a soft truth value to each formula, indicating how likely a triple holds or to what degree a ground rule is satisfied. The truth value of a triple is determined by the corresponding entity and relation embeddings. The truth value of a ground rule is determined by the truth values of the constituent triples, via specific logical connectives. In this way, KALE models triples and rules in a unified framework. See Figure 1 for an overview. Finally, KALE minimizes a global loss over the training formulae F to learn entity and relation embeddings compatible with both triples and rules. In what follows, we describe the key components of KALE, including triple modeling, rule modeling, and joint learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 570,
"end": 578,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "To model triples we follow TransE , as it is simple and efficient while achieving state-of-the-art predictive performance. Specifically, given a triple (e i , r k , e j ), we model the relation embedding r k as a translation between the entity embeddings e i and e j , i.e., we want e i + r k \u2248 e j when the triple holds. The intuition here originates from linguistic regularities such as France \u2212 Paris = Germany \u2212 Berlin (Mikolov et al., 2013) . In relational data, such analogy holds because of the certain relation Capital-Of, through which we will get Paris + Capital-Of = France and Berlin + Capital-Of = Germany. Then, we score each triple on the basis of \u2225e i + r k \u2212 e j \u2225 1 , and define its soft truth value as",
"cite_spans": [
{
"start": 423,
"end": 445,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Modeling",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "I (e i , r k , e j ) = 1 \u2212 1 3 \u221a d \u2225e i + r k \u2212 e j \u2225 1 ,",
"eq_num": "(1)"
}
],
"section": "Triple Modeling",
"sec_num": "3.2"
},
{
"text": "where d is the dimension of the embedding space. It is easy to see that I (e i , r k , e j ) \u2208 [0, 1] with the constraints \u2225e i \u2225 2 \u2264 1, \u2225e j \u2225 2 \u2264 1, and \u2225r k \u2225 2 \u2264 1. 2 I (e i , r k , e j ) is expected to be large if the triple holds, and small otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Modeling",
"sec_num": "3.2"
},
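Eq. (1) transcribes directly into code; a minimal sketch assuming numpy vectors whose L2 norms are at most 1, which keeps the value in [0, 1].

```python
import numpy as np

def triple_truth(e_i, r_k, e_j):
    """Soft truth value of a triple, Eq. (1)."""
    d = e_i.shape[0]
    # 1 - ||e_i + r_k - e_j||_1 / (3 * sqrt(d))
    return 1.0 - np.abs(e_i + r_k - e_j).sum() / (3.0 * np.sqrt(d))
```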
{
"text": "To model rules we use t-norm fuzzy logics (H\u00e1jek, 1998) , which define the truth value of a complex formula as a composition of the truth values of its constituents, through specific t-norm based logical connectives. We follow Rockt\u00e4schel et al. (2015) and use the product t-norm. The compositions associated with logical conjunction (\u2227), disjunction (\u2228), and negation (\u00ac) are defined as follow:",
"cite_spans": [
{
"start": 42,
"end": 55,
"text": "(H\u00e1jek, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 227,
"end": 252,
"text": "Rockt\u00e4schel et al. (2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
{
"text": "I(f 1 \u2227 f 2 ) = I(f 1 )\u2022I(f 2 ), I(f 1 \u2228 f 2 ) = I(f 1 ) + I(f 2 ) \u2212 I(f 1 )\u2022I(f 2 ), I(\u00acf 1 ) = 1 \u2212 I(f 1 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
{
"text": "where f 1 and f 2 are two constituent formulae, either atomic or complex. Given these compositions, the truth value of any complex formula can be calculated recursively, e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
{
"text": "I(\u00acf 1 \u2227 f 2 ) = I(f 2 ) \u2212 I(f 1 )\u2022I(f 2 ), I(f 1 \u21d2 f 2 ) = I(f 1 )\u2022I(f 2 ) \u2212 I(f 1 ) + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
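These compositions are simple enough to state as code; a sketch over scalar truth values in [0, 1], with the implication derived from I(f1 \u21d2 f2) = I(\u00acf1 \u2228 f2).

```python
def t_and(a, b):      # I(f1 AND f2) = I(f1) * I(f2)
    return a * b

def t_or(a, b):       # I(f1 OR f2) = I(f1) + I(f2) - I(f1) * I(f2)
    return a + b - a * b

def t_not(a):         # I(NOT f1) = 1 - I(f1)
    return 1.0 - a

def t_implies(a, b):  # I(f1 => f2) = I(f1) * I(f2) - I(f1) + 1
    return a * b - a + 1.0
```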
{
"text": ". This paper considers two types of rules. The first type is \u2200x, y : (x, r s , y) \u21d2 (x, r t , y). Given a ground rule f (e m , r s , e n ) \u21d2 (e m , r t , e n ), the truth value is calculated as: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
{
"text": "I(f) = I(e_m, r_s, e_n) \u2022 I(e_m, r_t, e_n) \u2212 I(e_m, r_s, e_n) + 1, (2) where I(\u00b7, \u00b7, \u00b7) is the truth value of a constituent triple, defined by Eq. (1). The second type is \u2200x, y, z : (x, r_s1, y) \u2227 (y, r_s2, z) \u21d2 (x, r_t, z). Given a ground rule f : (e_\u2113, r_s1, e_m) \u2227 (e_m, r_s2, e_n) \u21d2 (e_\u2113, r_t, e_n), the truth value is: I(f) = I(e_\u2113, r_s1, e_m) \u2022 I(e_m, r_s2, e_n) \u2022 I(e_\u2113, r_t, e_n) \u2212 I(e_\u2113, r_s1, e_m) \u2022 I(e_m, r_s2, e_n) + 1. (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
{
"text": "The larger the truth values are, the better the ground rules are satisfied. It is easy to see that besides these two types of rules, the KALE framework is general enough to handle any rules that can be represented as first-order logic formulae. The investigation of other types of rules will be left for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Modeling",
"sec_num": "3.3"
},
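Composed with the triple score of Eq. (1), the two rule types reduce to a few calls; a sketch reusing the hypothetical triple_truth, t_and, and t_implies helpers from the earlier sketches.

```python
def rule1_truth(e_m, r_s, r_t, e_n):
    # Eq. (2): I(f) = I_s * I_t - I_s + 1 for (e_m, r_s, e_n) => (e_m, r_t, e_n)
    return t_implies(triple_truth(e_m, r_s, e_n), triple_truth(e_m, r_t, e_n))

def rule2_truth(e_l, r_s1, e_m, r_s2, e_n, r_t):
    # Eq. (3): conjunction of the two antecedent triples implies the consequent
    body = t_and(triple_truth(e_l, r_s1, e_m), triple_truth(e_m, r_s2, e_n))
    return t_implies(body, triple_truth(e_l, r_t, e_n))
```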
{
"text": "After unifying triples and rules as atomic and complex formulae, we minimize a global loss over this general representation to learn entity and relation embeddings. We first construct a training set F containing all positive formulae, including (i) observed triples, and (ii) ground rules in which at least one constituent triple is observed. Then we minimize a margin-based ranking loss, enforcing positive formulae to have larger truth values than negative ones:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min {e},{r} \u2211 f + \u2208F \u2211 f \u2212 \u2208N f + [ \u03b3 \u2212 I(f + ) + I(f \u2212 ) ] + , s.t. \u2225e\u2225 2 \u2264 1, \u2200e \u2208 E; \u2225r\u2225 2 \u2264 1, \u2200r \u2208 R.",
"eq_num": "(4)"
}
],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "Here f + \u2208 F is a positive formula, f \u2212 \u2208 N f + a negative one constructed for f + , \u03b3 a margin separating positive and negative formulae, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "[x] + max{0, x}. If f + (e i , r k , e j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "is a triple, we construct f \u2212 by replacing either e i or e j with a random entity e \u2208 E, and calculate its truth value according to Eq. (1). For example, we might generate a negative instance (Paris, Capital-Of, Germany) for the triple (Paris, Capital-Of, France). If f + (e m , r s , e n ) \u21d2 (e m , r t , e n ) or (e \u2113 , r s 1 , e m ) \u2227 (e m , r s 2 , e n ) \u21d2 (e \u2113 , r t , e n ) is a ground rule, we construct f \u2212 by replacing r t in the consequent with a random relation r \u2208 R, and calculate its truth value according to Eq. (2) or Eq. (3). For example, given a ground rule (Paris, Capital-Of, France) \u21d2 (Paris, Located-In, France), a possible negative instance (Paris, Capital-Of, France)\u21d2 (Paris, Has-Spouse, France) could be generated. We believe that most instances (both triples and ground rules) generated in this way are truly negative. Stochastic gradient descent in mini-batch mode is used to carry out the minimization. To satisfy the \u2113 2 -constraints, e and r are projected to the unit \u2113 2 -ball before each mini-batch. Embeddings learned in this way are required to be compatible with not only triples but also rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
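The negative sampling and hinge loss described here can be summarized in a short forward-pass sketch (no gradient step shown), assuming the toy triples/entities/e_emb/r_emb and the triple_truth helper from the earlier sketches.

```python
import random

gamma = 0.12  # margin; one of the values tuned in Section 4.1

def corrupt(triple, entities):
    """Build a negative triple f- by replacing the head or the tail."""
    h, r, t = triple
    if random.random() < 0.5:
        return (random.choice(entities), r, t)
    return (h, r, random.choice(entities))

total_loss = 0.0
for h, r, t in triples:
    pos = triple_truth(e_emb[h], r_emb[r], e_emb[t])
    h2, r2, t2 = corrupt((h, r, t), entities)
    neg = triple_truth(e_emb[h2], r_emb[r2], e_emb[t2])
    total_loss += max(0.0, gamma - pos + neg)  # [gamma - I(f+) + I(f-)]_+
# an SGD step on total_loss would follow, with every embedding projected
# back onto the unit L2 ball after each mini-batch, as described above
```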
{
"text": "Complexity. We compare KALE with several stateof-the-art embedding methods in space complexity and time complexity (per iteration) during learning. of the embedding space, and n e /n r /n t /n g is the number of entities/relations/triples/ground rules. The results indicate that incorporating additional rules will not significantly increase the space or time complexity of KALE, keeping the model complexity almost the same as that of TransE (optimal among the methods listed in the table). But please note that KALE needs to ground universally quantified rules before learning, which further requires O(n u n t /n r ) in time complexity. Here, n u is the number of universally quantified rules, and n t /n r is the averaged number of observed triples per relation. During grounding, we select those ground rules with at least one triple observed. Grounding is required only once before learning, and is not included during the iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "3.5"
},
{
"text": "Extensions. Actually, our approach is quite general. (i) Besides TransE, a variety of embedding methods, e.g., those listed in Table 1 , can be used for triple modeling (Section 3.2), as long as we further define a mapping f : R \u2192 [0, 1] to map original scores to soft truth values. (ii) Besides the two types of rules introduced in Section 3.3, other types of rules can also be handled as long as they can be represented as first-order logic formulae. (iii) Besides the product t-norm, other types of t-norm based fuzzy logics can be used for rule modeling (Section 3.3), e.g., the \u0141ukasiewicz t-norm used in probabilistic soft logic (Br\u00f6cheler et al., 2010) and the minimum t-norm used in fuzzy description logic (Stoilos et al., 2007) . (iv) Besides the pairwise ranking loss, other types of loss functions can be designed for joint learning (Section 3.4), e.g., the pointwise squared loss or the logarithmic loss (Rockt\u00e4schel et al., 2014; Rockt\u00e4schel et al., 2015) .",
"cite_spans": [
{
"start": 635,
"end": 659,
"text": "(Br\u00f6cheler et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 715,
"end": 737,
"text": "(Stoilos et al., 2007)",
"ref_id": "BIBREF30"
},
{
"start": 917,
"end": 943,
"text": "(Rockt\u00e4schel et al., 2014;",
"ref_id": "BIBREF27"
},
{
"start": 944,
"end": 969,
"text": "Rockt\u00e4schel et al., 2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "3.5"
},
{
"text": "We empirically evaluate KALE with two tasks: (i) link prediction and (ii) triple classification. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Datasets. We use two datasets: WN18 and FB122. WN18 is a subgraph of WordNet containing 18 relations. FB122 is composed of 122 Freebase relations regarding the topics of \"people\", \"location\", and \"sports\", extracted from FB15K. Both WN18 and F-B15K are released by 3 . Triples on each dataset are split into training/validation/test sets, used for model training, parameter tuning, and evaluation respectively. For WN18 we use the original data split, and for FB122 we extract triples associated with the 122 relations from the training, validation, and test sets of FB15K. We further create logical rules for each dataset, in the form of \u2200x, y :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "(x, r s , y) \u21d2 (x, r t , y) or \u2200x, y, z : (x, r s 1 , y) \u2227 (y, r s 2 , z) \u21d2 (x, r t , z).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "To do so, we first run TransE to get entity and relation embeddings, and calculate the truth value for each of such rules according to Eq. (2) or Eq. (3). Then we rank all such rules by their truth values and manually filter those ranked at the top. We finally create 47 rules on FB122, and 14 on WN18 (see Table 2 for examples). The rules are then instantiated with concrete entities (grounding). Ground rules in which at least one constituent triple is observed in the training set are used in joint learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
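The rule-ranking step can be sketched as follows, with one plausible aggregation (averaging ground-rule truth values over observed antecedents; the paper does not spell out the aggregation), reusing the hypothetical helpers and toy embeddings from the earlier sketches.

```python
def rule_score(r_s, r_t, triples, e_emb, r_emb):
    """Average Eq. (2) truth value of a candidate rule over its groundings."""
    vals = []
    for h, r, t in triples:
        if r != r_s:
            continue
        a = triple_truth(e_emb[h], r_emb[r_s], e_emb[t])
        b = triple_truth(e_emb[h], r_emb[r_t], e_emb[t])
        vals.append(t_implies(a, b))
    return sum(vals) / len(vals) if vals else 0.0

candidates = [(rs, rt) for rs in relations for rt in relations if rs != rt]
ranked = sorted(candidates,
                key=lambda p: rule_score(p[0], p[1], triples, e_emb, r_emb),
                reverse=True)
# top-ranked candidates are then filtered manually, as described above
```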
{
"text": "Note that some of the test triples can be inferred by directly applying these rules on the training set (pure logical inference). On each dataset, we further split the test set into two parts, test-I and test-II. The former contains triples that cannot be directly inferred by pure logical inference, and the latter the remaining test triples. Table 3 gives some statistics of the datasets, including the number of entities, relations, triples in training/validation/test-I/test-II set, and ground rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Comparison settings. As baselines we take the embedding techniques of TransE, TransH, and Tran-sR. TransE models relation embeddings as translation operations between entity embeddings. TransH \u2200x, y : /sports/athlete/team(x, y) \u21d2 /sports/sports team/player(y, x) \u2200x, y : /location/country/capital(x, y) \u21d2 /location/location/contains(x, y) \u2200x, y, z : /people/person/nationality(x, y) \u2227 /location/country/official language(y, z) \u21d2 /people/person/languages(x, z) \u2200x, y, z : /country/administrative divisions(x, y) \u2227 /administrative division/capital(y, z) \u21d2 /country/second level divisions(x, z) \u2200x, y : hypernym(x, y) \u21d2 hyponym(y, x) \u2200x, y : instance hypernym(x, y) \u21d2 instance hyponym(y, x) \u2200x, y : synset domain topic of(x, y) \u21d2 member of domain topic(y, x) and TransR are extensions of TransE. They further allow entities to have distinct embeddings when involved in different relations, by introducing relationspecific hyperplanes and projection matrices respectively. All the three methods have been demonstrated to perform well on WordNet and Freebase data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We further test our approach in three different scenarios. (i) KALE-Trip uses triples alone to perform the embedding task, i.e., only the training triples are included in the optimization Eq. (4). It is a linearly transformed version of TransE. The only difference is that relation embeddings are normalized in KALE-Trip, but not in TransE. (ii) KALE-Pre first repeats pure logical inference on the training set and adds inferred triples as additional training data, until no further triples can be inferred. Both original and inferred triples are then included in the optimization. For example, given a logical rule \u2200x, y : (x, r s , y) \u21d2 (x, r t , y), a new triple (e i , r t , e j ) can be inferred if (e i , r s , e j ) is observed in the training set, and both triples will be used as training instances for embedding. (iii) KALE-Joint is the joint learning scenario, which considers both training triples and ground rules in the optimization. In the aforementioned example, training triple (e i , r s , e j ) and ground rule (e i , r s , e j ) \u21d2 (e i , r t , e j ) will be used in the training process of KALE-Joint, without explicitly incorporating triple (e i , r t , e j ). Among the methods, TransE/TransH/TransR and KALE-Trip use only triples, while KALE-Pre/KALE-Joint further incorporates rules, before or during embedding. Implementation details. We use the code provided by for TransE 4 , and the code provided by Lin et al. (2015b) for TransH and Tran-sR 5 . KALE is implemented in Java. Note that Lin et al. (2015b) initialized TransR with the results of TransE. However, to ensure fair comparison, we randomly initialize all the methods in our experiments. For all the methods, we create 100 mini-batches on each dataset, and tune the embedding dimension d in {20, 50, 100}. For TransE, TransH, and Tran-sR which score a triple by a distance in R + , we tune the learning rate \u03b7 in {0.001, 0.01, 0.1}, and the margin \u03b3 in {1, 2, 3, 4}. For KALE which scores a triple (as well as a ground rule) by a soft truth value in the unit interval [0, 1], we set the learning rate \u03b7 in {0.01, 0.02, 0.05, 0.1}, and the margin \u03b3 in {0.1, 0.12, 0.15, 0.2}. KALE allows triples and rules to have different weights, with the former fixed to 1, and the latter (denoted by \u03bb) selected in {0.001, 0.01, 0.1, 1}.",
"cite_spans": [
{
"start": 1429,
"end": 1447,
"text": "Lin et al. (2015b)",
"ref_id": "BIBREF16"
},
{
"start": 1514,
"end": 1532,
"text": "Lin et al. (2015b)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
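For KALE-Pre, repeating pure logical inference until no new triple can be inferred amounts to computing a fixed point; a sketch for rules of the first type (a hypothetical helper, not the authors' released code).

```python
def logical_closure(triples, rules):
    """Apply (x, r_s, y) => (x, r_t, y) rules until no new triple is inferred."""
    known = set(triples)
    changed = True
    while changed:
        changed = False
        for r_s, r_t in rules:
            for h, r, t in list(known):
                if r == r_s and (h, r_t, t) not in known:
                    known.add((h, r_t, t))
                    changed = True
    return known
```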
{
"text": "This task is to complete a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ). Evaluation protocol. We follow the same evaluation protocol used in TransE . For each test triple (e i , r k , e j ), we replace the head entity e i by every entity e \u2032 i in the dictionary, and calculate the truth value (or distance) for the corrupted triple (e \u2032 i , r k , e j ). Ranking the truth values in descending order (or the distances in ascending order), we get the rank of the correct entity e i . Similarly, we can get another rank by corrupting the tail entity e j . Aggregated over all the test triples, we report three metrics: (i) the mean reciprocal rank (MRR), (ii) the median of the ranks (MED), and (iii) the proportion of ranks no larger than n (HITS@N). We do not report the averaged rank (i.e., the \"Mean Rank\" metric used by ), since it is usually sensitive to outliers (Nickel et al., 2016) .",
"cite_spans": [
{
"start": 944,
"end": 965,
"text": "(Nickel et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Link Prediction",
"sec_num": "4.2"
},
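The filtered ranking protocol and the three metrics can be made precise in a few lines; a sketch assuming a generic score(h, r, t) function returning a soft truth value (higher is better), shown for tail corruption only since the head side is symmetric.

```python
import numpy as np

def filtered_rank(test_triple, entities, all_true, score):
    """Rank of the true tail among corruptions, filtered setting."""
    h, r, t = test_triple
    target = score(h, r, t)
    rank = 1
    for e in entities:
        if e == t or (h, r, e) in all_true:
            continue  # skip corruptions that are themselves valid triples
        if score(h, r, e) > target:
            rank += 1
    return rank

def metrics(ranks, n=10):
    ranks = np.asarray(ranks, dtype=float)
    return {"MRR": float(np.mean(1.0 / ranks)),
            "MED": float(np.median(ranks)),
            f"HITS@{n}": float(np.mean(ranks <= n))}
```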
{
"text": "Note that a corrupted triple may exist in KGs, which should also be taken as a valid triple. Consider a test triple (Paris, Located-In, France) ing instances in KALE-Pre, or encoded explicitly in training ground rules in KALE-Joint, making this set trivial for the rules to some extent. From the results, we can see that in both settings: (i) KALE-Pre and KALE-Joint outperform (or at least perform as well as) the other methods which use triples alone on almost all the test sets, demonstrating the superiority of incorporating logical rules. (ii) On the test-I sets which contain triples beyond the scope of pure logical inference, KALE-Joint performs significantly better than KALE-Pre. On these sets KALE-Joint can still beat all the baselines by a significant margin in most cases, while KALE-Pre can hardly outperform KALE-Trip. It demonstrates the capability of the joint embedding scenario to learn more predictive embeddings, through which we can make better predictions even beyond the scope of pure logical inference. (iii) On the test-II sets which contain directly inferable triples, KALE-Pre can easily beat all the baselines (even KALE-Joint). That means, for triples covered by pure logical inference, it is trivial to improve the performance by directly incorporating them as training instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Prediction",
"sec_num": "4.2"
},
{
"text": "To better understand how the joint embedding scenario can learn more predictive embeddings, on each dataset we further split the test-I set into two parts. Given a triple (e i , r k , e j ) in the test-I set, we assign it to the first part if relation r k is covered by the rules, and the second part otherwise. We call the two parts Test-Incl and Test-Excl respectively. Table 6 compares the performance of KALE-Trip and KALE-Joint on the two parts. The results show that KALE-Joint outperforms KALE-Trip on both parts, but the improvements on Test-Incl are much more significant than those on Test-Excl. Take the filtered setting on WN18 as an example. On Test-Incl, KALE-Joint increases the metric MRR by 55.7%, decreases the metric MED by 26.9%, and increas-es the metric HITS@10 by 38.2%. On Test-Excl, however, MRR rises by 3.1%, MED remains the same, and HITS@10 rises by only 0.3%. This observation indicates that jointly embedding triples and rules helps to learn more predictive embeddings, especially for those relations that are used to construct the rules. This might be the main reason that KALE-Joint can make better predictions even beyond the scope of pure logical inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Prediction",
"sec_num": "4.2"
},
{
"text": "This task is to verify whether an unobserved triple (e i , r k , e j ) is correct or not. Evaluation protocol. We take the following evaluation protocol similar to that used in TransH (Wang et al., 2014) . We first create labeled data for evaluation. For each triple in the test or validation set (i.e., a positive triple), we construct 10 negative triples for it by randomly corrupting the entities, 5 at the head position and the other 5 at the tail position. 6 To make the negative triples as difficult as possible, we corrupt a position using only entities that have appeared in that position, and further ensure that the corrupted triples do not exist in either the training, validation, or test set. We simply use the truth values (or distances) to classify triples. Triples with large truth values (or small distances) tend to be predicted as positive. To evaluate, we first rank the triples associated with each specific relation (in descending order according to their truth values, or in ascending order according to the distances), and calculate the average precision for that relation. We then report on the test sets the mean average precision (MAP) aggregated over different relations.",
"cite_spans": [
{
"start": 184,
"end": 203,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 462,
"end": 463,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Classification",
"sec_num": "4.3"
},
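The per-relation average precision and its mean over relations can be sketched as below, assuming a hypothetical mapping `scored` from each relation to (truth value, is_positive) pairs for its labeled test triples; this is not the authors' evaluation script.

```python
def average_precision(pairs):
    """AP of one relation's ranking over (truth_value, is_positive) pairs."""
    pairs = sorted(pairs, key=lambda p: p[0], reverse=True)
    hits, ap = 0, 0.0
    for rank, (_, positive) in enumerate(pairs, start=1):
        if positive:
            hits += 1
            ap += hits / rank  # precision at this recall point
    return ap / hits if hits else 0.0

def mean_average_precision(scored):
    """MAP aggregated over relations, as reported in Table 7."""
    aps = [average_precision(pairs) for pairs in scored.values()]
    return sum(aps) / len(aps)
```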
{
"text": "Optimal configurations. The hyperparameters of each method are again tuned in the ranges specified in Section 4.1, and the best models are selected by maximizing MAP on the validation set. The optimal configurations for KALE are: d = 100, \u03b7 = 0.1, \u03b3 = 0.2, and \u03bb = 0.1 on FB122; d = 100, \u03b7 = 0.1, \u03b3 = 0.2, and \u03bb = 0.001 on WN18. Again, we use the same configuration for KALE-Trip, KALE-Pre, and KALE-Joint on each dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Classification",
"sec_num": "4.3"
},
{
"text": "Results. Table 7 shows the results on the test-I, test-II, and test-all sets of our datasets. From the results, we can see that: (i) KALE-Pre and KALE-Joint outperform the other methods which use triples alone on almost all the test sets, demonstrating the superiority of incorporating logical rules. (ii) KALE-Joint performs better than KALE-Pre on the test-I sets, i.e., triples that cannot be directly inferred by performing pure logical inference on the training set. This observation is similar to that observed in the link prediction task, demonstrating that the joint embedding scenario can learn more predictive embeddings and make predictions beyond the capability of pure logical inference.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Triple Classification",
"sec_num": "4.3"
},
{
"text": "In this paper, we propose a new method for jointly embedding knowledge graphs and logical rules, referred to as KALE. The key idea is to represent and model triples and rules in a unified framework. Specifically, triples are represented as atomic formulae and modeled by the translation assumption, while rules as complex formulae and by the t-norm fuzzy logics. A global loss on both atomic and complex formulae is then minimized to perform the embedding task. Embeddings learned in this way are compatible not only with triples but also with rules, which are certainly more useful for knowledge acquisition and inference. We evaluate KALE with the link prediction and triple classification tasks on WordNet and Freebase data. Experimental results show that joint embedding brings significant and consistent improvements over state-of-the-art methods. More importantly, it can obtain more predictive embeddings and make better predictions even beyond the scope of pure logical inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "For future work, we would like to (i) Investigate the efficacy of incorporating other types of logical rules such as \u2200x, y, z : (x, Capital-Of, y) \u21d2 \u00ac(x, Capital-Of, z). (ii) Investigate the possibility of modeling logical rules using only relation embeddings as suggested by Demeester et al. (2016) , e.g., modeling the above rule using only the embedding associated with Capital-Of. This avoids grounding, which might be time and space inefficient especially for complicated rules. (iii) Investigate the use of automatically extracted rules which are no longer hard rules and tolerant of uncertainty.",
"cite_spans": [
{
"start": 276,
"end": 299,
"text": "Demeester et al. (2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Our approach actually takes as input rules represented in first-order logic, i.e., those with quantifiers such as \u2200. But it could be hard to deal with quantifiers, so we use ground rules, i.e., propositional statements during learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that 0 \u2264 \u2225ei + r k \u2212 ej\u2225 1 \u2264 \u2225ei\u2225 1 + \u2225r k \u2225 1 +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://everest.hds.utc.fr/doku.php?id=en:smemlj12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/glorotxa/SME 5 https://github.com/mrlyk423/relation extraction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Previous work typically constructs only a single negative case for each positive one. We empirically found such a balanced classification task too simple for our datasets. So we consider a highly unbalanced setting, with a positive-to-negative ratio of 1:10, for which the previously used metric accuracy is no longer suitable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their insightful comments and suggestions. This research is supported by the National Natural Science Foundation of China (grant No. 61402465) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Efficient markov logic inference for natural language semantics",
"authors": [
{
"first": "Islam",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 28th AAAI Conference on Artificial Intelligence -Workshop on Statistical Relational Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "9--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Islam Beltagy and Raymond J. Mooney. 2014. Efficient markov logic inference for natural language semantics. In Proceedings of the 28th AAAI Conference on Arti- ficial Intelligence -Workshop on Statistical Relational Artificial Intelligence, pages 9-14.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim S- turge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning structured embeddings of knowledge bases",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "301--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embed- dings of knowledge bases. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, pages 301-306.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Translating embeddings for modeling multi-relational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Dur\u00e1n",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 27th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Dur\u00e1n, Jason Weston, and Oksana Yakhnenko. 2013. Trans- lating embeddings for modeling multi-relational da- ta. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems, pages 2787- 2795.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A semantic matching energy function for learning with multi-relational data. Machine Learning",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "94",
"issue": "",
"pages": "233--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Ma- chine Learning, 94(2):233-259.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Probabilistic similarity logic",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Br\u00f6cheler",
"suffix": ""
},
{
"first": "Lilyana",
"middle": [],
"last": "Mihalkova",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Br\u00f6cheler, Lilyana Mihalkova, and Lise Getoor. 2010. Probabilistic similarity logic. In Proceedings of the 26th Conference on Uncertainty in Artificial Intel- ligence, pages 73-82.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Typed tensor decomposition of knowledge bases for relation extraction",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-wei Chang, Wen-tau Yih, Bishan Yang, and Christo- pher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1568-1579.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Regularizing relation representations by first-order implications",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies -Workshop on Automated Knowledge Base Construction",
"volume": "",
"issue": "",
"pages": "75--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Demeester, Tim Rockt\u00e4schel, and Sebastian Riedel. 2016. Regularizing relation representations by first-order implications. In Proceedings of the 2016 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Lan- guage Technologies -Workshop on Automated Knowl- edge Base Construction, pages 75-80.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantically smooth knowledge graph embedding",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "84--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2015. Semantically smooth knowledge graph embedding. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 84-94.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The metamathematics of fuzzy logic",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "H\u00e1jek",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petr H\u00e1jek. 1998. The metamathematics of fuzzy logic. Kluwer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computation- al Linguistics: Human Language Technologies, pages 541-550.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A latent factor model for highly multi-relational data",
"authors": [
{
"first": "Rodolphe",
"middle": [],
"last": "Jenatton",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [
"L"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [
"R"
],
"last": "Obozinski",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 26th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3167--3175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodolphe Jenatton, Nicolas L. Roux, Antoine Bordes, and Guillaume R. Obozinski. 2012. A latent factor model for highly multi-relational data. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, pages 3167-3175.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning to refine an automatically extracted knowledge base using markov logic",
"authors": [
{
"first": "Shangpu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 12th IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "912--917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangpu Jiang, Daniel Lowd, and Dejing Dou. 2012. Learning to refine an automatically extracted knowl- edge base using markov logic. In Proceedings of 12th IEEE International Conference on Data Mining, pages 912-917.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning systems of concepts with an infinite relational model",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Naonori",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "381--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Kemp, Joshua B. Tenenbaum, Thomas L. Grif- fiths, Takeshi Yamada, and Naonori Ueda. 2006. Learning systems of concepts with an infinite relation- al model. In Proceedings of the 21st AAAI Conference on Artificial Intelligence, pages 381-388.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Type-constrained representation learning in knowledge graphs",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Krompa\u00df",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Baier",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "640--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Krompa\u00df, Stephan Baier, and Volker Tresp. 2015. Type-constrained representation learning in knowl- edge graphs. In Proceedings of the 14th International Semantic Web Conference, pages 640-655.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Modeling relation paths for representation learning of knowledge bases",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Song",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "705--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 705- 714.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning entity and relation embeddings for knowledge graph completion",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 29th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2181--2187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation em- beddings for knowledge graph completion. In Pro- ceedings of the 29th AAAI Conference on Artificial In- telligence, pages 2181-2187.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Context-dependent knowledge graph embedding",
"authors": [
{
"first": "Yuanfei",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1656--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanfei Luo, Quan Wang, Bin Wang, and Li Guo. 2015. Context-dependent knowledge graph embed- ding. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1656-1661.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 746-751.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Compositional vector space models for knowledge base completion",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mc-Callum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "156--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Benjamin Roth, and Andrew Mc- Callum. 2015. Compositional vector space model- s for knowledge base completion. In Proceedings of the 53rd Annual Meeting of the Association for Com- putational Linguistics and the 7th International Join- t Conference on Natural Language Processing, pages 156-166.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A three-way model for collective learning on multi-relational data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Hans",
"middle": [
"P"
],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "809--816",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans P. Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th In- ternational Conference on Machine Learning, pages 809-816.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Factorizing yago: Scalable machine learning for linked data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Hans",
"middle": [
"P"
],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans P. Kriegel. 2012. Factorizing yago: Scalable machine learning for linked data. In Proceedings of the 21st International Conference on World Wide Web, pages 271-280.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Holographic embeddings of knowledge graphs",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Rosasco",
"suffix": ""
},
{
"first": "Tomaso",
"middle": [],
"last": "Poggio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1955--1961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Lorenzo Rosasco, and Tomaso Pog- gio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, pages 1955-1961.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Knowledge graph identification",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Pujara",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 12th International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "542--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay Pujara, Hui Miao, Lise Getoor, and William Cohen. 2013. Knowledge graph identification. In Proceed- ings of the 12th International Semantic Web Confer- ence, pages 542-557.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Markov logic networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning",
"volume": "62",
"issue": "",
"pages": "107--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine Learning, 62(1- 2):107-136.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Pro- ceedings of the 2013 Conference on North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, pages 74-84.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Low-dimensional embeddings of logic",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Matko",
"middle": [],
"last": "Bo\u0161njak",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics -Workshop on Semantic Parsing",
"volume": "",
"issue": "",
"pages": "45--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Matko Bo\u0161njak, Sameer Singh, and Se- bastian Riedel. 2014. Low-dimensional embeddings of logic. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics - Workshop on Semantic Parsing, pages 45-49.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Injecting logical background knowledge into embeddings for relation extraction",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1119--1129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Sameer Singh, and Sebastian Riedel. 2015. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1119-1129.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 27th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Pro- ceedings of the 27th Annual Conference on Neural In- formation Processing Systems, pages 926-934.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Reasoning with very expressive fuzzy description logics",
"authors": [
{
"first": "Giorgos",
"middle": [],
"last": "Stoilos",
"suffix": ""
},
{
"first": "Giorgos",
"middle": [
"B"
],
"last": "Stamou",
"suffix": ""
},
{
"first": "Jeff",
"middle": [
"Z"
],
"last": "Pan",
"suffix": ""
},
{
"first": "Vassilis",
"middle": [],
"last": "Tzouvaras",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Horrocks",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Artificial Intelligence Research",
"volume": "30",
"issue": "",
"pages": "273--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giorgos Stoilos, Giorgos B. Stamou, Jeff Z. Pan, Vassilis Tzouvaras, and Ian Horrocks. 2007. Reasoning with very expressive fuzzy description logics. Journal of Artificial Intelligence Research, 30:273-320.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Modelling relational data using bayesian clustered tensor factorization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 23rd Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1821--1828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Joshua B. Tenenbaum, and Ruslan R. Salakhutdinov. 2009. Modelling relational data using bayesian clustered tensor factorization. In Proceed- ings of the 23rd Annual Conference on Neural Infor- mation Processing Systems, pages 1821-1828.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Knowledge graph embedding by translating on hyperplanes",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 28th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1112--1119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pages 1112-1119.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Knowledge base completion using embeddings and rules",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1859--1865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Wang, Bin Wang, and Li Guo. 2015. Knowledge base completion using embeddings and rules. In Pro- ceedings of the 24th International Joint Conference on Artificial Intelligence, pages 1859-1865.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning to identify the best contexts for knowledge-based wsd",
"authors": [
{
"first": "Evgenia",
"middle": [],
"last": "Wasserman-Pritsker",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Minkov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1662--1667",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgenia Wasserman-Pritsker, William W. Cohen, and Einat Minkov. 2015. Learning to identify the best contexts for knowledge-based wsd. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1662-1667.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Large-scale knowledge base completion: inferring via grounding network sampling over selected instances",
"authors": [
{
"first": "Zhuoyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Zhengya",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Guanhua",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1331--1340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuoyu Wei, Jun Zhao, Kang Liu, Zhenyu Qi, Zhengya Sun, and Guanhua Tian. 2015. Large-scale knowl- edge base completion: inferring via grounding net- work sampling over selected instances. In Proceed- ings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1331-1340.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Connecting language and knowledge bases with embedding models for relation extraction",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1366--1371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1366-1371.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Infinite hidden relational models",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hanspeter",
"middle": [],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "544--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao Xu, Volker Tresp, Kai Yu, and Hanspeter Kriegel. 2006. Infinite hidden relational models. In Proceed- ings of Proceedings of the 22nd Conference on Uncer- tainty in Artificial Intelligence, pages 544-551.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Aligning knowledge and text embeddings by entity descriptions",
"authors": [
{
"first": "Huaping",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "267--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huaping Zhong, Jianwen Zhang, Zhen Wang, Hai Wan, and Zheng Chen. 2015. Aligning knowledge and text embeddings by entity descriptions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 267-272.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Simple illustration of KALE.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "s , e n )\u2022I(e m , r t , e n ) \u2212I(e m , r s , e n ) + 1,",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "s 1 , e m )\u2022I(e m , r s 2 , e n )\u2022I(e \u2113 , r t , e n ) \u2212I(e \u2113 , r s 1 , e m )\u2022I(e m , r s 2 , e n ) + 1.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Method</td><td colspan=\"2\">Complexity (Space/Time)</td></tr><tr><td>SE (Bordes et al., 2011)</td><td>ned+2nrd 2</td><td>O(ntd 2 )</td></tr><tr><td>KALE (this paper)</td><td>ned+nrd</td><td>O(ntd+ngd)</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "shows the results, where d is the dimension LFM(Jenatton et al., 2012) ned+nrd 2 O(ntd 2 ) TransE ned+nrd O(ntd) TransH(Wang et al., 2014) ned+2nrd O(ntd) TransR(Lin et al., 2015b) ned+nr(d 2 +d) O(ntd 2 )"
},
"TABREF1": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Complexity of different embedding methods."
},
"TABREF3": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Statistics of datasets."
},
"TABREF4": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Examples of rules created."
},
"TABREF7": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Triple classification results on the test-I, test-II, and test-all sets of FB122 and WN18."
}
}
}
}