{
"paper_id": "K16-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:43.139848Z"
},
"title": "Neighborhood Mixture Model for Knowledge Base Completion",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Macquarie University",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Kairit",
"middle": [],
"last": "Sirts",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Macquarie University",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Australian National University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Macquarie University",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Knowledge bases are useful resources for many natural language processing tasks, however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on TransE-a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks.",
"pdf_parse": {
"paper_id": "K16-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Knowledge bases are useful resources for many natural language processing tasks, however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on TransE-a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge bases (KBs), such as WordNet (Miller, 1995) , YAGO (Suchanek et al., 2007) , Freebase (Bollacker et al., 2008) and DBpedia (Lehmann et al., 2015) , represent relationships between entities as triples (head entity, relation, tail entity). Even very large knowledge bases are still far from complete (Socher et al., 2013; West et al., 2014) . Knowledge base completion or link prediction systems (Nickel et al., 2015) predict which triples not in a knowledge base are likely to be true (Taskar et al., 2004; Bordes et al., 2011) .",
"cite_spans": [
{
"start": 39,
"end": 53,
"text": "(Miller, 1995)",
"ref_id": "BIBREF25"
},
{
"start": 61,
"end": 84,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF33"
},
{
"start": 96,
"end": 120,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 133,
"end": 155,
"text": "(Lehmann et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 308,
"end": 329,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 330,
"end": 348,
"text": "West et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 404,
"end": 425,
"text": "(Nickel et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 494,
"end": 515,
"text": "(Taskar et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 516,
"end": 536,
"text": "Bordes et al., 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Embedding models for KB completion associate entities and/or relations with dense feature vectors or matrices. Such models obtain state-of-the-art performance Bordes et al., 2013; Socher et al., 2013; Wang et al., 2014; Guu et al., 2015; Nguyen et al., 2016) and generalize to large KBs (Krompa et al., 2015) .",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "Bordes et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 180,
"end": 200,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 201,
"end": 219,
"text": "Wang et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 220,
"end": 237,
"text": "Guu et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 238,
"end": 258,
"text": "Nguyen et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 287,
"end": 308,
"text": "(Krompa et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most embedding models for KB completion learn only from triples and by doing so, ignore lots of information implicitly provided by the structure of the knowledge graph. Recently, several authors have addressed this issue by incorporating relation path information into model learning (Garc\u00eda-Dur\u00e1n et al., 2015; Lin et al., 2015a; Guu et al., 2015; Toutanova et al., 2016) and have shown that the relation paths between entities in KBs provide useful information and improve knowledge base completion. For instance, a three-relation path (head, born in hospital/r 1 , e 1 ) \u21d2(e 1 , hospital located in city/r 2 , e 2 ) \u21d2(e 2 , city in country/r 3 , tail)",
"cite_spans": [
{
"start": 284,
"end": 311,
"text": "(Garc\u00eda-Dur\u00e1n et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 312,
"end": 330,
"text": "Lin et al., 2015a;",
"ref_id": "BIBREF20"
},
{
"start": 331,
"end": 348,
"text": "Guu et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 349,
"end": 372,
"text": "Toutanova et al., 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "is likely to indicate that the fact (head, nationality, tail) could be true, so the relation path here p = {r 1 , r 2 , r 3 } is useful for predicting the relationship \"nationality\" between the head and tail entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides the relation paths, there could be other useful information implicitly presented in the knowledge base that could be exploited for better KB completion. For instance, the whole neighborhood of entities could provide lots of useful information for predicting the relationship between two entities. Consider for example a KB fragment given in Figure 1 . If we know that Ben Affleck has won an Oscar award and Ben Affleck lives in Los Angeles, then this can help us to predict that Ben Affleck is an actor or a film maker, rather than a lecturer or a doctor. If we additionally know that Ben Affleck's gender is male then there is a higher chance for him to be a film maker. This intuition can be formalized by representing an entity vector as a relation-specific mixture of its neighborhood as follows: Ben Affleck = \u03c9 r,1 (Violet Anne, child of)",
"cite_spans": [],
"ref_spans": [
{
"start": 349,
"end": 357,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "+ \u03c9 r,2 (male, gender \u22121 ) + \u03c9 r,3 (Los Angeles, lives in \u22121 ) + \u03c9 r,4 (Oscar award, won \u22121 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where \u03c9 r,i are the mixing weights that indicate how important each neighboring relation is for predicting the relation r. For example, for predicting the occupation relationship, the knowledge about the child of relationship might not be that informative and thus the corresponding mixing coefficient can be close to zero, whereas it could be relevant for predicting some other relationship, such as parent or spouse, in which case the relation-specific mixing coefficient for the child of relationship could be high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The primary contribution of this paper is introducing and formalizing the neighborhood mixture model. We demonstrate its usefulness by applying it to the well-known TransE model (Bordes et al., 2013) . However, it could be applied to other embedding models as well, such as Bilinear models Yang et al., 2015) and STransE (Nguyen et al., 2016) . While relation path models exploit extra information using longer paths existing in the KB, the neighborhood mixture model effectively incorporates information about many paths simultaneously. Our extensive experiments on three benchmark datasets show that it achieves superior performance over competitive baselines in three KB completion tasks: triple classification, entity prediction and relation prediction.",
"cite_spans": [
{
"start": 178,
"end": 199,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 290,
"end": 308,
"text": "Yang et al., 2015)",
"ref_id": "BIBREF38"
},
{
"start": 313,
"end": 342,
"text": "STransE (Nguyen et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we start by explaining how to formally construct the neighbor-based entity representations in section 2.1, and then describe the Neighborhood Mixture Model applied to the TransE model (Bordes et al., 2013) in section 2.2. Section 2.3 explains how we train our model.",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhood mixture modeling",
"sec_num": "2"
},
{
"text": "Let E denote the set of entities and R the set of relation types. Denote by R \u22121 the set of inverse relations r \u22121 . Denote by G the knowledge graph consisting of a set of correct tiples (h, r, t), such that h, t \u2208 E and r \u2208 R. Let K denote the symmetric closure of G, i.e. if a triple (h, r, t) \u2208 G, then both (h, r, t) and (t, r \u22121 , h) \u2208 K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "Define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "N e,r = {e |(e , r, e) \u2208 K} as a set of neighboring entities connected to entity e with relation r. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "N e = {(e , r)|r \u2208 R \u222a R \u22121 , e \u2208 N e,r }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "is the set of all entity and relation pairs that are neighbors for entity e. Each entity e is associated with a k-dimensional vector v e \u2208 R k and relation-dependent vectors u e,r \u2208 R k , r \u2208 R \u222a R \u22121 . Now we can define the neighborhood-based entity representation \u03d1 e,r for an entity e \u2208 E for predicting the relation r \u2208 R as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03d1 e,r = a e v e + (e ,r )\u2208Ne b r,r u e ,r ,",
"eq_num": "(1)"
}
],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "a e and b r,r are the mixture weights that are constrained to sum to 1 for each neighborhood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a e \u221d \u03b4 + exp \u03b1 e (2) b r,r \u221d exp \u03b2 r,r",
"eq_num": "(3)"
}
],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
{
"text": "where \u03b4 0 is a hyper-parameter that controls the contribution of the entity vector v e to the neighbor-based mixture, \u03b1 e and \u03b2 r,r are the learnable exponential mixture parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
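To make Equations 1-3 concrete, here is a minimal NumPy sketch of the neighbor-based mixture; it borrows the TransE-style u_{e,r} = v_e + v_r from Equation 7 ahead, and the entities, relations, `neighbors` map and zero-initialized mixing parameters are hypothetical illustrations, not the paper's implementation or data:

```python
import numpy as np

k, delta = 20, 1.0          # embedding size and the delta of Equation 2
rng = np.random.default_rng(0)

v = {e: rng.normal(size=k) for e in ["BenAffleck", "VioletAnne", "male"]}
v_rel = {r: rng.normal(size=k) for r in ["child_of", "gender"]}
alpha = {e: 0.0 for e in v}                                  # learnable, per entity
beta = {("occupation", r): 0.0
        for r in ["child_of", "child_of^-1", "gender", "gender^-1"]}

def u(e, r):
    # Relation-specific entity vector u_{e,r} = v_e + v_r (Equation 7);
    # an inverse relation r^-1 contributes -v_r.
    base, sign = (r[:-3], -1.0) if r.endswith("^-1") else (r, 1.0)
    return v[e] + sign * v_rel[base]

def mixture(e, r, neighbors):
    # Equation 1 with weights from Equations 2-3, normalized to sum to 1.
    a = delta + np.exp(alpha[e])
    bs = [np.exp(beta[(r, r2)]) for _, r2 in neighbors.get(e, [])]
    z = a + sum(bs)
    out = (a / z) * v[e]
    for (e2, r2), b in zip(neighbors.get(e, []), bs):
        out += (b / z) * u(e2, r2)
    return out

nbrs = {"BenAffleck": [("VioletAnne", "child_of"), ("male", "gender^-1")]}
print(mixture("BenAffleck", "occupation", nbrs)[:4])
```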
{
"text": "In real-world factual KBs, e.g. Freebase (Bollacker et al., 2008) , some entities, such as \"male\", can have thousands or millions neighboring entities sharing the same relation \"gender.\" For such entities, computing the neighbor-based vectors can be computationally very expensive. To overcome this problem, we introduce in our implementation a filtering threshold \u03c4 and consider in the neighbor-based entity representation construction only those relation-specific neighboring entity sets for which |N e,r | \u2264 \u03c4 .",
"cite_spans": [
{
"start": 41,
"end": 65,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neighbor-based entity representation",
"sec_num": "2.1"
},
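A short sketch of how the neighbor sets N_{e,r} and the threshold \u03c4 from this subsection might be built from a triple store; the toy triples are invented for illustration:

```python
from collections import defaultdict

tau = 10  # the paper's filtering threshold; the triples below are made up

G = [("BenAffleck", "child_of", "VioletAnne"),
     ("BenAffleck", "lives_in", "LosAngeles")]

# Symmetric closure K: for every (h, r, t) in G also add (t, r^-1, h).
K = G + [(t, r + "^-1", h) for (h, r, t) in G]

# N_{e,r} = {e' | (e', r, e) in K}: neighbors of e reached via relation r.
N = defaultdict(set)
for (h, r, t) in K:
    N[(t, r)].add(h)

# Drop oversized relation-specific neighbor sets: keep |N_{e,r}| <= tau.
N = {key: es for key, es in N.items() if len(es) <= tau}
print(N[("BenAffleck", "lives_in^-1")])  # {'LosAngeles'}
```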
{
"text": "Embedding models define for each triple (h, r, t) \u2208 G, a score function f (h, r, t) that measures its implausibility. The goal is to choose f such that the score f (h, r, t) of a plausible triple (h, r, t) is smaller than the score f (h , r , t ) of an implausible triple (h , r , t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "TransE (Bordes et al., 2013 ) is a simple embedding model for knowledge base completion, which, despite of its simplicity, obtains very competitive results (Garc\u00eda-Dur\u00e1n et al., 2016; Nickel et al., 2016) . In TransE, both entities e and relations r are represented with k-dimensional vectors v e \u2208 R k and v r \u2208 R k , respectively. These vectors are chosen such that for each triple (h, r, t) \u2208 G:",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Bordes et al., 2013",
"ref_id": "BIBREF4"
},
{
"start": 156,
"end": 183,
"text": "(Garc\u00eda-Dur\u00e1n et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 184,
"end": 204,
"text": "Nickel et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v h + v r \u2248 v t",
"eq_num": "(4)"
}
],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "The score function of the TransE model is the norm of this translation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (h, r, t) TransE = v h + v r \u2212 v t 1/2",
"eq_num": "(5)"
}
],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
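Equation 5 is a one-liner in code; the sketch below draws random vectors purely for illustration, with `norm=1` vs `norm=2` selecting the \u2113_1/\u2113_2 variant:

```python
import numpy as np

def transe_score(v_h, v_r, v_t, norm=1):
    # TransE implausibility f(h, r, t) = ||v_h + v_r - v_t|| (Equation 5);
    # a lower score means a more plausible triple.
    return np.linalg.norm(v_h + v_r - v_t, ord=norm)

rng = np.random.default_rng(1)
v_h, v_r, v_t = (rng.normal(size=20) for _ in range(3))
print(transe_score(v_h, v_r, v_t, norm=1), transe_score(v_h, v_r, v_t, norm=2))
```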
{
"text": "We define the score function of our new model TransE-NMM in terms of the neighbor-based entity vectors as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (h, r, t) = \u03d1 h,r + v r \u2212 \u03d1 t,r \u22121 1/2 ,",
"eq_num": "(6)"
}
],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "using either the 1 or the 2 -norm, and \u03d1 h,r and \u03d1 t,r \u22121 are defined following the Equation 1. The relation-specific entity vectors u e,r used to construct the neighbor-based entity vectors \u03d1 e,r are defined based on the TransE translation operator:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u e,r = v e + v r",
"eq_num": "(7)"
}
],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "in which v r \u22121 = \u2212v r . For each correct triple (h, r, t), the sets of neighboring entities N h,r and N t,r \u22121 exclude the entities t and h, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
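Putting Equations 6-7 together, a hedged sketch of the TransE-NMM score, reusing the hypothetical `mixture` helper sketched in section 2.1:

```python
import numpy as np

def transe_nmm_score(theta_h, v_r, theta_t_inv, norm=1):
    # Equation 6: TransE with v_h, v_t replaced by the neighbor-based
    # mixtures theta_{h,r} and theta_{t,r^-1}.
    return np.linalg.norm(theta_h + v_r - theta_t_inv, ord=norm)

# With the hypothetical `mixture` helper from section 2.1 (and with
# N_{h,r}, N_{t,r^-1} excluding t and h for the triple being scored):
#   score = transe_nmm_score(mixture(h, r, nbrs), v_rel[r],
#                            mixture(t, r + "^-1", nbrs))
```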
{
"text": "If we set the filtering threshold \u03c4 = 0 then \u03d1 h,r = v h and \u03d1 t,r \u22121 = v t for all triples. In this case, TransE-NMM reduces to the plain TransE model. In all our experiments presented in section 4, the baseline TransE results are obtained with the TransE-NMM with \u03c4 = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransE-NMM: applying neighborhood mixtures to TransE",
"sec_num": "2.2"
},
{
"text": "The TransE-NMM model parameters include the vectors v e , v r for entities and relation types, the entity-specific weights \u03b1 = {\u03b1 e |e \u2208 E} and relation-specific weights \u03b2 = {\u03b2 r,r |r, r \u2208 R \u222a R \u22121 }. To learn these parameters, we minimize the L 2 -regularized margin-based objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter optimization",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = (h,r,t)\u2208G (h ,r,t )\u2208G (h,r,t) [\u03b3 + f (h, r, t) \u2212 f (h , r, t )] + + \u03bb 2 \u03b1 2 2 + \u03b2 2 2 ,",
"eq_num": "(8)"
}
],
"section": "Parameter optimization",
"sec_num": "2.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter optimization",
"sec_num": "2.3"
},
{
"text": "[x] + = max(0, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter optimization",
"sec_num": "2.3"
},
{
"text": ", \u03b3 is the margin hyperparameter, \u03bb is the L 2 regularization parameter and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter optimization",
"sec_num": "2.3"
},
{
"text": "G (h,r,t) = {(h , r, t) | h \u2208 E, (h , r, t) / \u2208 G} \u222a {(h, r, t ) | t \u2208 E, (h, r, t ) / \u2208 G}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter optimization",
"sec_num": "2.3"
},
{
"text": "is the set of incorrect triples generated by corrupting the correct triple (h, r, t) \u2208 G. We applied the \"Bernoulli\" trick to choose whether to generate the head or tail entity when sampling an incorrect triple (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2016) . We use Stochastic Gradient Descent (SGD) with RMSProp adaptive learning rate to minimize L, and impose the following hard constraints during training: v e 2 1 and v r 2 1. We employ alternating optimization to minimize L. We first initialize the entity and relation-specific mixing parameters \u03b1 and \u03b2 to zero and only learn the randomly initialized entity and relation vectors v e and v r . Then we fix the learned vectors and only optimize the mixing parameters. In the final step, we fix again the mixing parameters and fine-tune the vectors. In all experiments presented in section 4, we train for 200 epochs during each three optimization step. Opt. ",
"cite_spans": [
{
"start": 211,
"end": 230,
"text": "(Wang et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 231,
"end": 249,
"text": "Lin et al., 2015b;",
"ref_id": "BIBREF21"
},
{
"start": 250,
"end": 266,
"text": "Ji et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter optimization",
"sec_num": "2.3"
},
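As a rough illustration of the corruption and loss just described, here is a sketch of the "Bernoulli"-style negative sampling and the hinge term of Equation 8; the toy KB, the per-relation head-corruption probabilities `p_head`, and the plain TransE scorer are assumptions for the example, and the RMSProp updates, norm constraints and three-phase alternating schedule are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
entities = ["a", "b", "c", "d"]
gamma = 1.0                                   # margin hyper-parameter

def corrupt(h, r, t, G, p_head):
    # "Bernoulli" trick: corrupt the head with probability p_head[r],
    # otherwise the tail; resample until the triple is not in G.
    while True:
        if rng.random() < p_head[r]:
            cand = (str(rng.choice(entities)), r, t)
        else:
            cand = (h, r, str(rng.choice(entities)))
        if cand not in G:
            return cand

def margin_loss(f, pos, neg):
    # [gamma + f(pos) - f(neg)]_+ from Equation 8; the L2 terms on the
    # mixing parameters alpha, beta are omitted in this sketch.
    return max(0.0, gamma + f(*pos) - f(*neg))

G = {("a", "r1", "b"), ("c", "r1", "d")}
p_head = {"r1": 0.5}
v = {e: rng.normal(size=10) for e in entities}
v_r = {"r1": rng.normal(size=10)}
f = lambda h, r, t: np.linalg.norm(v[h] + v_r[r] - v[t], ord=1)

pos = ("a", "r1", "b")
print(margin_loss(f, pos, corrupt(*pos, G, p_head)))
```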
{
"text": "STransE W r,1 v h + v r \u2212 W r,2 v t 1/2 ; W r,1 , W r,2 \u2208 R k\u00d7k ; v r \u2208 R k SGD SE W r,1 v h \u2212 W r,2 v t 1/2 ; W r,1 , W r,2 \u2208 R k\u00d7k SGD Unstructured v h \u2212 v t 1/2 SGD TransE v h + v r \u2212 v t 1/2 ; v r \u2208 R k SGD TransH (I \u2212 r p r p )v h + v r \u2212 (I \u2212 r p r p )v t 1/2 SGD r p , v r \u2208 R k ; I: Identity matrix size k \u00d7 k TransD (I + r p h p )v h + v r \u2212 (I + r p t p )v t 1/2 AdaDelta r p , v r \u2208 R n ; h p , t p \u2208 R k ; I: Identity matrix size n \u00d7 k TransR W r v h + v r \u2212 W r v t 1/2 ; W r \u2208 R n\u00d7k ; v r \u2208 R n SGD TranSparse W h r (\u03b8 h r )v h + v r \u2212 W t r (\u03b8 t r )v t 1/2 ; W h r , W t r \u2208 R n\u00d7k ; \u03b8 h r , \u03b8 t r \u2208 R ; v r \u2208 R n SGD SME (W 1,1 v h + W 1,2 v r + b 1 ) (W 2,1 v t + W 2,2 v r + b 2 ) SGD b 1 , b 2 \u2208 R n ; W 1,1 , W 1,2 , W 2,1 , W 2,2 \u2208 R n\u00d7k DISTMULT v h W r v t ; W r is a diagonal matrix \u2208 R k\u00d7k AdaGrad NTN v r tanh(v h M r v t + W r,1 v h + W r,2 v t + b r ) L-BFGS v r , b r \u2208 R n ; M r \u2208 R k\u00d7k\u00d7n ; W r,1 , W r,2 \u2208 R n\u00d7k Bilinear-COMP v h W r 1 W r 2 ...W rm v t ; W r 1 , W r 2 , ..., W rm \u2208 R k\u00d7k AdaGrad TransE-COMP v h + v r 1 + v r 2 + ... + v rm \u2212 v t 1/2 ; v r 1 , v r 2 , ..., v rm \u2208 R k AdaGrad",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "v h and v t \u2208 R k respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
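Two of the rows above, rendered as code to show how the score functions differ in shape; the vectors are assumed to be given and, for TransH, r_p is assumed to be unit-norm (a sketch, not a reference implementation):

```python
import numpy as np

def transh_score(v_h, v_r, v_t, r_p, norm=1):
    # TransH: translate on the hyperplane with (unit) normal vector r_p,
    # i.e. project with (I - r_p r_p^T) before applying the translation.
    proj = lambda x: x - r_p * (r_p @ x)
    return np.linalg.norm(proj(v_h) + v_r - proj(v_t), ord=norm)

def distmult_score(v_h, w_r_diag, v_t):
    # DISTMULT: bilinear score with a diagonal relation matrix (kept here
    # as the vector of diagonal entries); higher means more plausible,
    # unlike the distance-based scores above.
    return float(v_h @ (w_r_diag * v_t))
```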
{
"text": "differ in their score function f (h, r, t) and the algorithm used to optimize their margin-based objective function, e.g., SGD, AdaGrad (Duchi et al., 2011) , AdaDelta (Zeiler, 2012) or L-BFGS (Liu and Nocedal, 1989 ).",
"cite_spans": [
{
"start": 136,
"end": 156,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 193,
"end": 215,
"text": "(Liu and Nocedal, 1989",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "The Unstructured model ) assumes that the head and tail entity vectors are similar. As the Unstructured model does not take the relationship into account, it cannot distinguish different relation types. The Structured Embedding (SE) model (Bordes et al., 2011 ) extends the Unstructured model by assuming that the head and tail entities are similar only in a relation-dependent subspace, where each relation is represented by two different matrices. Futhermore, the SME model uses four different matrices to project entity and relation vectors into a subspace. The TransH model (Wang et al., 2014) associates each relation with a relation-specific hyperplane and uses a projection vector to project entity vectors onto that hyperplane. TransD and TransR/CTransR (Lin et al., 2015b) extend the TransH model by using two projection vectors and a matrix to project entity vectors into a relation-specific space, respectively. STransE (Nguyen et al., 2016) and TranSparse (Ji et al., 2016) are extensions of the TransR model, where head and tail entities are associated with their own projection matrices.",
"cite_spans": [
{
"start": 239,
"end": 259,
"text": "(Bordes et al., 2011",
"ref_id": "BIBREF2"
},
{
"start": 578,
"end": 597,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 762,
"end": 781,
"text": "(Lin et al., 2015b)",
"ref_id": "BIBREF21"
},
{
"start": 931,
"end": 952,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 968,
"end": 985,
"text": "(Ji et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "The DISTMULT model (Yang et al., 2015 ) is based on the Bilinear model (Nickel et al., 2011; Jenatton et al., 2012) where each relation is represented by a diagonal rather than a full matrix. The neural tensor network (NTN) model (Socher et al., 2013 ) uses a bilinear tensor operator to represent each relation. Similar quadratic forms are used to model entities and relations in KG2E Toutanova et al. (2016) showed that relation paths between entities in KBs provide richer information and improve the relationship prediction. In fact, our new TransE-NMM model can be also viewed as a three-relation path model as it takes into account the neighborhood entity and relation information of both head and tail entities in each triple. Luo et al. (2015) constructed relation paths between entities and viewing entities and relations in the path as pseudo-words applied Word2Vec algorithms (Mikolov et al., 2013) to produce pretrained vectors for these pseudo-words. Luo et al. (2015) showed that using these pre-trained vectors for initialization helps to improve the performance of the TransE, SME and SE models. RTransE (Garc\u00eda-Dur\u00e1n et al., 2015) , PTransE (Lin et al., 2015a) and TransE-COMP (Guu et al., 2015) are extensions of the TransE model. These models similarly represent a relation path by a vector which is the sum of the vectors of all relations in the path, whereas in the Bilinear-COMP model (Guu et al., 2015) , each relation is a matrix and so it represents the relation path by matrix multiplication. Our neighborhood mixture model can be adapted to both relation path models Bilinear-COMP and TransE-COMP, by replacing head and tail entity vectors by the neighborbased vector representations, thus combining advantages of both path and neighborhood information. Nickel et al. (2015) reviews other approaches for learning from KBs and multi-relational data.",
"cite_spans": [
{
"start": 19,
"end": 37,
"text": "(Yang et al., 2015",
"ref_id": "BIBREF38"
},
{
"start": 71,
"end": 92,
"text": "(Nickel et al., 2011;",
"ref_id": "BIBREF28"
},
{
"start": 93,
"end": 115,
"text": "Jenatton et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 230,
"end": 250,
"text": "(Socher et al., 2013",
"ref_id": "BIBREF32"
},
{
"start": 386,
"end": 409,
"text": "Toutanova et al. (2016)",
"ref_id": "BIBREF35"
},
{
"start": 734,
"end": 751,
"text": "Luo et al. (2015)",
"ref_id": "BIBREF23"
},
{
"start": 887,
"end": 909,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 964,
"end": 981,
"text": "Luo et al. (2015)",
"ref_id": "BIBREF23"
},
{
"start": 1120,
"end": 1147,
"text": "(Garc\u00eda-Dur\u00e1n et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 1158,
"end": 1177,
"text": "(Lin et al., 2015a)",
"ref_id": "BIBREF20"
},
{
"start": 1194,
"end": 1212,
"text": "(Guu et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1407,
"end": 1425,
"text": "(Guu et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1781,
"end": 1801,
"text": "Nickel et al. (2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "To investigate the usefulness of the neighbor mixtures, we compare the performance of the TransE-NMM against the results of the baseline TransE and other state-of-the-art embedding models on the triple classification, entity prediction and relation prediction tasks. We conduct experiments using three publicly available datasets WN11, FB13 and NELL186. For all of them, the validation and test sets containing both correct and incorrect triples have already been constructed. Statistical information about these datasets is given in Table 2 . The two benchmark datasets 1 , WN11 and FB13, were produced by Socher et al. (2013) for triple classification. WN11 is derived from the large lexical KB WordNet (Miller, 1995) involving 11 relation types. FB13 is derived from the large real-world fact KB FreeBase (Bollacker et al., 2008) covering 13 relation types. The NELL186 dataset 2 was introduced by Guo et al. (2015) for both triple classification and entity prediction tasks, containing 186 most frequent relations in the KB of the CMU Never Ending Language Learning project (Carlson et al., 2010) .",
"cite_spans": [
{
"start": 607,
"end": 627,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF32"
},
{
"start": 705,
"end": 719,
"text": "(Miller, 1995)",
"ref_id": "BIBREF25"
},
{
"start": 808,
"end": 832,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 1078,
"end": 1100,
"text": "(Carlson et al., 2010)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 534,
"end": 541,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluate our model on three commonly used benchmark tasks: triple classification, entity prediction and relation prediction. This subsection describes those tasks in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
{
"text": "Triple classification: The triple classification task was first introduced by Socher et al. 2013, and since then it has been used to evaluate various embedding models. The aim of the task is to predict whether a triple (h, r, t) is correct or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
{
"text": "For classification, we set a relation-specific threshold \u03b8 r for each relation type r. If the implausibility score of an unseen test triple (h, r, t) is smaller than \u03b8 r then the triple will be classified as correct, otherwise incorrect. Following Socher et al. (2013) , the relation-specific thresholds are determined by maximizing the micro-averaged accuracy, which is a per-triple average, on the validation set. We also report the macro-averaged accuracy, which is a per-relation average.",
"cite_spans": [
{
"start": 238,
"end": 268,
"text": "Following Socher et al. (2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
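A small sketch of how the relation-specific threshold \u03b8_r might be tuned on the validation triples of one relation; the scores and labels below are made up:

```python
import numpy as np

def best_threshold(scores, labels):
    # Pick theta_r maximizing validation accuracy for one relation:
    # a triple is classified as correct iff its score is below theta_r.
    order = np.argsort(scores)
    s, y = np.asarray(scores)[order], np.asarray(labels)[order]
    # Candidate thresholds: midpoints between consecutive sorted scores.
    cands = np.concatenate(([s[0] - 1.0], (s[:-1] + s[1:]) / 2.0, [s[-1] + 1.0]))
    accs = [np.mean((s < th) == y) for th in cands]
    return cands[int(np.argmax(accs))]

# Hypothetical implausibility scores and gold labels (1 = correct triple):
theta_r = best_threshold([0.3, 0.9, 1.4, 2.0], [1, 1, 0, 0])
print(theta_r)  # classify an unseen triple as correct iff f(h, r, t) < theta_r
```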
{
"text": "Entity prediction: The entity prediction task (Bordes et al., 2013) predicts the head or the tail entity given the relation type and the other entity, i.e. predicting h given (?, r, t) or predicting t given (h, r, ?) where ? denotes the missing element. The results are evaluated using a ranking induced by the function f (h, r, t) on test triples. Note that the incorrect triples in the validation and test sets are not used for evaluating the entity prediction task nor the relation prediction task.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
{
"text": "Each correct test triple (h, r, t) is corrupted by replacing either its head or tail entity by each of the possible entities in turn, and then we rank these candidates in ascending order of their implausibility score. This is called as the \"Raw\" setting protocol. For the \"Filtered\" setting protocol described in Bordes et al. (2013) , we also filter out before ranking any corrupted triples that appear in the KB. Ranking a corrupted triple appearing in the KB (i.e. a correct triple) higher than the original test triple is also correct, but is penalized by the \"Raw\" score, thus the \"Filtered\" setting provides a clearer view on the ranking performance.",
"cite_spans": [
{
"start": 313,
"end": 333,
"text": "Bordes et al. (2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
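The "Filtered" protocol can be sketched as follows; `score` stands in for any implausibility function f(h, r, t), and ties are broken optimistically in this simplification:

```python
def filtered_rank(test_triple, score, entities, known_triples, position="tail"):
    # Rank the gold entity among all corruptions, skipping corrupted
    # triples that already appear in the KB ("Filtered" setting).
    h, r, t = test_triple
    gold = score(h, r, t)
    rank = 1
    for e in entities:
        cand = (h, r, e) if position == "tail" else (e, r, t)
        if cand == test_triple or cand in known_triples:
            continue
        if score(*cand) < gold:  # ties are broken optimistically here
            rank += 1
    return rank
```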
{
"text": "In addition to the mean rank and the Hits@10 (i.e., the proportion of test triples for which the target entity was ranked in the top 10 predictions), which were originally used in the entity prediction task (Bordes et al., 2013) , we also report the mean reciprocal rank (MRR), which is commonly used in information retrieval. In both \"Raw\" and \"Filtered\" settings, mean rank is always greater or equal to 1 and lower mean rank indicates better entity prediction performance. The MRR and Hits@10 scores always range from 0.0 to 1.0, and higher score reflects better prediction result.",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
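Given the per-triple ranks produced by the protocol above, the three reported metrics reduce to a few lines (a sketch with made-up ranks):

```python
import numpy as np

def ranking_metrics(ranks):
    # Mean rank (MR), mean reciprocal rank (MRR) and Hits@10 from the
    # per-test-triple ranks (rank 1 = best possible).
    r = np.asarray(ranks, dtype=float)
    return {"MR": r.mean(), "MRR": (1.0 / r).mean(), "Hits@10": (r <= 10).mean()}

print(ranking_metrics([1, 3, 25, 7]))  # MR 9.0, MRR ~0.38, Hits@10 0.75
```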
{
"text": "Relation prediction: The relation prediction task (Lin et al., 2015a) predicts the relation type given the head and tail entities, i.e. predicting r given (h, ?, t) where ? denotes the missing element. We corrupt each correct test triple (h, r, t) by replacing its relation r by each possible relation type in turn, and then rank these candidates in ascending order of their implausibility score. Just as in the entity prediction task, we use two setting protocols, \"Raw\" and \"Filtered\", and evaluate on mean rank, MRR and Hits@10.",
"cite_spans": [
{
"start": 50,
"end": 69,
"text": "(Lin et al., 2015a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation tasks",
"sec_num": "4.2"
},
{
"text": "For all evaluation tasks, results for TransE are obtained with TransE-NMM with the filtering threshold \u03c4 = 0, while we set \u03c4 = 10 for TransE-NMM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter tuning",
"sec_num": "4.3"
},
{
"text": "For triple classification, we first performed a grid search to choose the optimal hyperparameters for TransE by monitoring the microaveraged triple classification accuracy after each training epoch on the validation set. For all datasets, we chose either the 1 or 2 norm in the score function f and the initial RMSProp learning rate \u03b7 \u2208 {0.001, 0.01}. Following the previous work (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2016) , we selected the margin hyper-parameter \u03b3 \u2208 {1, 2, 4} and the number of vector dimensions k \u2208 {20, 50, 100} on WN11 and FB13. On NELL186, we set \u03b3 = 1 and k = 50 Luo et al., 2015) . The highest accuracy on the validation set was obtained when using \u03b7 = 0.01 for all three datasets, and when using 2 norm for NELL186, \u03b3 = 4, k = 20 and 1 norm for WN11, and \u03b3 = 1, k = 100 and 2 norm for FB13.",
"cite_spans": [
{
"start": 380,
"end": 399,
"text": "(Wang et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 400,
"end": 418,
"text": "Lin et al., 2015b;",
"ref_id": "BIBREF21"
},
{
"start": 419,
"end": 435,
"text": "Ji et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 599,
"end": 616,
"text": "Luo et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter tuning",
"sec_num": "4.3"
},
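The grid search described here is an exhaustive sweep; a minimal sketch, where `evaluate` is a hypothetical stand-in that trains TransE with one hyper-parameter combination and returns the micro-averaged validation accuracy:

```python
from itertools import product

# Grid mirroring the sweep described above.
grid = {"norm": [1, 2], "eta": [0.001, 0.01], "gamma": [1, 2, 4], "k": [20, 50, 100]}

def grid_search(evaluate):
    best, best_acc = None, -1.0
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        acc = evaluate(**params)   # hypothetical train-and-validate call
        if acc > best_acc:
            best, best_acc = params, acc
    return best, best_acc
```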
{
"text": "We set the hyper-parameters \u03b7, \u03b3, k, and the 1 or the 2 -norm in our TransE-NMM model to the same optimal hyper-parameters searched for TransE. We then used a grid search to select the hyper-parameter \u03b4 \u2208 {0, 1, 5, 10} and L 2 regularizer \u03bb \u2208 {0.005, 0.01, 0.05} for TransE-NMM. By monitoring the micro-averaged accuracy after each training epoch, we obtained the highest accuracy on validation set when using \u03b4 = 1 and \u03bb = 0.05 for both WN11 and FB13, and \u03b4 = 0 and \u03bb = 0.01 for NELL186.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter tuning",
"sec_num": "4.3"
},
{
"text": "For both entity prediction and relation prediction tasks, we set the hyper-parameters \u03b7, \u03b3, k, and the 1 or the 2 -norm for both TransE and TransE-NMM to be the same as the optimal parameters found for the triple classification task. We then monitored on TransE the filtered MRR on validation set after each training epoch. We chose the model with highest validation MRR, which was then used to evaluate the test set. For TransE-NMM, we searched the hyperparameter \u03b4 \u2208 {0, 1, 5, 10} and L 2 regularizer \u03bb \u2208 {0.005, 0.01, 0.05}. By monitoring the filtered MRR after each training epoch, we selected the best model with the highest filtered MRR on the validation set. Specifically, for the entity prediction task, we selected \u03b4 = 10 and \u03bb = 0.005 for WN11, \u03b4 = 5 and \u03bb = 0.01 for FB13, and \u03b4 = 5 and \u03bb = 0.005 for NELL186. For the relation prediction task, we selected \u03b4 = 10 and \u03bb = 0.005 for WN11, \u03b4 = 10 and \u03bb = 0.05 for FB13, and \u03b4 = 1 and \u03bb = 0.05 for NELL186. Table 3 : Experimental results of TransE (i.e. TransE-NMM with \u03c4 = 0) and TransE-NMM with \u03c4 = 10. Micro-averaged (labeled as Mic.) and Macro-averaged (labeled as Mac.) accuracy results are for the triple classification task. MR, MRR and H@10 abbreviate the mean rank, the mean reciprocal rank and Hits@10 (in %), respectively. \"R\" and \"F\" denote the \"Raw\" and \"Filtered\" settings used in the entity prediction and relation prediction tasks, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 964,
"end": 971,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Hyper-parameter tuning",
"sec_num": "4.3"
},
{
"text": "TransR (Lin et al., 2015b) 85.9 82.5 CTransR (Lin et al., 2015b) 85.7 -TransD 86.4 89.1 TranSparse-S (Ji et al., 2016) 86.4 88.2 TranSparse-US (Ji et al., 2016) 86.8 87.5 NTN (Socher et al., 2013) 70.6 87.2 TransH (Wang et al., 2014) 78.8 83.3 SLogAn (Liang and Forbus, 2015) 75.3 85.3 KG2E 85.4 85.3 Bilinear-COMP (Guu et al., 2015) 77.6 86.1 TransE-COMP (Guu et al., 2015) 80 Table 5 : Results on on the NELL186 test set. Results for the entity prediction task are in the \"Raw\" setting. \"-SkipG\" abbreviates \"-Skip-gram\". Hits@10 scores than TransE on both FB13 and NELL186 datasets. Specifically, on NELL186 TransE-NMM gains a significant improvement of 279 \u2212 214 = 65 in the filtered mean rank (which is about 23% relative improvement), while on the FB13 dataset, TransE-NMM improves with 0.267\u22120.213 = 0.054 in the filtered MRR (which is about 25% relative improvement). On the WN11 dataset, TransE-NMM only achieves better mean rank for entity prediction. The relation prediction results of TransE-NMM and TransE are relatively similar on both WN11 and FB13 be-cause the number of relation types is small in these two datasets. On NELL186, however, TransE-NMM does significantly better than TransE.",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "(Lin et al., 2015b)",
"ref_id": "BIBREF21"
},
{
"start": 45,
"end": 64,
"text": "(Lin et al., 2015b)",
"ref_id": "BIBREF21"
},
{
"start": 101,
"end": 118,
"text": "(Ji et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 143,
"end": 160,
"text": "(Ji et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 175,
"end": 196,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF32"
},
{
"start": 214,
"end": 233,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 251,
"end": 275,
"text": "(Liang and Forbus, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 315,
"end": 333,
"text": "(Guu et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 356,
"end": 374,
"text": "(Guu et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method W11 F13",
"sec_num": null
},
{
"text": "In Table 4 , we compare the micro-averaged triple classification accuracy of our TransE-NMM model with the previously reported results on the WN11 and FB13 datasets. The first five rows report the performance of models that use TransE to initialize the entity and relation vectors. The last eight rows present the accuracy of models with randomly initialized parameters. Table 4 shows that our TransE-NMM model obtains the highest accuracy on WN11 and achieves the second highest result on FB13. Note that there are higher results reported for NTN (Socher et al., 2013) , Bilinear-COMP (Guu et al., 2015) and TransE-COMP when entity vectors are initialized by averaging the pre-trained word vectors (Mikolov et al., 2013; Pennington et al., 2014) . It is not surprising as many entity names in Word-Net and FreeBase are lexically meaningful. It is possible for all other embedding models to utilize the pre-trained word vectors as well. However, as pointed out by Wang et al. (2014) and Guu et al. (2015) , averaging the pre-trained word vectors for initializing entity vectors is an open problem and it is not always useful since entity names in many domain-specific KBs are not lexically meaningful. Table 5 compares the accuracy for triple classification, the raw mean rank and raw Hits@10 scores for entity prediction on the NELL186 dataset. The first three rows present the best results reported in , while the next three rows present the best results reported in Luo et al. (2015) . TransE-NMM obtains the highest triple classification accuracy, the best raw mean rank and the second highest raw Hits@10 on the entity prediction task in this comparison. Table 6 presents some examples to illustrate the useful information modeled by the neighbors. We took the relation-specific mixture weights from the learned TransE-NMM model optimized on the entity prediction task, and then extracted three neighbor relations with the largest mixture weights given a relation. Table 6 shows that those relations are semantically coherent. For example, if we know the place of birth and/or the place of death of a person and/or the location where the person is living, it is likely that we can predict the person's nationality. On the other hand, if we know that a person works for an organization and that this person is also the top member of that organization, then it is possible that this person is the CEO of that organization.",
"cite_spans": [
{
"start": 548,
"end": 569,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF32"
},
{
"start": 586,
"end": 604,
"text": "(Guu et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 699,
"end": 721,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF24"
},
{
"start": 722,
"end": 746,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF31"
},
{
"start": 964,
"end": 982,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 987,
"end": 1004,
"text": "Guu et al. (2015)",
"ref_id": "BIBREF12"
},
{
"start": 1469,
"end": 1486,
"text": "Luo et al. (2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 371,
"end": 378,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1202,
"end": 1209,
"text": "Table 5",
"ref_id": null
},
{
"start": 1660,
"end": 1667,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 1970,
"end": 1977,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Method W11 F13",
"sec_num": null
},
{
"text": "Despite of the lower triple classification scores of TransE reported in Wang et al. (2014) , Table 4 shows that TransE in fact obtains a very competitive accuracy. Particularly, compared to the relation path model TransE-COMP (Guu et al., 2015) , when model parameters were randomly initialized, TransE obtains 85.2 \u2212 80.3 = 4.9% absolute accuracy improvement on the WN11 dataset while achieving similar score on the FB13 dataset. Our high results of the TransE model are probably due to a careful grid search and using the \"Bernoulli\" trick. Note that Lin et al. (2015b) , and Ji et al. (2016) did not report the TransE results used for initializing TransR, TransD and TranSparse, respectively. They directly copied the TransE results previously reported in Wang et al. (2014) . So it is difficult to determine exactly how much TransR, TransD and TranSparse gain over TransE. These models might obtain better results than previously reported when the TransE used for initalization performs as well as reported in this paper. Furthermore, Garc\u00eda-Dur\u00e1n et al. (2015), Lin et al. (2015a) , Garc\u00eda-Dur\u00e1n et al. (2016) and Nickel et al. (2016) also showed that for entity prediction TransE obtains very competitive results which are much higher than the TransE results Figure 2 : Relative improvement of TransE-NMM against TransE for entity prediction task in WN11 when the filtering threshold \u03c4 = {10, 100, 500} (with other hyper-parameters being the same as selected in Section 4.3). Prefixes \"R-\" and \"F-\" denote the \"Raw\" and \"Filtered\" settings, respectively. Suffixes \"-MR\", \"-MRR\" and \"-H@10\" abbreviate the mean rank, the mean reciprocal rank and Hits@10, respectively. originally published in Bordes et al. (2013) . 3 As presented in Table 3 , for entity prediction using WN11, TransE-NMM with the filtering threshold \u03c4 = 10 only obtains better mean rank than TransE (about 15% relative improvement) but lower Hits@10 and mean reciprocal rank. The reason might be that in semantic lexical KBs such as WordNet where relationships between words or word groups are manually constructed, whole neighborhood information might be useful. So when using a small filtering threshold, the model ignores a lot of potential information that could help predicting relationships. Figure 2 presents relative improvements in entity prediction of TransE-NMM over TransE on WN11 when varying the filtering threshold \u03c4 . Figure 2 shows that TransE-NMM gains better scores with higher \u03c4 value. Specifically, when \u03c4 = 500 TransE-NMM does significantly better than TransE in all entity prediction metrics.",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 226,
"end": 244,
"text": "(Guu et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 553,
"end": 571,
"text": "Lin et al. (2015b)",
"ref_id": "BIBREF21"
},
{
"start": 578,
"end": 594,
"text": "Ji et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 759,
"end": 777,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 1067,
"end": 1085,
"text": "Lin et al. (2015a)",
"ref_id": "BIBREF20"
},
{
"start": 1088,
"end": 1114,
"text": "Garc\u00eda-Dur\u00e1n et al. (2016)",
"ref_id": "BIBREF9"
},
{
"start": 1119,
"end": 1139,
"text": "Nickel et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 1698,
"end": 1718,
"text": "Bordes et al. (2013)",
"ref_id": "BIBREF4"
},
{
"start": 1721,
"end": 1722,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1265,
"end": 1273,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1739,
"end": 1746,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 2271,
"end": 2277,
"text": "Figure",
"ref_id": null
},
{
"start": 2407,
"end": 2416,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "We introduced a neighborhood mixture model for knowledge base completion by constructing 3 They did not report the results on WN11 and FB13 datasets, which are used in this paper, though.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "neighbor-based vector representations for entities. We demonstrated its effect by extending TransE (Bordes et al., 2013) with our neighborhood mixture model. On three different datasets, experimental results show that our model significantly improves TransE and obtains better results than the other state-of-the-art embedding models on triple classification, entity prediction and relation prediction tasks. In future work, we plan to apply the neighborhood mixture model to other embedding models, especially to relation path models such as TransE-COMP, to combine the useful information from both relation paths and entity neighborhoods.",
"cite_spans": [
{
"start": 99,
"end": 120,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "http://cs.stanford.edu/people/danqi/data/nips13-dataset.tar.bz22 http://aclweb.org/anthology/attachments/P/P15/ P15-1009.Datasets.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by a Google award through the Natural Language Understanding Focused Program, and under the Australian Research Council's Discovery Projects funding scheme (project number DP160102156). This research was also supported by NICTA, funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. The first author was supported by an International Postgraduate Research Scholarship and a NICTA NRPA Top-Up Scholarship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Col- laboratively Created Graph Database for Structur- ing Human Knowledge. In Proceedings of the 2008",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ACM SIGMOD International Conference on Management of Data",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ACM SIGMOD International Conference on Man- agement of Data, pages 1247-1250.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning Structured Embeddings of Knowledge Bases",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "301--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning Structured Embed- dings of Knowledge Bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelli- gence, pages 301-306.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Semantic Matching Energy Function for Learning with Multi-relational Data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Machine Learning",
"volume": "94",
"issue": "",
"pages": "233--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2012. A Semantic Matching En- ergy Function for Learning with Multi-relational Data. Machine Learning, 94(2):233-259.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Translating Embeddings for Modeling Multirelational Data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi- relational Data. In Advances in Neural Information Processing Systems 26, pages 2787-2795.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Toward an Architecture for Neverending Language Learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1306--1313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell. 2010. Toward an Architecture for Never- ending Language Learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intel- ligence, pages 1306-1313.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. The Journal of Ma- chine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Composing Relationships with Translations",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Garc\u00eda-Dur\u00e1n",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "286--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Garc\u00eda-Dur\u00e1n, Antoine Bordes, and Nicolas Usunier. 2015. Composing Relationships with Translations. In Proceedings of the 2015 Confer- ence on Empirical Methods in Natural Language Processing, pages 286-290.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combining Two and Three-Way Embedding Models for Link Prediction in Knowledge Bases",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Garc\u00eda-Dur\u00e1n",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Grandvalet",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "715--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Garc\u00eda-Dur\u00e1n, Antoine Bordes, Nicolas Usunier, and Yves Grandvalet. 2016. Combining Two and Three-Way Embedding Models for Link Prediction in Knowledge Bases. Journal of Artifi- cial Intelligence Research, 55:715-742.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient and Expressive Knowledge Base Completion Using Subgraph Feature Extraction",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1488--1498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner and Tom Mitchell. 2015. Efficient and Expressive Knowledge Base Completion Using Subgraph Feature Extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1488-1498.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantically Smooth Knowledge Graph Embedding",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "84--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Guo, Quan Wang, Bin Wang, Lihong Wang, and Li Guo. 2015. Semantically Smooth Knowledge Graph Embedding. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 84-94.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Traversing Knowledge Graphs in Vector Space",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "318--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing Knowledge Graphs in Vector Space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318-327.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to Represent Knowledge Graphs with Gaussian Embedding",
"authors": [
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "623--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. Learning to Represent Knowledge Graphs with Gaussian Embedding. In Proceedings of the 24th ACM International on Conference on Informa- tion and Knowledge Management, pages 623-632.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A latent factor model for highly multi-relational data",
"authors": [
{
"first": "Rodolphe",
"middle": [],
"last": "Jenatton",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [
"L"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [
"R"
],
"last": "Obozinski",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "3167--3175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodolphe Jenatton, Nicolas L. Roux, Antoine Bordes, and Guillaume R Obozinski. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems 25, pages 3167-3175.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Knowledge Graph Embedding via Dynamic Mapping Matrix",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "687--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 687-696.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Knowledge Graph Completion with Adaptive Sparse Transfer Matrix",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "985--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge Graph Completion with Adap- tive Sparse Transfer Matrix. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 985-991.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Type-Constrained Representation Learning in Knowledge Graphs",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Krompa",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Baier",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "640--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Krompa, Stephan Baier, and Volker Tresp. 2015. Type-Constrained Representation Learning in Knowledge Graphs. In Proceedings of the 14th In- ternational Semantic Web Conference, pages 640- 655.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DBpedia -A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia",
"authors": [
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Isele",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Jentzsch",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Kontokostas",
"suffix": ""
},
{
"first": "Pablo",
"middle": [
"N"
],
"last": "Mendes",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Hellmann",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Morsey",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "van Kleef",
"suffix": ""
},
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2015,
"venue": "Semantic Web",
"volume": "6",
"issue": "2",
"pages": "167--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S\u00f6ren Auer, and Christian Bizer. 2015. DBpedia -A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia. Semantic Web, 6(2):167-195.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning Plausible Inferences from Semantic Web Knowledge by Combining Analogical Generalization with Structured Logistic Regression",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"D."
],
"last": "Forbus",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "551--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Liang and Kenneth D. Forbus. 2015. Learn- ing Plausible Inferences from Semantic Web Knowl- edge by Combining Analogical Generalization with Structured Logistic Regression. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial In- telligence, pages 551-557.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modeling Relation Paths for Representation Learning of Knowledge Bases",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Song",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "705--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling Re- lation Paths for Representation Learning of Knowl- edge Bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, pages 705-714.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning Entity and Relation Embeddings for Knowledge Graph Completion",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence Learning",
"volume": "",
"issue": "",
"pages": "2181--2187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning Entity and Re- lation Embeddings for Knowledge Graph Comple- tion. In Proceedings of the Twenty-Ninth AAAI Con- ference on Artificial Intelligence Learning, pages 2181-2187.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On the Limited Memory BFGS Method for Large Scale Optimization",
"authors": [
{
"first": "D",
"middle": [
"C"
],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming",
"volume": "45",
"issue": "3",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. C. Liu and J. Nocedal. 1989. On the Limited Memory BFGS Method for Large Scale Optimiza- tion. Mathematical Programming, 45(3):503-528.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Context-Dependent Knowledge Graph Embedding",
"authors": [
{
"first": "Yuanfei",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1656--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanfei Luo, Quan Wang, Bin Wang, and Li Guo. 2015. Context-Dependent Knowledge Graph Em- bedding. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 1656-1661.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Linguistic Regularities in Continuous Space Word Representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "WordNet: A Lexical Database for English",
"authors": [
{
"first": "George",
"middle": [
"A."
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Compositional Vector Space Models for Knowledge Base Completion",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mc-Callum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "156--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Benjamin Roth, and Andrew Mc- Callum. 2015. Compositional Vector Space Models for Knowledge Base Completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 156-166.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "STransE: a novel embedding model of entities and relationships in knowledge bases",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Kairit",
"middle": [],
"last": "Sirts",
"suffix": ""
},
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "460--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. STransE: a novel embedding model of entities and relationships in knowledge bases. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 460-466.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A Three-Way Model for Collective Learning on Multi-Relational Data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "809--816",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A Three-Way Model for Collective Learning on Multi-Relational Data. In Proceedings of the 28th International Conference on Machine Learning, pages 809-816.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A Review of Relational Machine Learning for Knowledge Graphs",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2015. A Review of Relational Machine Learning for Knowledge Graphs. Proceed- ings of the IEEE, to appear.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Holographic Embeddings of Knowledge Graphs",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Rosasco",
"suffix": ""
},
{
"first": "Tomaso",
"middle": [
"A"
],
"last": "Poggio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1955--1961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. 2016. Holographic Embeddings of Knowl- edge Graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 1955- 1961.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing, pages 1532-1543.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Reasoning With Neural Tensor Networks for Knowledge Base Completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning With Neural Ten- sor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems 26, pages 926-934.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "YAGO: A Core of Semantic Knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A Core of Semantic Knowl- edge. In Proceedings of the 16th International Con- ference on World Wide Web, pages 697-706.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Link Prediction in Relational Data",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Ming-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems",
"volume": "16",
"issue": "",
"pages": "659--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Ming fai Wong, Pieter Abbeel, and Daphne Koller. 2004. Link Prediction in Relational Data. In Advances in Neural Information Processing Systems 16, pages 659-666.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Xi",
"middle": [
"Victoria"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Xi Victoria Lin, Wen tau Yih, Hoi- fung Poon, and Chris Quirk. 2016. Composi- tional Learning of Embeddings for Relation Paths in Knowledge Bases and Text. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, June.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Knowledge Graph Embedding by Translating on Hyperplanes",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1112--1119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intel- ligence, pages 1112-1119.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Knowledge Base Completion via Searchbased Question Answering",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "515--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge Base Completion via Search- based Question Answering. In Proceedings of the 23rd International Conference on World Wide Web, pages 515-526.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Embedding Entities and Relations for Learning and Inference in Knowledge Bases",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the International Confer- ence on Learning Representations.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "ADADELTA: An Adaptive Learning Rate Method. CoRR",
"authors": [
{
"first": "Matthew",
"middle": [
"D."
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "An example fragment of a KB.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "and TATEC (Garc\u00eda-Dur\u00e1n et al., 2016). Recently, Neelakantan et al. (2015), Gardner and Mitchell (2015), Luo et al. (2015), Lin et al. (2015a), Garc\u00eda-Dur\u00e1n et al. (2015), Guu et al. (2015) and",
"uris": null
},
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"text": "",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "The score functions f (h, r, t) and the optimization methods (Opt.) of several prominent embedding models for KB completion. In all of these models, the entities h and t are represented by vectors",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Statistics of the experimental datasets used in this study (and previous works). #E is the number of entities, #R is the number of relation types, and #Train, #Valid and #Test are the numbers of correct triples in the training, validation and test sets, respectively. Each validation and test set also contains the same number of incorrect triples as the number of correct triples.",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>presents the results of TransE and TransE-</td></tr><tr><td>NMM on triple classification, entity prediction</td></tr><tr><td>and relation prediction tasks on all experimental</td></tr><tr><td>datasets. The results show that TransE-NMM gen-</td></tr><tr><td>erally performs better than TransE in all three eval-</td></tr><tr><td>uation tasks.</td></tr><tr><td>Specifically, TransE-NMM obtains higher triple</td></tr></table>",
"text": "82.53 4324 0.102 19.21 2.37 0.679 99.93 TransE-NMM 86.82 84.37 3687 0.094 17.98 2.14 0.687 99.92",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Triple class. Entity pred. Mic. Mac. MR H@10</td></tr><tr><td>TransE-LLE</td><td>90.08 84.50 535 20.02</td></tr><tr><td>SME-LLE</td><td>93.64 89.39 253 37.14</td></tr><tr><td>SE-LLE</td><td>93.95 88.54 447 31.55</td></tr><tr><td colspan=\"2\">TransE-SkipG 85.33 80.06 385 30.52</td></tr><tr><td>SME-SkipG</td><td>92.86 89.65 293 39.70</td></tr><tr><td>SE-SkipG</td><td>93.07 87.98 412 31.12</td></tr><tr><td>TransE</td><td>92.13 88.96 309 36.55</td></tr><tr><td colspan=\"2\">TransE-NMM 94.57 90.95 238 37.55</td></tr><tr><td>: Micro-averaged accuracy results (in %)</td><td/></tr><tr><td>for triple classification on WN11 (labeled as W11)</td><td/></tr><tr><td>and FB13 (labeled as F13) test sets. Scores in bold</td><td/></tr><tr><td>and underline are the best and second best scores,</td><td/></tr><tr><td>respectively.</td><td/></tr><tr><td>classification results than TransE in all three ex-</td><td/></tr><tr><td>perimental datasets, for example, with 2.44% ab-</td><td/></tr><tr><td>solute improvement in the micro-averaged accu-</td><td/></tr><tr><td>racy on the NELL186 dataset (i.e. 31% reduc-</td><td/></tr><tr><td>tion in error). In terms of entity prediction,</td><td/></tr><tr><td>TransE-NMM obtains better mean rank, MRR and</td><td/></tr></table>",
"text": "",
"num": null,
"html": null
},
"TABREF8": {
"type_str": "table",
"content": "<table/>",
"text": "Qualitative examples.",
"num": null,
"html": null
}
}
}
}