|
{ |
|
"paper_id": "D15-1034", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:28:39.626237Z" |
|
}, |
|
"title": "Composing Relationships with Translations", |
|
"authors": [ |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garc\u00eda-Dur\u00e1n", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UTC CNRS", |
|
"location": { |
|
"addrLine": "Sorbonne universit\u00e9s", |
|
"postCode": "7253 60203", |
|
"settlement": "Heudiasyc, Compi\u00e8gne", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Performing link prediction in Knowledge Bases (KBs) with embedding-based models, like with the model TransE (Bordes et al., 2013) which represents relationships as translations in the embedding space, have shown promising results in recent years. Most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors. We show empirically that this allows to improve performance for predicting single relationships as well as compositions of pairs of them.", |
|
"pdf_parse": { |
|
"paper_id": "D15-1034", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Performing link prediction in Knowledge Bases (KBs) with embedding-based models, like with the model TransE (Bordes et al., 2013) which represents relationships as translations in the embedding space, have shown promising results in recent years. Most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors. We show empirically that this allows to improve performance for predicting single relationships as well as compositions of pairs of them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Performing link prediction on multi-relational data is becoming essential in order to complete the huge amount of missing information of the knowledge bases. These knowledge can be formalized as directed multi-relation graphs, whose node correspond to entities connected with edges encoding various kind of relationships. We denote these connections via triples (head, label, tail) . Link prediction consists in filling in incomplete triples like (head, label, ?) or (?, label, tail) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 381, |
|
"text": "(head, label, tail)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 463, |
|
"text": "(head, label, ?)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 483, |
|
"text": "(?, label, tail)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this context, embedding models (Wang et al., 2014; Lin et al., 2015; Jenatton et al., 2012; Socher et al., 2013 ) that attempt to learn lowdimensional vector or matrix representations of entities and relationships have shown promising performance in recent years. In particular, the basic model TRANSE (Bordes et al., 2013) has been proved to be very powerful. This model treats each relationship as a translation vector operating on the embedding representing the entities. Hence, for a triple (head, label, tail) , the vector embeddings of head and tail are learned so that they are connected through a translation parameterized by the vector associated with label. Many extensions have been proposed to improve the representation power of TRANSE while still keeping its simplicity, by adding some projections steps before the translation (Wang et al., 2014; Lin et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 53, |
|
"text": "(Wang et al., 2014;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 54, |
|
"end": 71, |
|
"text": "Lin et al., 2015;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 94, |
|
"text": "Jenatton et al., 2012;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 114, |
|
"text": "Socher et al., 2013", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 326, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 517, |
|
"text": "(head, label, tail)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 844, |
|
"end": 863, |
|
"text": "(Wang et al., 2014;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 864, |
|
"end": 881, |
|
"text": "Lin et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose an extension of TRANSE 1 that focuses on improving its representation of the underlying graph of multi-relational data by trying to learn compositions of relationships as sequences of translations in the embedding space. The idea is to train the embeddings by learning simple reasonings, such as the relationship people/nationality should give a similar result as the composition people/city of birth and city/country. In our approach, called RTRANSE, the training set is augmented with relevant examples of such compositions by performing constrained walks in the knowledge graph, and training so that sequences of translations lead to the desired result. The idea of compositionality to model multi-relational data was previously introduced in (Neelakantan et al., 2015) . That work composes relationships by means of recurrent neural networks (RNN) (one per relationship) with non-linearities. However, we show that there is a natural way to compose relationships by simply adding translation vectors and not requiring additional parameters, which makes it specially appealing because of its scalability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 772, |
|
"end": 798, |
|
"text": "(Neelakantan et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present experimental results that show the superiority of RTRANSE over TRANSE in terms of link prediction. A detailed evaluation, in which test examples are classified as easy or hard depending on their similarity with training data, highlights the improvement of RTRANSE on both categories. Our experiments include a new evaluation protocol, in which the model is directly asked to answer questions related to compositions of relations, such as (head, label 1 , label 2 , ?). RTRANSE also achieves significantly better performances than TRANSE on this new dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We describe RTRANSE in the next section, and present our experiments in Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The model we propose is inspired by TRANSE (Bordes et al., 2013) . In TRANSE, entities and relationships of a KB are mapped to low dimensional vectors, called embeddings. These embeddings are learnt so that for each fact (h, , t) in the KB, we have h + \u2248 t in the embedding space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 64, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Using translations for relationships naturally leads to embed the composition of two relationships as the sum of their embeddings: on a path (h, , t), (t, , t ), we should have h+ + \u2248 t in the embedding space. The original TRANSE does not enforce that the embeddings accurately reproduce such compositions. The recurrent TRANSE we propose here has a modified training stage to include such compositions. This should allow to model simple reasonings in the KB, such as people/nationality is similar to the composition of people/city of birth and city/country.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We describe in this section our model in its full generality, which allows to deal with compositions of an arbitrary number of relationships, even though in this first work we experimented only with compositions of two relationships.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Triples that are the result of a compositions are denoted by (h,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "{ i } p i=1 , t),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where p is the number of relationships that are composed to go from h to t. Such a path means that there exist entities e 1 , ..., e p+1 , with e 1 = h and e p+1 = t such that for all k, (e k , k , e k+1 ) is a fact in the KB. Our model, RTRANSE for recurrent TRANSE, represents each step", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "s k (h, { i } p i=1 , t)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "along the path in the KB with the recurrence relationship (boldface characters denote embedding vectors i.e. h is the embedding vector of the entity h):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "s 1 (h, { i } p i=1 , t) = h s k+1 (h, { i } p i=1 , t) = s k (h, { i } p i=1 , t) + k .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Then, the energy of a triple is computed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "d(h, { i } p i=1 , t) = ||s p (h, { i } p i=1 , t) \u2212 t|| 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent TransE", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The experience of the paper is motivated by learning simple reasonings in the KB through the compositions of relationships. Therefore, we restricted our analysis to paths of length 2 created as follows. First, for each fact (h, , t), retrieve all paths (h, { 1 , 2 }, t) such that there is e such that both (h, 1 , e) and (e, 2 , t) are in the KB. Then, we filter out paths where (h, 1 , e) = (h, , t) or (e, 2 , t) = (h, , t), as well as the paths with 1 = 2 and h = e = t. We focused on \"unambiguous\" paths, so that the reasoning might actually make sense. In particular, we considered only paths where 1 is either a 1-to-1 or a 1-to-many relationship, and where 2 is either a 1-to-1 or a many-to-1 relationship. In our experiments, the paths created for training only consider the training subset of facts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Path construction and filtering", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In the remainder of the paper, such paths of length 2 are called quadruples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Path construction and filtering", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our training objective is decomposed in two parts: the first one is the ranking criterion on triples of TRANSE, ignoring quadruples. Paths are then taken into account through additional regularization terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and regularization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Denoting by S the set of facts in the KB, the first part of the training objective is the following ranking criterion that operates on triples", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and regularization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(h, ,t)\u2208S (h , ,t )\u2208S (h, ,t) \u03b3 + d(h, , t) \u2212 d(h , , t ) + ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and regularization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where [x] + = max(x, 0) is the positive part of x, \u03b3 is a margin hyperparameter and S (h, ,t) is the set of corrupted triples created from (h, , t) by replacing either h or t with another KB entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and regularization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "This ranking loss effectively trains so that the embedding of the tail is the nearest neighbor of the translated head, but it does not guarantee that the distance between the tail and the translated head is small. The nearest neighbor criterion is sufficient to make inference over simple triples, but making sure that the distance is small is necessary for the composition rule to be accurate. In order to account for the compositionality of relationships, we add two additional regularization terms: FAMILY FB15K ENTITIES 721 14,951 RELATIONSHIPS 7 1,345 TRAINING TRIPLES 8,461 483,142 TRAINING QUAD. -30,252 VALIDATION TRIPLES 2,820 50,000 TEST TRIPLES 2,821 59,071 TEST QUAD. -1,852 The first criterion only applies to original facts of the KB, while the second term applies to quadruples. N \u2192{ 1 , 2 } , which involves both the relationships of the quadruple and the relationship from which it was created, is the number of paths involving relationships { 1 , 2 } created from a fact involving , normalized by the total number of quadruples created from facts involving . This criterion puts more weight on paths that are reliable as an alternative for a relationship, for instance {people/city of birth, city/country} is likely a better alternative to people/nationality than {people/writer of the film, film/film release region}. Finally, a regularization term \u00b5||e|| 2 2 is added for each entity embedding e.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 502, |
|
"end": 697, |
|
"text": "FAMILY FB15K ENTITIES 721 14,951 RELATIONSHIPS 7 1,345 TRAINING TRIPLES 8,461 483,142 TRAINING QUAD. -30,252 VALIDATION TRIPLES 2,820 50,000 TEST TRIPLES 2,821 59,071 TEST QUAD.", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training and regularization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 \u03bb (h, ,t)\u2208S d(h, , t) 2 \u2022 \u03b1 (h,{ 1 , 2 },t)\u2208S N \u2192{ 1 , 2 } d(h, { 1 , 2 }, t) 2 . DATA SET", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and regularization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "This section presents experiments on the benchmark FB15K introduced in (Bordes et al., 2013) and on FAMILY, a slightly extended version of the artificial database described in (Garc\u00eda-Dur\u00e1n et al., 2014) . Table 1 gives their statistics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 92, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 203, |
|
"text": "(Garc\u00eda-Dur\u00e1n et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 213, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Data FB15K is a subset of Freebase, a very large database of generic facts gathering more than 1.2 billion triples and 80 million entities. Inspired by (Hinton, 1986) , FAMILY is a database that contains triples expressing family relationships (cousin of, has ancestor, married to, parent of, related to, sibling of, uncle of) among the mem-bers of 5 families along 6 generations. This dataset is artificial and each family is organized in a layered tree structure where each layer refers to a generation. Families are connected among them by marriage links between two members, randomly sampled from the same layer of different families. Interestingly on this dataset, there are obvious compositional relationships like uncle of \u2248 sibling of + parent of or parent of \u2248 married to + parent of, among others.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 166, |
|
"text": "(Hinton, 1986)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Setting Our main comparison is TRANSE so we followed the same experimental setting as in (Bordes et al., 2013), using ranking metrics for evaluation. For each test triple we replaced the head by each of the entities in turn, and then computed the score of each of these candidates and sorted them. Since other positive candidates (i.e. entities forming true triples) can be ranked higher than the target one, we filtered out all the positive candidates existing in either the training, validation and test set, except the target one, from the ranking and then we kept the rank of the target entity. The same procedure is repeated but removing the tail instead of the head. The filtered mean rank (mean rank in the rest) is the average of these ranks, and the filtered Hits@10 (H@10 in the rest) is the proportion of target entities in the top 10 predictions. The embedding dimensions were set to 20 for FAMILY and 100 for FB15K. Training was performed by stochastic gradient descent, stopping after for 500 epochs. On FB15K, we used the embeddings of TRANSE to initialize RTRANSE, and we set a learning rate of 0.001 to fine-tune RTRANSE. On FAMILY, both algorithms were initialized randomly and used a learning rate of 0.01. The mean rank was used as a validation criterion, and the values of \u03b3, \u03bb, \u03b1 and \u00b5 were chosen respectively among {0.25, 0.5, 1}, {1e \u22124 , 1e \u22125 , 0}, {0.1, 0.05, 0.1, 0.01, 0.005} and {1e \u22124 , 1e \u22125 , 0}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Overall performances Experiments on FAM-ILY show a quantitative improvement of the performance of RTRANSE : where TRANSE gets a mean rank of 6.7 and a H@5 of 68.7, RTRANSE get a performance of 6.3 and 72.3 respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similarly, on FB15K, Table 2 (last row) shows that training on longer paths (length 2 here) actually consistently improves the performance while predicting heads and tails of triples only: the overall H@10 improves by almost 5% from 71.5 for Table 3 : Examples of predictions on quadruples of TRANSE and RTRANSE. The relation paths {l 1 , l 2 } of the first two examples encode the single the relationship l tv program/country of origin; the third one stands for /language/human language/region and the last two ones for /location/location/containedby. The correct answer is in bold.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 249, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Detailed results In order to better understand the gains of RTRANSE, we performed a detailed evaluation on FB15K, by classifying the test triples along two axes: easy vs hard and with composition vs without composition. A test triple (h, , t) is easy if its head and tail are connected by a triple in the training set, i.e. if either (h, , t) or (t, , h) is seen in train for some relationship . Otherwise, the triple is hard. Orthogonally, the test triple (h, , t) is with composition if there is at least one path { 1 , 2 } for the relationship , regardless of the existence of that specific path between the entities h and t. If no such path exists, (h, , t) is without composition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TRANSE to 76.2 for RTRANSE.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The detailed results are shown in Table 2 . We can see that comparatively to TRANSE, RTANSE particularly improves performances in terms of H@10 on triples with composition, improving on easy triples by 4.2% (from 78.8% to 83,0%) and hard triples by 2.5% (from 46.8% to 49.3%). The main gains are still on easy triples, and in fact the H@10 on easy triples without composition increases by 4%, from 71.3% to 75.3%. The mean rank also considerably improves on easy triples, and stays somehow still on hard ones. All in all, the results show that considering paths during training very significantly improves performances, and the results on triples with composition suggest that RTRANSE is indeed capable of capturing the evidence of links that exist in longer paths.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "TRANSE to 76.2 for RTRANSE.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While usual evaluations for link prediction in KBs focus on predicting a missing element of a test triple, we propose here to extend the evaluation to answering more complex questions, such as (h, { 1 , 2 }, ?) or (?, { 1 , 2 }, t).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results on quadruples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Examples Table 3 presents examples of predictions of both TRANSE and our model RTRANSE on such quadruples. The two first examples try to predict the origin of two TV series from the nationality of the actors that regularly appear in them (regular tv appearance). In the first one, the american actor phil lamarr is the only entity connected to the american TV show madtv through the relationship regular tv appearance. RTRANSE is able to correctly infer the country of origin from this information since it forces country of origin \u2248 regular tv appearance + nationality. On the other side TRANSE is affected by the cascading error since the ranking loss does not guarantee that the distance between h + l 1 and phil lamarr is small, so when summing l 2 it eventually ends up closer to Ireland rather than USA. In contrast, the second example shows that answering that question by using that path is sometimes difficult: the members of the cast of that TV show have different nationalities, so RTRANSE lists the nationalities of these ones and the correct one is ranked third. TRANSE is again more affected than RTRANSE by the cascading error. In the third one, RTRANSE deducts the main region where malay is spoken from the continent of the country with the most number of speakers of that language. In the last two examples, our model infers the location of those universities by forcing an equivalence between their location and the location of their respective campus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 16, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on quadruples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Prediction performance For a more quantitative analysis, we have generated a new test dataset of link prediction on quadruples on FB15K. This test set was created by generating the paths from the usual test set (the triple test set) and removing those quadruples that are used for training. We obtain 1,852 quadruples. The overall experimental protocol is the same as before, trying to predict the head or tail of these quadruple in turn.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results on quadruples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "On that evaluation protocol, RTRANSE has a mean rank of 114.0 and a H@10 of 68.2%, while TRANSE obtains a mean rank of 159.9 and a H@10 of 65.2% (using the same models as in the previous subsection). We can see that learning on paths improves performances on both metrics, with a gain of 3% in terms of H@10 and an important gain of about 46 in mean rank, which corresponds to a relative improvement of about 30%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results on quadruples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We have proposed to learn embeddings of compositions of relationships in the translation model for link prediction in KBs. Our experimental results show that this approach is promising.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We considered in this work a restricted set of small paths of length two. We leave the study of more general paths to future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Code available in https://github.com/glorotxa/SME", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was carried out in the framework of the Labex MS2T (ANR-11-IDEX-0004-02), and was funded by the French National Agency for Research (EVEREST-12-JS02-005-01).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Translating embeddings for modeling multirelational data", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garc\u00eda-Dur\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oksana", |
|
"middle": [], |
|
"last": "Yakhnenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2787--2795", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garc\u00eda- Dur\u00e1n, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in Neural Information Processing Systems, pages 2787-2795.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Effective blending of two and threeway interactions for modeling multi-relational data", |
|
"authors": [ |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garc\u00eda-Dur\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ECML PKDD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alberto Garc\u00eda-Dur\u00e1n, Antoine Bordes, and Nicolas Usunier. 2014. Effective blending of two and three- way interactions for modeling multi-relational data. In ECML PKDD 2014. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning distributed representations of concepts", |
|
"authors": [ |
|
{

"first": "Geoffrey",

"middle": [

"E"

],

"last": "Hinton",

"suffix": ""

}
|
], |
|
"year": 1986, |
|
"venue": "Proceedings of the eighth annual conference of the cognitive science society", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey E Hinton. 1986. Learning distributed repre- sentations of concepts. In Proceedings of the eighth annual conference of the cognitive science society, volume 1, page 12. Amherst, MA.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A latent factor model for highly multi-relational data", |
|
"authors": [ |
|
{ |
|
"first": "Rodolphe", |
|
"middle": [], |
|
"last": "Jenatton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Roux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Obozinski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "NIPS 25", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rodolphe Jenatton, Nicolas Le Roux, Antoine Bordes, and Guillaume Obozinski. 2012. A latent factor model for highly multi-relational data. In NIPS 25.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning entity and relation embeddings for knowledge graph completion", |
|
"authors": [ |
|
{ |
|
"first": "Yankai", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of AAAI'15", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In Pro- ceedings of AAAI'15.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Compositional vector space models for knowledge base completion", |
|
"authors": [ |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mc-Callum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1504.06662" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arvind Neelakantan, Benjamin Roth, and Andrew Mc- Callum. 2015. Compositional vector space mod- els for knowledge base completion. arXiv preprint arXiv:1504.06662.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Reasoning With Neural Tensor Networks For Knowledge Base Completion", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems 26", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning With Neural Tensor Networks For Knowledge Base Completion. In Advances in Neural Information Processing Sys- tems 26.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Knowledge graph embedding by translating on hyperplanes", |
|
"authors": [ |
|
{ |
|
"first": "Zhen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianwen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianlin", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1112--1119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Proceedings of the Twenty- Eighth AAAI Conference on Artificial Intelligence, pages 1112-1119.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>MODEL</td><td colspan=\"2\">TRANSE MR H@10</td><td colspan=\"2\">RTRANSE MR H@10</td></tr><tr><td>EASY</td><td>17.7</td><td>76.8</td><td>12.5</td><td>82.2</td></tr><tr><td>HARD</td><td>191.0</td><td>48.9</td><td>205.7</td><td>51.0</td></tr><tr><td>EASY W. COMP.</td><td>16.4</td><td>78.8</td><td>11.6</td><td>83.0</td></tr><tr><td>EASY W/O COMP.</td><td>21.6</td><td>71.3</td><td>16.0</td><td>75.3</td></tr><tr><td>HARD W. COMP.</td><td>208.1</td><td>46.8</td><td>212.2</td><td>49.3</td></tr><tr><td colspan=\"2\">HARD W/O COMP. 122.9</td><td>57.0</td><td>123.8</td><td>57.5</td></tr><tr><td>OVERALL</td><td>50.7</td><td>71.5</td><td>49.5</td><td>76.2</td></tr></table>", |
|
"html": null, |
|
"text": "Statistics of the datasets." |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Detailed performances on FB15k of TRANSE and RTRANSE. H@10 are in %. W." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td>RTRANSE</td><td>TRANSE</td></tr><tr><td>h: madtv</td><td>U.S.A.</td><td>Ireland</td></tr><tr><td>l 1 : regular TV appearance</td><td>Ireland</td><td>U.S.A.</td></tr><tr><td>l 2 : nationality</td><td>Japan</td><td>U.K.</td></tr><tr><td>h: stargate atlantis</td><td>Hawaii</td><td>Scotland</td></tr><tr><td>l 1 : regular TV appearance</td><td>Scotland</td><td>Hawaii</td></tr><tr><td>l 2 : nationality</td><td>U.S.A.</td><td>U.K.</td></tr><tr><td>h: malay</td><td>southeast asia</td><td>taiwan</td></tr><tr><td>l 1 : language/main country</td><td>malaysia</td><td>southeast asia</td></tr><tr><td>l 2 : continent</td><td>asia</td><td>philippines</td></tr><tr><td>h: indiana state university</td><td>the hoosier state</td><td>maryland</td></tr><tr><td>l 1 : institution/campuses</td><td>terre haute</td><td>rhode island</td></tr><tr><td>l 2 : location/state province region</td><td>rhode island</td><td>the constitution state</td></tr><tr><td>h: university of victoria</td><td>victoria</td><td>kelowna</td></tr><tr><td>l 1 : institution/campuses</td><td>kurnaby</td><td>toronto</td></tr><tr><td>l 2 : location/citytown</td><td>kelowna</td><td>ottawa</td></tr></table>", |
|
"html": null, |
|
"text": "Nearest entities to h + l 1 + l 2" |
|
} |
|
} |
|
} |
|
} |