{
"paper_id": "S17-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:11.906699Z"
},
"title": "Distributed Prediction of Relations for Entities: The Easy, The Difficult, and The Impossible",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stuttgart University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Pompeu Fabra",
"location": {
"settlement": "Barcelona",
"country": "Spain"
}
},
"email": "[email protected]"
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stuttgart University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings are supposed to provide easy access to semantic relations such as \"male of\" (man-woman). While this claim has been investigated for concepts, little is known about the distributional behavior of relations of (Named) Entities. We describe two word embedding-based models that predict values for relational attributes of entities, and analyse them. The task is challenging, with major performance differences between relations. Contrary to many NLP tasks, high difficulty for a relation does not result from low frequency, but from (a) one-to-many mappings; and (b) lack of context patterns expressing the relation that are easy to pick up by word embeddings.",
"pdf_parse": {
"paper_id": "S17-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings are supposed to provide easy access to semantic relations such as \"male of\" (man-woman). While this claim has been investigated for concepts, little is known about the distributional behavior of relations of (Named) Entities. We describe two word embedding-based models that predict values for relational attributes of entities, and analyse them. The task is challenging, with major performance differences between relations. Contrary to many NLP tasks, high difficulty for a relation does not result from low frequency, but from (a) one-to-many mappings; and (b) lack of context patterns expressing the relation that are easy to pick up by word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A central claim about distributed models of word meaning (e.g., Mikolov et al. (2013) ) is that word embedding space provides easy access to semantic relations. E.g., Mikolov et al.'s space was shown to encode the \"male-female relation\" linearly, as a vector ( # \u00bb man \u2212 # \u00bb woman = # \u00bb king \u2212 # \u00bb queen). The accessibility of semantic relations was subsequently examined in more detail. Rei and Briscoe (2014) and Melamud et al. (2014) reported successful modeling of lexical relations such as hypernymy and synonymy. K\u00f6per et al. (2015) considered a broader range of relationships,with mixed results. Levy and Goldberg (2014b) developed an improved, nonlinear relation extraction method.",
"cite_spans": [
{
"start": 64,
"end": 85,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF12"
},
{
"start": 388,
"end": 410,
"text": "Rei and Briscoe (2014)",
"ref_id": "BIBREF15"
},
{
"start": 415,
"end": 436,
"text": "Melamud et al. (2014)",
"ref_id": "BIBREF11"
},
{
"start": 519,
"end": 538,
"text": "K\u00f6per et al. (2015)",
"ref_id": "BIBREF7"
},
{
"start": 603,
"end": 628,
"text": "Levy and Goldberg (2014b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These studies were conducted primarily on concepts and their semantic relations, like hypernym(politician) = person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meanwhile, entities and the relations they partake in are much less well understood. 1 Entities are instances of concepts, i.e., they refer to specific individual objects in the real world, for example, Donald Trump is an instance of the concept politician. Consequently, entities are generally associated with a rich set of numeric and relational attributes (for politician instances: size, office, etc.). In contrast to concepts, the values of these attributes tend to be discrete (Herbelot, 2015) : while the size of politician is best described by a probability distribution, the size of Donald Trump is 1.88m. Since distributional representations are notoriously bad at handling discrete knowledge (Fodor and Lepore, 1999; Smolensky, 1990) , this raises the question of how well such models can capture entity-related knowledge.",
"cite_spans": [
{
"start": 483,
"end": 499,
"text": "(Herbelot, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 703,
"end": 727,
"text": "(Fodor and Lepore, 1999;",
"ref_id": "BIBREF2"
},
{
"start": 728,
"end": 744,
"text": "Smolensky, 1990)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our previous work (Gupta et al., 2015) , we analysed distributional prediction of numeric attributes of entities, found a large variance in quality among attributes, and identified factors determining prediction difficulty. A corresponding analysis for relational (categorial) attributes of entities is still missing, even though entities are highly relevant for NLP. This is evident from the highly active area of knowledge base completion (KBC), the task of extending incomplete entity information in knowledge bases such as Yago or Wikidata (e.g., Bordes et al., 2013; Freitas et al., 2014; Neelakantan and Chang, 2015; Guu et al., 2015; Krishnamurthy and Mitchell, 2015) .",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Gupta et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 554,
"end": 574,
"text": "Bordes et al., 2013;",
"ref_id": null
},
{
"start": 575,
"end": 596,
"text": "Freitas et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 597,
"end": 625,
"text": "Neelakantan and Chang, 2015;",
"ref_id": "BIBREF13"
},
{
"start": 626,
"end": 643,
"text": "Guu et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 644,
"end": 677,
"text": "Krishnamurthy and Mitchell, 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we assess to what extent relational attributes of entities are easily accessible from word embedding space. To this end, we define two models that predict, given a target entity (Star Wars) and a relation (director), a distributed representation for the relatum (George Lucas). We carry out a detailed per-relation analyses of their performance on seven ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both models predict a vector for a relatum r (plural: relata) given a target entity vector t and a symbolic relation \u03c1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "The Linear Model (LinM) is inspired by Mikolov et al.'s \"phrase analogy\" evaluation of word embeddings ( # \u00bb man \u2212 # \u00bb woman = # \u00bb king \u2212 # \u00bb queen). However, instead of looking at individual words, we extract representations of semantic relations from sets of pairs T \u03c1 = {(t i , \u03c1, r i )} instantiating the relation \u03c1. For each relation \u03c1, LinM computes the average (or centroid) difference vector over the set of training pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r(t, \u03c1) = t + (r,\u03c1,t)\u2208T\u03c1 (r \u2212 t)/N",
"eq_num": "(1)"
}
],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "That is, the predictedr for an input (t, \u03c1) is the sum of the target vector and the relation's prototype. This model should work well if relations are represented additively in the embedding space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
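{
"text": "For concreteness, the LinM rule above can be sketched in a few lines of NumPy; the pair list and the embedding lookup vecs are illustrative assumptions, not the authors' implementation:\n\nimport numpy as np\n\ndef linm_prototype(pairs, vecs):\n    # Average (centroid) difference vector over one relation's training pairs\n    return np.mean([vecs[r] - vecs[t] for (t, r) in pairs], axis=0)\n\ndef linm_predict(t, prototype, vecs):\n    # Predicted relatum vector: target vector plus the relation prototype\n    return vecs[t] + prototype",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},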
{
"text": "The Nonlinear Model (NonLinM) is a feedforward network ( Figure 1 ) introducing a nonlinearity, inspired by Levy and Goldberg (2014b) and similar to models used in KBC, e.g., Socher et al. (2013) . The relatum vector is predicted a\u015d",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "Levy and Goldberg (2014b)",
"ref_id": "BIBREF10"
},
{
"start": 175,
"end": 195,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 57,
"end": 65,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "r \u03b8 (t, \u03c1) = \u03c3(\u03c3(t \u2022 W in + v \u03c1 \u2022 W r ) \u2022 W out ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "where v \u03c1 is the relation encoded as an mdimensional one-hot vector and the three matrices W in , W r , W out form the model parameters \u03b8. For the nonlinearity \u03c3, we use tanh.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
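{
"text": "Equation 2 corresponds to the following minimal NumPy forward pass; the function name and parameter shapes are assumptions for illustration (t_vec has dimension n, W_in is n-by-k, W_r is m-by-k, W_out is k-by-n):\n\nimport numpy as np\n\ndef nonlinm_predict(t_vec, rel_id, m, W_in, W_r, W_out):\n    v_rho = np.zeros(m)\n    v_rho[rel_id] = 1.0  # one-hot encoding of the relation\n    hidden = np.tanh(t_vec @ W_in + v_rho @ W_r)  # nonlinear composition of target and relation\n    return np.tanh(hidden @ W_out)  # predicted relatum vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},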
{
"text": "In this model, the hidden layer represents a nonlinearly transformed composition of target and relation from which the relatum can be predicted. NonLinM can theoretically make accurate predictions even if relations are not additive in embedding space. Also, its sharing of training data among relations should lead to more reliable learning for infrequent relations. As objective function, we use",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8) = (t,r) [ cos(r \u03b8 (t, \u03c1), r) \u2212 \u03b1 \u2022 cos(r \u03b8 (t, \u03c1), nc(r \u03b8 (t, \u03c1)))]",
"eq_num": "(3)"
}
],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
{
"text": "where nc(v) is the nearest confounder of v, i.e., the next neighbor of v that is not a relatum for the current target-relation pair. Thus, we minimize the cosine distance between the predicted vector and the gold vector for the relatum while maximizing the cosine distance of the prediction to the closest negative example. We introduce a weight \u03b1 \u2208 [0, 1] for the negative sampling term as a hyper-parameter optimized on the development set. During training, we apply gradient descent with the adaptive learning rate method AdaDelta (Zeiler, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},
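{
"text": "As a sketch, the per-pair term of Equation 3 (to be maximized) can be written as follows; the confounder candidate set is simplified relative to the paper, and the helper names are assumptions:\n\nimport numpy as np\n\ndef cos(a, b):\n    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef pair_objective(pred, gold, confounders, alpha):\n    # Nearest confounder: the most similar candidate that is not a correct relatum\n    nc = max(confounders, key=lambda c: cos(pred, c))\n    # Reward similarity to the gold relatum, penalize similarity to the confounder\n    return cos(pred, gold) - alpha * cos(pred, nc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Relatum Prediction Models",
"sec_num": "2"
},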
{
"text": "Data. We extract relation data from FreeBase. We follow our earlier work Gupta et al. (2015) , but go beyond its limitation to two domains (country, citytown). We experiment with seven major FreeBase domains: animal, book, citytown, country, employer, organization, people.",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "Gupta et al. (2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We limit the number of datapoints of very large relation types to 3000 with random sampling for efficiency reasons. We only remove relation types with fewer than 3 datapoints. This results in a quite challenging dataset that demonstrates the generalizability of our models and is roughly comparable, in variety and size, to the FB15K dataset (Bordes et al., 2013) . The distributed representations for all entities come from the 1000-dimensional \"Google News\" skip-gram model (Mikolov et al., 2013) for Free-Base entities 2 trained on a 100G token news corpus. We only retain relation datapoints where both target and relatum are covered in the Google News vectors. Table 1 shows the numbers of relations and unique objects (target plus relata). We split all domains into training, validation, and test sets (60%-20%-20%). The split applies to each relation type: in test, we face no unseen relation types, but unseen datapoints for each relation. 3 Hyperparameter settings. The NonLinM model uses an L 2 norm constraint of s=3. We adopt the best AdaDelta parameters from Zeiler (2012), viz. \u03c1 = 0.95 and = 10 \u22126 . We optimize the negative sampling weight \u03b1 (cf. Eq. 3) by line search with a step size of 0.1 on the largest domain, country, and find 0.6 to be the optimal value, which we reuse for all domains. Due to the varying dimensionality m of the relation vector per domain, we set the size of the hidden layer to k = 2n + m/10 (n is the dimensionality of the word embeddings, cf. Figure 1) . We train all models for a maximum of 1000 epochs with early stopping.",
"cite_spans": [
{
"start": 342,
"end": 363,
"text": "(Bordes et al., 2013)",
"ref_id": null
},
{
"start": 476,
"end": 498,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 948,
"end": 949,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 666,
"end": 673,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1488,
"end": 1497,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Evaluation. Models that predict vectors in a continuous vector space, like ours, cannot expect to predict the output vector precisely. Thus, we apply nearest neighbor mapping using the set of all unique targets and relata in each domain (cf. Table 1) to identify the correct relatum name. We then perform an Information Retrieval-style ranking evaluation: We compute the rank of the correct relatum r, given the target t and the relation \u03c1, in the test set T and aggregate these ranks to compute the mean reciprocal rank (MRR):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M RR = 1 ||T || (t,\u03c1,r)\u2208T 1 rank t,\u03c1 (r)",
"eq_num": "(4)"
}
],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "where rank is the nearest neighbor rank of the relatum vector r given the prediction of the model for the input t, \u03c1. We report results at the relation level as well as macro-and micro-averaged MRR for the complete dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
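{
"text": "A short sketch of the MRR computation in Equation 4; predict, vecs, and vocab (the model's prediction function, the embedding lookup, and the set of unique targets and relata) are assumed inputs:\n\nimport numpy as np\n\ndef mrr(test_triples, predict, vecs, vocab):\n    def cos(a, b):\n        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n    reciprocal_ranks = []\n    for t, rho, r in test_triples:\n        pred = predict(t, rho)\n        ranked = sorted(vocab, key=lambda w: cos(pred, vecs[w]), reverse=True)\n        reciprocal_ranks.append(1.0 / (ranked.index(r) + 1))  # rank of gold relatum\n    return float(np.mean(reciprocal_ranks))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},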
{
"text": "Frequency Baseline (BL). Our baseline model ignores the target. For each relation, it predicts the frequency-ordered list of all training set relata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
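{
"text": "The baseline amounts to the following sketch; the triple format (target, relation, relatum) is an assumption:\n\nfrom collections import Counter\n\ndef baseline_rankings(train_triples):\n    # For each relation, rank all training relata by frequency; the target is ignored\n    counts = {}\n    for t, rho, r in train_triples:\n        counts.setdefault(rho, Counter())[r] += 1\n    return {rho: [r for r, _ in c.most_common()] for rho, c in counts.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},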
{
"text": "Overall results. Table 1 shows that the nonlinear model NonLinM consistently gives the best results and statistically outperforms the linear model on all domains according to a Wilcoxon test (\u03b1=0.05). Both LinM and NonLinM clearly outclass the baseline. Most MRRs are around 0.25 (micro average 0.22), with one outlier, at 0.18, for country, the largest domain. Overall, the numbers may appear disappointing at first glance: these MRRs mean that the correct relatum is typically around the fourth nearest neighbor of the prediction vector. This indicates that open-vocabulary relatum prediction in a space of tens of thousands of words is a challenging task that warrants more detailed analysis. We observe that the nonlinear model achieves reasonable results even for sparse domains (cf. the low baseline), which we take as evidence for its generalization capabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Analysis at relation level. Table 1 shows the number of relations with good MRRs (greater than 0.3) and bad MRRs (smaller than 0.1) for each relation. While the numbers vary across domains, the models tend to do badly on around 40-50% of all relations, and obtain good scores for less than one third of all relations. Figure 2 shows the distribution for the best domain (animal) and the worst one (country) . Both plots show a Zipfian distribution with a rel- MRR for country atively small set of well-modelled relations and a long tail of poorly modelled ones. NonLinM does better or as well as LinM for almost all relations. The performances of the two models are very tighly correlated for difficult relations; they only differ for the easier ones, where NonLinM's evidently captures the data better.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 318,
"end": 326,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Qualitatively, the two models differ substantially with regard to prediction patterns at the level of targets. Table 2 shows the first predictions for three targets from two relations: continent, where NonLinM outperforms LinM, and capital, where it is the other way around. NonLinM's errors consist almost exclusively in predicting semantically similar entities of the correct relatum type, e.g., predicting Quito (the capital of Ecuador) as capital of Venezuela. In contrast, the LinM model has a harder time capturing the correct type, predicting country entities as capitals (e.g., Nepal as the capital of Nepal). ",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "FreeBase relations hard to model? To test for sparsity problems, we first computed the correlation between model performance and the \"usual suspect\" relation frequency (number of instances for each relation). In NLP applications, this typically yields a high positive correlation. The second-tolast column of Table 1 shows that this is not true for our dataset. We find a substantial positive correlation only for people, correlations around zero for most domains, and substantial negative ones for organization and country. For these domains, therefore, frequent relations are actually harder to model. Further analysis revealed two main sources of difficulty:",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis of Difficulty. So what makes many",
"sec_num": null
},
{
"text": "(1) One-to-many relations. Relations with many datapoints tend to be one-to-many. We assume this to be a major source of difficulty, since the model is presented with multiple relata for the same target during training and will typically learn to predict a centroid of these relata. As an extreme case, consider a relation like administrative divisions that relates the US to all of its federal states: the resulting prediction will arguably be dissimilar to every individual state. To test this hypothesis, we computed the rank correlation at the relation level between the number of relata per target and NonLinM performance, shown in the last column of Table 1 . Indeed, we find a strong negative correlation for every single domain. In addition, Figure 3 plots relation performance (y axis) against the ratio of relata per target (x axis: one-to-one on the left, one-to-many on the right) for animal and country.",
"cite_spans": [],
"ref_spans": [
{
"start": 656,
"end": 663,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 750,
"end": 758,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis of Difficulty. So what makes many",
"sec_num": null
},
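{
"text": "The rank correlation used here can be computed as in the following sketch; relation_stats, mapping each relation to a pair of (relata-per-target ratio, MRR), is an assumed input format:\n\nfrom scipy.stats import spearmanr\n\ndef ratio_mrr_correlation(relation_stats):\n    # Spearman rank correlation between one-to-many-ness and per-relation performance\n    ratios, mrrs = zip(*relation_stats.values())\n    return spearmanr(ratios, mrrs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Difficulty",
"sec_num": null
},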
{
"text": "Qualitatively, Table 3 shows examples for the three most easy and difficult relations for country. The list suggests that relations tend to be easy when they associate targets with single relata: the relation country maps territories and colonies onto their motherlands, and the tournaments relation is only populated with a few Commonwealth games (cf. the high baseline). In contrast, relations that map targets on many relata are difficult, such as administrative divisions of countries, or a list of disputed territories. Note that this is not an evaluation issue, since MRR can deal with multiple correct answers. Our models do badly because they lack strategies to address these cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Analysis of Difficulty. So what makes many",
"sec_num": null
},
{
"text": "(2) Lack of contextual support. One-to-many relations are not the only culprit. Strikingly, Figure 2 shows that a low target-relatum ratio is a necessary condition for good performance (the upper right corners are empty), but not a sufficient one (the lower left corners are not empty). Some relations are not modelled well even though they are (almost) one-to-one. Examples include currency formerly used or named after for country and place of origin for animal. Further analysis indicated that these relations suffer from what Gupta et al. (2015) called lack of contextual support: Although they are expressed overtly in the linguistic context of the target and relatum (and often even frequently so), their realizations cannot be tied to individual words or topics. Instead, they are expressed by relatively specific linguistic patterns, often predicate-argument structures (X used to pay with Y, X is named in the honor of Y). Such structures are hard to pick up by word embedding models that make the bag-of-words independence assumption among context words.",
"cite_spans": [
{
"start": 530,
"end": 549,
"text": "Gupta et al. (2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Difficulty. So what makes many",
"sec_num": null
},
{
"text": "This paper considers the prediction of related entities (\"relata\") given a pair of a target Named Entity and a relation (Star Wars, director, ?) on the basis of distributional information. This task is challenging due to the more discrete behavior of attributes of entities as compared to concepts. We provide an analysis based on two models that use vector representations for both the targets and the relata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our results yield new insights into how embedding spaces represent entity relations: they are generally not represented additively, and nonlinearity helps. They also complement insights on the be- (Gupta et al., 2015) : Relations, like numeric attributes, are difficult to model if they are not specifically expressed in the lingusitic context of target and relatum. A new challenge specific to relations are situations where a single target maps onto many relata. If none of the two problems applies, relations are easy to model. If one applies, they are difficult.",
"cite_spans": [
{
"start": 197,
"end": 217,
"text": "(Gupta et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "And if both apply, they are essentially impossible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Among the two challenges, the problem of oneto-many relations appears easier to address, since a continuous output vector is, at least in principle, able to be similar to many relata. In the future, we will extend the model to deal better with oneto-many relations. While the lack of contextual support seems more fundamental, it could be addressed by either using syntax-based embeddings (Levy and Goldberg, 2014a ) that can better pick up the specific context patterns characteristic for these relations, or by optimizing the input word embeddings for the task. This becomes a similar problem to joint training of representations from knowledge base structure and textual evidence (Perozzi et al., 2014; Toutanova et al., 2015) .",
"cite_spans": [
{
"start": 389,
"end": 414,
"text": "(Levy and Goldberg, 2014a",
"ref_id": "BIBREF9"
},
{
"start": 683,
"end": 705,
"text": "(Perozzi et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 706,
"end": 729,
"text": "Toutanova et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The original dataset byMikolov et al. (2013) did contain a small number of entity-entity relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset are available at: http://www.ims.unistuttgart.de/data/RelationPrediction.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154) and the DFG (SFB 732, Project D10). This paper reflects the authors' view only, and the EU is not responsible for any use that may be made of the information it contains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Translating embeddings for modeling multirelational data",
"authors": [],
"year": null,
"venue": "Proceedings of Neural Information Processing Systems 26",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Translating embeddings for modeling multi- relational data. In Proceedings of Neural Informa- tion Processing Systems 26. pages 2787-2795.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "All at Sea in Semantic Space: Churchland on Meaning Similarity",
"authors": [
{
"first": "Jerry",
"middle": [],
"last": "Fodor",
"suffix": ""
},
{
"first": "Ernie",
"middle": [],
"last": "Lepore",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Philosophy",
"volume": "96",
"issue": "8",
"pages": "381--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry Fodor and Ernie Lepore. 1999. All at Sea in Se- mantic Space: Churchland on Meaning Similarity. Journal of Philosophy 96(8):381-403.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A distributional semantics approach for selective reasoning on commonsense graph knowledge bases",
"authors": [
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Joao",
"middle": [
"Carlos"
],
"last": "Pereira Da Silva",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Curry",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "21--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Freitas, Joao Carlos Pereira da Silva, Ed- ward Curry, and Paul Buitelaar. 2014. A distribu- tional semantics approach for selective reasoning on commonsense graph knowledge bases. In Nat- ural Language Processing and Information Systems, Springer, pages 21-32.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distributional vectors encode referential attributes",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Pad\u00f3. 2015. Distributional vectors encode referential attributes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing. Lisbon, Portugal, pages 12-21.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Traversing knowledge graphs in vector space",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "318--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 318-327.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mr Darcy and Mr Toad, gentlemen: Distributional names and their kinds",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 11th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "151--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot. 2015. Mr Darcy and Mr Toad, gen- tlemen: Distributional names and their kinds. Pro- ceedings of the 11th International Conference on Computational Semantics pages 151-161.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual Reliability and \"Semantic\" Structure of Continuous Word Spaces",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 11th Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "40--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per, Christian Scheible, and Sabine Schulte im Walde. 2015. Multilingual Reliabil- ity and \"Semantic\" Structure of Continuous Word Spaces. In Proceedings of the 11th Conference on Computational Semantics. London, UK, pages 40- 45.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning a compositional semantics for freebase with an open predicate vocabulary",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "257--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom M Mitchell. 2015. Learning a compositional semantics for freebase with an open predicate vocabulary. Transactions of the Association for Computational Linguistics 3:257-270.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics. Baltimore, Maryland, pages 302-308.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Linguistic regularities in sparse and explicit word representa- tions. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. Ann Arbor, Michigan, pages 171-180.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Probabilistic modeling of joint-context in distributional similarity",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "181--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Ido Dagan, Jacob Goldberger, Idan Szpektor, and Deniz Yuret. 2014. Probabilistic mod- eling of joint-context in distributional similarity. In Proceedings of the Eighteenth Conference on Com- putational Natural Language Learning. Ann Arbor, Michigan, pages 181-190.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems. Lake Tahoe",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems. Lake Tahoe, NV, pages 3111-3119.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ferring missing entity type instances for knowledge base completion: New dataset and methods",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "515--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan and Ming-Wei Chang. 2015. In- ferring missing entity type instances for knowledge base completion: New dataset and methods. In Pro- ceedings of the North American Chapter of the Asso- ciation for Computational Linguistics. Denver, CO, pages 515-525.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deepwalk: Online learning of social representations",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "701--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social represen- tations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, New York City, NY, pages 701-710.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Looking for hyponyms in vector space",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Ted Briscoe. 2014. Looking for hy- ponyms in vector space. In Proceedings of the Eigh- teenth Conference on Computational Natural Lan- guage Learning. Ann Arbor, Michigan, pages 68- 77.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1990,
"venue": "Artificial Intelligence",
"volume": "46",
"issue": "1-2",
"pages": "159--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Smolensky. 1990. Tensor product variable bind- ing and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46(1- 2):159-216.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Advances in Neural Information Processing Systems. Lake Tahoe, CA, pages 926-934.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Representing text for joint embedding of text and knowledge bases",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Pallavi",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Represent- ing text for joint embedding of text and knowledge bases. In Proceedings of EMNLP. Lisbon, Portugal, pages 1499-1509.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adadelta: An adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. Adadelta: An adaptive learn- ing rate method. In CoRR, abs/1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Scatterplot: MRR vs. number of relata per target (above: animal, below: country)"
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "Test set statistics and results. #R: relations; #Ts+Ra: unique targets and relata; BL/LM/NLM: Baseline, linear and nonlinear model (macro-average MRR); %R\u2276x: percent of relations with MRR \u2276x;",
"content": "<table/>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "Example predictions for two country relations (correct answer in boldface)",
"content": "<table/>",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "The three most easy and most difficult relations for the country domain havior of numeric attributes of entities",
"content": "<table/>",
"html": null
}
}
}
}