{
"paper_id": "D15-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:26:29.339584Z"
},
"title": "Aligning Knowledge and Text Embeddings by Entity Descriptions",
"authors": [
{
"first": "Huaping",
"middle": [],
"last": "Zhong",
"suffix": "",
"affiliation": {},
"email": "zhonghp@mail2"
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "wangzh56@mail2"
},
{
"first": "Hai",
"middle": [],
"last": "Wan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study the problem of jointly embedding a knowledge base and a text corpus. The key issue is the alignment model making sure the vectors of entities, relations and words are in the same space. Wang et al. (2014a) rely on Wikipedia anchors, making the applicable scope quite limited. In this paper we propose a new alignment model based on text descriptions of entities, without dependency on anchors. We require the embedding vector of an entity not only to fit the structured constraints in KBs but also to be equal to the embedding vector computed from the text description. Extensive experiments show that, the proposed approach consistently performs comparably or even better than the method of Wang et al. (2014a), which is encouraging as we do not use any anchor information.",
"pdf_parse": {
"paper_id": "D15-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "We study the problem of jointly embedding a knowledge base and a text corpus. The key issue is the alignment model making sure the vectors of entities, relations and words are in the same space. Wang et al. (2014a) rely on Wikipedia anchors, making the applicable scope quite limited. In this paper we propose a new alignment model based on text descriptions of entities, without dependency on anchors. We require the embedding vector of an entity not only to fit the structured constraints in KBs but also to be equal to the embedding vector computed from the text description. Extensive experiments show that, the proposed approach consistently performs comparably or even better than the method of Wang et al. (2014a), which is encouraging as we do not use any anchor information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge base embedding has attracted surging interest recently. The aim is to learn continuous vector representations (embeddings) for entities and relations of a structured knowledge base (KB) such as Freebase. Typically it optimizes a global objective function over all the facts in the KB and hence the embedding vector of an entity / relation is expected to encode global information in the KB. It is capable of reasoning missing facts in a KB and helping facts extraction (Bordes et al., 2011; Bordes et al., 2012; Socher et al., 2013; Chang et al., 2013; Wang et al., 2014b; Lin et al., 2015) .",
"cite_spans": [
{
"start": 479,
"end": 500,
"text": "(Bordes et al., 2011;",
"ref_id": "BIBREF0"
},
{
"start": 501,
"end": 521,
"text": "Bordes et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 522,
"end": 542,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 543,
"end": 562,
"text": "Chang et al., 2013;",
"ref_id": "BIBREF3"
},
{
"start": 563,
"end": 582,
"text": "Wang et al., 2014b;",
"ref_id": "BIBREF12"
},
{
"start": 583,
"end": 600,
"text": "Lin et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although seeming encouraging, the approaches in the aforementioned literature suffer from two common issues: (1) Embeddings are exclusive to entities/relations within KBs. Computation between KBs and text cannot be handled, which are prevalent in practice. For example, in fact extraction, a candidate value may be just a phrase in text. (2) KB sparsity. The above approaches are only based on structured facts of KBs, and thus cannot work well on entities with few facts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An important milestone, the approach of Wang et al. (2014a) solves issue (1) by jointly embedding entities, relations, and words into the same vector space and hence is able to deal with words/phrases beyond entities in KBs. The key component is the so-called alignment model, which makes sure the embeddings of entities, relations, and words are in the same space. Two alignment models are introduced there: one uses entity names and another uses Wikipedia anchors. However, both of them have drawbacks. As reported in the paper, using entity names severely pollutes the embeddings of words. Thus it is not recommended in practice. Using Wikipedia anchors completely relies on the special data source and hence the approach cannot be applied to other customer data.",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "Wang et al. (2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To fully address the two issues, this paper proposes a new alignment method, aligning by entity descriptions. We only assume some entities in KBs have text descriptions, which almost always holds in practice. We require the embedding of an entity not only fits the structured constraints in KBs but also equals the vector computed from the text description. Meanwhile, if an entity has few facts, the description will provide information for embedding, thus the issue of KB sparsity is also well handled. We conduct extensive experiments on the tasks of triplet classification, link prediction, relational fact extraction, and analogical reasoning to compare with the previous approach (Wang et al., 2014a) . Results show that our approach consistently achieves better or comparable performance.",
"cite_spans": [
{
"start": 686,
"end": 706,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TransE This is a representative knowledge embedding model proposed by . For a fact (h, r, t) in KBs, where h is the head entity, r is the relation, and t is the tail entity, TransE models the relation r as a translation vector r connecting the embeddings h and t of the two entities, i.e., h + r is close to t. The model is simple, effective and efficient. Most knowledge embedding models thereafter including this paper are variants of this model (Wang et al., 2014b; Wang et al., 2014a; Lin et al., 2015) .",
"cite_spans": [
{
"start": 448,
"end": 468,
"text": "(Wang et al., 2014b;",
"ref_id": "BIBREF12"
},
{
"start": 469,
"end": 488,
"text": "Wang et al., 2014a;",
"ref_id": "BIBREF11"
},
{
"start": 489,
"end": 506,
"text": "Lin et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Skip-gram This is an efficient word embedding method proposed by Mikolov et al. (2013a) , which learns word embeddings from word concurrencies in text windows. Without any supervision, it amazingly recovers the semantic relations between words in a vector space such as 'King' \u2212 'Queen' \u2248 'Man' \u2212 'Women'. However, as it is unsupervised, it cannot tell the exact relation between two words.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Wang et al. (2014a) combines knowledge embedding and word embedding in a joint framework so that the entities/relations and words are in the same vector space and hence operators like inner product (similarity) between them are meaningful. This brings convenience to tasks requiring computation between knowledge bases and text. Meanwhile, jointly embedding utilizes information from both structured KBs and unstructured text and hence the knowledge embedding and word embedding can be enhanced by each other. Their model is composed of three components: a knowledge model to embed entities and relations, a text model to embed words, and an alignment model to make sure entities/relations and words are in the same vector space. The knowledge model and text model are variants of TransE and Skip-gram respectively. The key component is the alignment model. They introduced two: alignment by entity names and alignment by Wikipedia anchors. (1) Alignment by Entity Names makes a replicate of KB facts but replaces each entity ID with its name string, i.e., the vector of a name phrase is encouraged to equal to the vector of the entity (identified by ID). It has problems with ambiguous entity names and observed polluting word embeddings thus it is not recommended by the authors. (2) Alignment by Wikipedia Anchors replaces the surface phrase v of a Wikipedia anchor with its corresponding Freebase entity e v and defines the likelihood",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge and Text Jointly Embedding",
"sec_num": null
},
{
"text": "L AA = (w,v)\u2208C,v\u2208A log Pr(w|e v ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge and Text Jointly Embedding",
"sec_num": null
},
{
"text": "where C is the collection of observed word and context pairs and A refers to the set of all anchors in Wikipedia. Pr(w|e v ) is the probability of the anchor predicting its context word, which takes a form similar to Skip-gram for word embedding. Alignment by anchors works well in both improving knowledge embedding and word embeddings. However, it completely relies on the special data source of Wikipedia anchors and cannot be applied to other general data settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge and Text Jointly Embedding",
"sec_num": null
},
{
"text": "We first describe the settings and notations. Given a knowledge base, i.e., a set of facts (h, r, t), where h, t \u2208 E (the set of entities) and r \u2208 R (the set of relations). Some entities have text descriptions. The description of entity e is denoted as D e . w i,n is the n th word in the description of e i . N i is the length (in words) of the description of e i . We try to learn embeddings e i , r j and w l for each entity e i , relation r j and word w l respectively. The vocabulary of words is V. The union vocabulary of entities and words together is I = E \u222a V. In this paper \"word(s)\" refers to \"word(s)/phrase(s)\". We follow the jointly embedding framework of (Wang et al., 2014a), i.e., learning optimal embeddings by minimizing the following loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "L ({e i }, {r j }, {w l }) = L K + L T + L A , (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "where L K , L T and L A are the component loss functions of the knowledge model, text model and alignment model respectively. Our focus is on a new alignment model L A while the knowledge model L K and text model L T are the same as the counterparts in (Wang et al., 2014a) . However, to make the content self-contained, we still need to briefly explain L K and L T .",
"cite_spans": [
{
"start": 253,
"end": 273,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "Knowledge Model Describes the plausibility of a triplet (h, r, t) by defining",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(h|r, t) = exp{z(h, r, t)} h \u2208I exp{z(h, r, t)} ,",
"eq_num": "(3)"
}
],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "where z(h, r, t) = b \u2212 0.5 \u2022 h + r \u2212 t 2 2 , b = 7 as suggested by Wang et al. (2014a) . Pr(r|h, t) and",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "Wang et al. (2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "Pr(t|h, r) are defined in the same way. The loss function of knowledge model is then defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "L_K = \u2212 \u2211_{(h,r,t)} [log Pr(h|r, t) + log Pr(t|h, r) + log Pr(r|h, t)] (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
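To make the knowledge-model scoring concrete, below is a minimal NumPy sketch of z(h, r, t) = b − 0.5 · ‖h + r − t‖₂² and the softmax form of Pr(h|r, t). The toy entities, the dimensionality, and the function names are assumptions of this illustration, not the authors' released implementation.

```python
import numpy as np

B = 7.0  # bias b = 7, the value suggested by Wang et al. (2014a)

def score(h, r, t):
    """z(h, r, t) = b - 0.5 * ||h + r - t||_2^2 (TransE-style translation score)."""
    return B - 0.5 * np.sum((h + r - t) ** 2)

def prob_head(h_idx, r, t, entity_embs):
    """Pr(h | r, t): softmax of the score over all candidate head entities."""
    scores = np.array([score(e, r, t) for e in entity_embs])
    scores -= scores.max()                      # subtract the max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[h_idx]

# Toy example: 5 random "entities" and one "relation" in a 10-dimensional space.
rng = np.random.default_rng(0)
entity_embs = rng.normal(size=(5, 10))
relation = rng.normal(size=10)
print(prob_head(0, relation, entity_embs[2], entity_embs))
```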
{
"text": "Text Model Defines the probability of a pair of words w and v co-occurring in a text window:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(w|v) = exp{z(w, v)} w\u2208V exp{z(w, v)}",
"eq_num": "(5)"
}
],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "where z(w, v) = b \u2212 0.5 \u2022 \u2016w \u2212 v\u2016\u2082\u00b2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "Then the loss function of text model is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L T = \u2212 (w,v) log Pr(w|v)",
"eq_num": "(6)"
}
],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "Alignment Model This part is different from Wang et al. (2014a) . For each word w in the description of entity e, we define Pr(w|e), the conditional probability of predicting w given e:",
"cite_spans": [
{
"start": 44,
"end": 63,
"text": "Wang et al. (2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(w|e) = exp{z(e, w)} w\u2208V exp{z(e,w)} ,",
"eq_num": "(7)"
}
],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "where z(e, w) = b \u2212 0.5 \u2022 e \u2212 w 2 2 . Notice that e is the same vector of entity e appearing in the knowledge model of Eq. 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "We also define Pr(e|w) in the same way by revising the normalization term",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(e|w) = exp{z(e, w)} \u1ebd\u2208E exp{z(\u1ebd, w)}",
"eq_num": "(8)"
}
],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "Then the loss function of alignment model is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
{
"text": "L_A = \u2212 \u2211_{e\u2208E} \u2211_{w\u2208D_e} [log Pr(w|e) + log Pr(e|w)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
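As a concrete illustration of the alignment loss before any sampling approximation, here is a small NumPy sketch that evaluates L_A with the full normalizations over V and E. The dictionary-based description lookup and the variable names are assumptions made only for this example.

```python
import numpy as np

def z(x, y, b=7.0):
    """Shared scoring function z(x, y) = b - 0.5 * ||x - y||_2^2."""
    return b - 0.5 * np.sum((x - y) ** 2)

def log_prob(target_idx, query_vec, candidate_vecs):
    """log Pr(candidate[target_idx] | query) under a softmax over candidate_vecs."""
    scores = np.array([z(c, query_vec) for c in candidate_vecs])
    m = scores.max()
    return scores[target_idx] - (m + np.log(np.exp(scores - m).sum()))

def alignment_loss(descriptions, entity_embs, word_embs):
    """L_A = -sum_e sum_{w in D_e} [log Pr(w|e) + log Pr(e|w)]."""
    loss = 0.0
    for e_idx, word_ids in descriptions.items():
        for w_idx in word_ids:
            loss -= log_prob(w_idx, entity_embs[e_idx], word_embs)   # log Pr(w|e), normalized over V
            loss -= log_prob(e_idx, word_embs[w_idx], entity_embs)   # log Pr(e|w), normalized over E
    return loss

# Toy data: 3 entities, 6 description words, 10-dim embeddings.
rng = np.random.default_rng(1)
entity_embs, word_embs = rng.normal(size=(3, 10)), rng.normal(size=(6, 10))
descriptions = {0: [1, 4], 1: [0, 2, 5], 2: [3]}  # entity id -> word ids in its description
print(alignment_loss(descriptions, entity_embs, word_embs))
```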
{
"text": "Training We use stochastic gradient descent (S-GD) to minimize the overall loss of Eq. (2), which sequentially updates the embeddings. Negative sampling is used to calculate the normalization items over large vocabularies. We implement a multi-threading version to deal with large data sets, where memory is shared and lock-free.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Entity Descriptions",
"sec_num": "3"
},
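Below is a compact sketch of one negative-sampling SGD update for the Pr(w|e) term, using the usual word2vec-style logistic approximation. The learning rate, the sampling of negatives, and the in-place updates are illustrative assumptions; this is not the authors' exact multi-threaded, lock-free implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def z(x, y, b=7.0):
    return b - 0.5 * np.sum((x - y) ** 2)

def sgd_step_alignment(e_vec, w_vec, neg_word_vecs, lr=0.025):
    """One negative-sampling SGD step on the Pr(w|e) term of the alignment loss.

    The softmax normalization is replaced by
        -log sigmoid(z(e, w)) - sum_neg log sigmoid(-z(e, w_neg)),
    and the embeddings are updated in place (mirroring shared-memory training).
    Since z(e, w) = b - 0.5 * ||e - w||^2, dz/de = -(e - w) and dz/dw = (e - w).
    """
    diff = e_vec - w_vec
    g = 1.0 - sigmoid(z(e_vec, w_vec))   # gradient magnitude for the positive pair
    e_vec -= lr * g * diff               # move e towards its description word w
    w_vec += lr * g * diff               # move w towards e

    for n_vec in neg_word_vecs:          # push sampled negative words away from e
        diff = e_vec - n_vec
        g = sigmoid(z(e_vec, n_vec))
        e_vec += lr * g * diff
        n_vec -= lr * g * diff

# Toy update: one entity vector, one description word, and 5 sampled negatives.
rng = np.random.default_rng(2)
e, w = rng.normal(size=10), rng.normal(size=10)
negatives = [rng.normal(size=10) for _ in range(5)]
sgd_step_alignment(e, w, negatives)
```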
{
"text": "We conduct experiments on the following tasks: link prediction , triplet classification (Socher et al., 2013) , relational fact extraction , and analogical reasoning (Mikolov et al., 2013b) . The last one evaluates quality of word embeddings. We try to study whether the proposed alignment model, without using any anchor information, is able to achieve comparable or better performance than alignment by anchors. As to the methods, \"Separately\" denotes the method of separately embedding knowledge bases and text. \"Jointly(anchor)\" and \"Jointly(name)\" denote the jointly embedding methods based on Alignment by Wikipedia Anchors and Alignment by Entity Names in (Wang et al., 2014a) respectively. \"Jointly(desp)\" is the joint embedding method based on alignment by entity descriptions.",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 166,
"end": 189,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF6"
},
{
"start": 663,
"end": 683,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Data For link prediction, FB15K from is used as the knowledge base. For triplet classification, a large dataset provided by (Wang et al., 2014a) is used as the knowledge base. Both sets are subsets of Freebase. For all tasks, Wikipedia articles are used as the text corpus. As many Wikipedia articles can be mapped to Freebase entities, we regard a Wikipedia article as the description for the corresponding entity in Freebase. Following the settings in (Wang et al., 2014a) , we apply the same preprocessing steps, including sentence segmentation, tokenization, and named entity recognition. We combine the consecutive tokens covered by an anchor or identically tagged as \"Location/Person/Organization\" and regard them as phrases.",
"cite_spans": [
{
"start": 124,
"end": 144,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
},
{
"start": 454,
"end": 474,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Link Prediction This task aims to complete a fact (h, r, t) in absence of h or t, simply based on h + r \u2212 t . We follow the same protocol in . We directly copy the results of the baseline (TransE) from and implement \"Jointly(anchor)\". The results are in Table 1 . \"MEAN\" is the average rank of the true absent entity. \"HITS@10\" is accuracy of the top (Mintz et al., 2009) as base extractor (b) MIML (Surdeanu et al., 2012) as base extractor.",
"cite_spans": [
{
"start": 351,
"end": 371,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 399,
"end": 422,
"text": "(Surdeanu et al., 2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "10 predictions containing the true entity. Lower \"MEAN\" and higher \"HITS@10\" is better. \"Raw\" and \"Filtered\" are two settings on processing candidates .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
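For clarity, a minimal sketch of how MEAN and HITS@10 can be computed from ranked candidate lists. The function name and the list-based interface are assumptions of this example; in the "Filtered" setting the other known true entities would additionally be removed from each candidate list before ranking.

```python
def mean_rank_and_hits(ranked_candidates, gold_entities, k=10):
    """Compute MEAN (average rank of the true entity) and HITS@k (percentage of
    test cases whose true entity appears in the top k), given, for each test
    triplet, the candidate entities sorted by ||h + r - t|| (best first)."""
    ranks = [cands.index(gold) + 1 for cands, gold in zip(ranked_candidates, gold_entities)]
    mean_rank = sum(ranks) / len(ranks)
    hits_at_k = 100.0 * sum(r <= k for r in ranks) / len(ranks)
    return mean_rank, hits_at_k

# Toy check: the true entity is ranked 1st in the first case and 12th in the second.
print(mean_rank_and_hits([[3, 1, 2], list(range(20))], [3, 11]))  # -> (6.5, 50.0)
```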
{
"text": "We train \"Jointly(anchor)\" and \"Jointly(desp)\" with the embedding dimension k among {50, 100, 150}, the learning rate \u03b1 in {0.01, 0.025}, the number of negative examples per positive example c in {5, 10}, the max skiprange s in {5, 10} and traverse the text corpus with only 1 epoch. The best configurations of \"Jointly(anchor)\" and \"Jointly(desp)\" are exactly the same: k = 100, \u03b1 = 0.025, c = 10, s = 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "From the results, we observe that: (1) Both jointly embedding methods are much better than the baseline TransE, which demonstrates that external textual resources make entity embeddings become more discriminative. Intuitively, \"Jointly(anchor)\" indicates \"how to use an entity in text\", while \"Jointly(desp)\" shows \"what is the definition/meaning of an entity\". Both are helpful to distinguish an entity from others. (2) Under the setting of \"Raw\", \"Jointly(desp)\" and \"Jointly(anchor)\" are comparable. In other settings \"Jointly(desp)\" wins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Triplet Classification This is a binary classification task, predicting whether a candidate triplet (h, r, t) is a correct fact or not. It is used in (Socher et al., 2013; Wang et al., 2014b; Wang et al., 2014a) . We follow the same protocol in (Wang et al., 2014a) .",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 172,
"end": 191,
"text": "Wang et al., 2014b;",
"ref_id": "BIBREF12"
},
{
"start": 192,
"end": 211,
"text": "Wang et al., 2014a)",
"ref_id": "BIBREF11"
},
{
"start": 245,
"end": 265,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
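The decision rule is not restated here; a common protocol for this task (e.g., Socher et al., 2013) learns a per-relation threshold on the plausibility score from a validation set, as in the following illustrative sketch. All function names and the toy data are assumptions, not the authors' code.

```python
import numpy as np

def fit_thresholds(scores, labels, relations):
    """Pick, per relation, the score threshold that maximizes validation accuracy.

    scores[i]    -- plausibility score z(h, r, t) of validation triplet i
    labels[i]    -- 1 for a correct fact, 0 for a corrupted one
    relations[i] -- relation id of triplet i
    """
    thresholds = {}
    for rel in set(relations):
        idx = [i for i, r in enumerate(relations) if r == rel]
        best_acc, best_thr = -1.0, 0.0
        for thr in sorted(scores[i] for i in idx):
            acc = np.mean([(scores[i] >= thr) == bool(labels[i]) for i in idx])
            if acc > best_acc:
                best_acc, best_thr = acc, thr
        thresholds[rel] = best_thr
    return thresholds

def classify(score, rel, thresholds):
    """A candidate triplet is accepted when its score reaches its relation's threshold."""
    return score >= thresholds[rel]

# Toy usage: two relations, four validation triplets.
thr = fit_thresholds([3.0, -1.0, 5.0, 0.5], [1, 0, 1, 0], ["r1", "r1", "r2", "r2"])
print(classify(2.0, "r1", thr), classify(6.0, "r2", thr))
```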
{
"text": "We train their models via our own implemen-tation on our dataset. The results are in Table 2 . \"e-e\" means both sides of a triplet (h, r, t) are entities in KB, \"e-w\" means the tail side is a word out of KB entity vocabulary, similarly for \"w-e\" and \"w-w\". The best configurations of the models are: k = 150, \u03b1 = 0.025, c = 10, s = 5 and traversing the text corpus with 6 epochs. The results reveal that: (1) Jointly embedding is indeed effective. Both jointly embedding methods can well handle the cases of \"e-w\", \"w-e\" and \"ww\", which means the vector computation between entities/relations and words are really meaningful. Meanwhile, even the case of \"e-e\" is also improved. (2) Our method, \"Jointly(desp)\", outperforms \"Jointly(anchor)\" on all types of triplets. We believe that the good performance of \"Jointly(desp)\" is due to the appropriate design of the alignment mechanism. Using entity's description information is a more straightforward and effective way to align entity embeddings and word embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "This task is to extract facts (h, r, t) from plain text. show that combing scores from TransE and some text side base extractor achieved much better precision-recall curve compared to the base extractor. Wang et al. (2014a) confirm this observation and show that jointly embedding brings further encouraging improvement over TransE. In this experiment, we follow the same settings as (Wang et al., 2014a) to investigate the performance of our new alignment model. We use the same public dataset NYT+FB, released by Riedel et al. (2010) and used in and (Wang et al., 2014a) . We use Mintz (Mintz et al., 2009) and MIML (Surdeanu et al., 2012) as our base extractors. In order to combine the score of a base extractor and the score from embeddings, we only reserve the testing triplets whose entitites and relations can be mapped to the embeddings learned from the triplet classification experiment. Since both Mintz and MIML are probabilistic models, we use the same method in (Wang et al., 2014a) to linearly combine the scores.",
"cite_spans": [
{
"start": 204,
"end": 223,
"text": "Wang et al. (2014a)",
"ref_id": "BIBREF11"
},
{
"start": 384,
"end": 404,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
},
{
"start": 515,
"end": 535,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 552,
"end": 572,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
},
{
"start": 588,
"end": 608,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 618,
"end": 641,
"text": "(Surdeanu et al., 2012)",
"ref_id": "BIBREF10"
},
{
"start": 976,
"end": 996,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Fact Extraction",
"sec_num": null
},
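A minimal sketch of the kind of linear score combination described above. The equal default weight, the assumption that both scores are rescaled to [0, 1], and the tuple layout are illustrative choices, not necessarily the exact formula of Wang et al. (2014a).

```python
def combined_score(extractor_prob, embedding_score, weight=0.5):
    """Linearly interpolate the base extractor's probability for a candidate fact
    (h, r, t) with a score derived from the learned embeddings. Both inputs are
    assumed to be rescaled to [0, 1]; `weight` is a tunable mixing coefficient."""
    return weight * extractor_prob + (1.0 - weight) * embedding_score

# Candidate facts are then ranked by the combined score to draw the precision-recall curve.
candidates = [("h1", "r1", "t1", 0.9, 0.2), ("h2", "r1", "t2", 0.4, 0.8)]
ranked = sorted(candidates, key=lambda c: combined_score(c[3], c[4]), reverse=True)
print([c[:3] for c in ranked])
```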
{
"text": "The precision-recall curves are plot in Fig. (1) . On both base extractors, the jointly embedding methods outperform separate embedding. Moreover, \"Jointly(desp)\" is slightly better than \"Jointly(anchor)\", which is in accordance with the results from the link prediction experiment and the triplet classification experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Fig. (1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Relational Fact Extraction",
"sec_num": null
},
{
"text": "Analogical Reasoning This task evaluates the quality of word embeddings (Mikolov et al., 2013b) . We use the original dataset released by (Mikolov et al., 2013b) and follow the same evaluation protocol of (Wang et al., 2014a) . For a true analogical pair like (\"France\", \"Paris\") and (\"China\", \"Beijing\"), we hide \"Beijing\" and predict it by selecting the word from the vocabulary whose vector has highest similarity with the vector of \"China\" + \"Paris\" -\"France\". We use the word embeddings learned for the triplet classification experiment and conduct the analogical reasoning experiment for \"Skip-gram\", \"Jointly(anchor)\", \"Jointly(name)\" and \"Jointly(desp)\".",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF6"
},
{
"start": 138,
"end": 161,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF6"
},
{
"start": 205,
"end": 225,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Fact Extraction",
"sec_num": null
},
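A small sketch of the analogy prediction just described. Cosine similarity and the exclusion of the three query words are common conventions assumed here; the toy vectors are only for illustration.

```python
import numpy as np

def predict_analogy(a, b, c, word_vecs):
    """Return the word whose vector is most similar to vec(a) + vec(b) - vec(c),
    e.g. a='China', b='Paris', c='France' should recover 'Beijing'.
    Cosine similarity is assumed; the three query words are excluded."""
    query = word_vecs[a] + word_vecs[b] - word_vecs[c]
    query = query / np.linalg.norm(query)
    best_word, best_sim = None, -np.inf
    for word, vec in word_vecs.items():
        if word in (a, b, c):
            continue
        sim = float(np.dot(query, vec / np.linalg.norm(vec)))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# Toy 2-d vectors in which country -> capital is roughly a constant offset.
word_vecs = {"France": np.array([1.0, 0.0]), "Paris": np.array([1.0, 1.0]),
             "China": np.array([3.0, 0.0]), "Beijing": np.array([3.0, 1.0])}
print(predict_analogy("China", "Paris", "France", word_vecs))  # -> Beijing
```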
{
"text": "Results are presented in Table 3 . \"Acc\" is the accuracy of the predicted word. \"HITS@10\" is the accuracy of the top 10 candidates containing the ground truth. The evaluation analogical pairs are organized into two groups, \"Words\" and \"Phrases\", by whether an analogical pair contains phrases (i.e., multiple words). From the table we observe that: (1) Both \"Jointly(anchor)\" and \"Jointly(desp)\" outperform \"Skip-gram\". (2) \"Joint-ly(desp)\" achieves the best results, especially for the case of \"Phrases\". Both \"Jointly(anchor)\" and \"Skip-gram\" only consider the context of words, while \"Jointly(desp)\" not only consider the context but also use the whole document to disambiguate words. Intuitively, the whole document is also a valuable resource to disambiguate words. 3We further verify that \"Jointly(name)\", i.e., using entity names for alignment, indeed pollutes word embeddings, which is consistent with the reports in (Wang et al., 2014a) .",
"cite_spans": [
{
"start": 925,
"end": 945,
"text": "(Wang et al., 2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Relational Fact Extraction",
"sec_num": null
},
{
"text": "The above four experiments are consistent in results: without using any anchor information, alignment by entity description is able to achieve better or comparable performance, compared to alignment by Wikipedia anchors proposed by Wang et al. (2014a) .",
"cite_spans": [
{
"start": 232,
"end": 251,
"text": "Wang et al. (2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Fact Extraction",
"sec_num": null
},
{
"text": "We propose a new alignment model based on entity descriptions for jointly embedding a knowledge base and a text corpus. Compared to the method of alignment using Wikipedia anchors Wang et al. (2014a) , our method has no dependency on special data sources of anchors and hence can be applied to any knowledge bases with text descriptions for entities. Extensive experiments on four prevalent tasks to evaluate the quality of knowledge and word embeddings produce very consistent results: our alignment model achieves better or comparable performance.",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "Wang et al. (2014a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning structured embeddings of knowledge bases",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embed- dings of knowledge bases. In Proceedings of the 25th AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A semantic matching energy function for learning with multi-relational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2012. A semantic matching en- ergy function for learning with multi-relational data. Machine Learning, pages 1-27.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in Neural Information Processing Systems, pages 2787-2795.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-relational latent semantic analysis",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1602--1612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, Wen-tau Yih, and Christopher Meek. 2013. Multi-relational latent semantic analysis. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1602-1612, Seattle, Washington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning entity and relation embeddings for knowledge graph completion",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2181--2187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Zheng Chen. 2015. Learning entity and relation em- beddings for knowledge graph completion. In Pro- ceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 2181-2187.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word rep- resentations in vector space. arXiv preprint arX- iv:1301.3781.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCal- lum. 2010. Modeling relations and their mention- s without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Ad- vances in Neural Information Processing Systems, pages 926-934.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 455- 465. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Knowledge graph and text jointly embedding",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1591--1601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014a. Knowledge graph and text jointly em- bedding. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguis- tics, pages 1591-1601.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Knowledge graph embedding by translating on hyperplanes",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1112--1119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014b. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intel- ligence, pages 1112-1119.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Connecting language and knowledge bases with embedding models for relation extraction",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1307.7973"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for re- lation extraction. arXiv preprint arXiv:1307.7973.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Precision-recall curves for relation extraction. (a) Mintz"
},
"TABREF0": {
"type_str": "table",
"text": "Link prediction results.",
"html": null,
"content": "<table><tr><td>Metric</td><td colspan=\"4\">MEAN Raw Filtered Raw Filtered HITS@10</td></tr><tr><td>TransE</td><td>243</td><td>125</td><td>34.9</td><td>47.1</td></tr><tr><td>Jointly(anchor)</td><td>166</td><td>47</td><td>49.9</td><td>72.0</td></tr><tr><td>Jointly(desp)</td><td>167</td><td>39</td><td>51.7</td><td>77.3</td></tr></table>",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Triplet classification results.",
"html": null,
"content": "<table><tr><td>Type</td><td colspan=\"3\">e -e w -e e -w w -w</td><td>all</td></tr><tr><td>Separately</td><td>94.0 51.7</td><td>51.0</td><td>69.0</td><td>73.6</td></tr><tr><td colspan=\"2\">Jointly(anchor) 95.2 65.3</td><td>65.1</td><td>76.2</td><td>79.9</td></tr><tr><td>Jointly(desp)</td><td>96.1 66.7</td><td>66.1</td><td>76.4</td><td>80.9</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Analogical reasoning results",
"html": null,
"content": "<table><tr><td>Metric</td><td colspan=\"4\">Words Acc. Hits@10 Acc. Hits@10 Phrases</td></tr><tr><td>Skip-gram</td><td>67.4</td><td>86.7</td><td>22.0</td><td>63.6</td></tr><tr><td colspan=\"2\">Jointly(anchor) 69.4</td><td>87.7</td><td>26.2</td><td>68.1</td></tr><tr><td>Jointly(name)</td><td>44.5</td><td>69.7</td><td>11.5</td><td>46.0</td></tr><tr><td>Jointly(desp)</td><td>69.3</td><td>88.3</td><td>49.0</td><td>86.5</td></tr></table>",
"num": null
}
}
}
}