{
"paper_id": "S17-2008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:28:56.024881Z"
},
"title": "ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Luminoso Technologies, Inc",
"location": {
"addrLine": "675 Massachusetts Avenue Cambridge",
"postCode": "02139",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "Joanna",
"middle": [],
"last": "Lowry-Duda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Luminoso Technologies, Inc",
"location": {
"addrLine": "675 Massachusetts Avenue Cambridge",
"postCode": "02139",
"region": "MA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes Luminoso's participation in SemEval 2017 Task 2, \"Multilingual and Cross-lingual Semantic Word Similarity\", with a system based on Con-ceptNet. ConceptNet is an open, multilingual knowledge graph that focuses on general knowledge that relates the meanings of words and phrases. Our submission to SemEval was an update of previous work that builds high-quality, multilingual word embeddings from a combination of Con-ceptNet and distributional semantics. Our system took first place in both subtasks. It ranked first in 4 out of 5 of the separate languages, and also ranked first in all 10 of the cross-lingual language pairs.",
"pdf_parse": {
"paper_id": "S17-2008",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes Luminoso's participation in SemEval 2017 Task 2, \"Multilingual and Cross-lingual Semantic Word Similarity\", with a system based on Con-ceptNet. ConceptNet is an open, multilingual knowledge graph that focuses on general knowledge that relates the meanings of words and phrases. Our submission to SemEval was an update of previous work that builds high-quality, multilingual word embeddings from a combination of Con-ceptNet and distributional semantics. Our system took first place in both subtasks. It ranked first in 4 out of 5 of the separate languages, and also ranked first in all 10 of the cross-lingual language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "ConceptNet 5 (Speer and Havasi, 2013 ) is a multilingual, domain-general knowledge graph that connects words and phrases of natural language (terms) with labeled, weighted edges. Compared to other knowledge graphs, it avoids trying to be a large gazetteer of named entities. It aims most of all to cover the frequently-used words and phrases of every language, and to represent generally-known relationships between the meanings of these terms.",
"cite_spans": [
{
"start": 13,
"end": 36,
"text": "(Speer and Havasi, 2013",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper describing ConceptNet 5.5 (Speer et al., 2017) showed that it could be used in combination with sources of distributional semantics, particularly the word2vec Google News skip-gram embeddings (Mikolov et al., 2013) and GloVe 1.2 (Pennington et al., 2014) , to produce new embeddings with state-of-the-art performance across many word-relatedness evaluations. The three data sources are combined using an extension of the technique known as retrofitting (Faruqui et al., 2015) . The result is a system of pre-computed word embeddings we call \"ConceptNet Numberbatch\".",
"cite_spans": [
{
"start": 36,
"end": 56,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 202,
"end": 224,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 239,
"end": 264,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 463,
"end": 485,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system we submitted to SemEval-2017 Task 2, \"Multilingual and Cross-lingual Semantic Word Similarity\", is an update of that system, coinciding with the release of version 5.5.3 of Con-ceptNet 1 . We added multiple fallback methods for assigning vectors to out-of-vocabulary words. We also experimented with, but did not submit, systems that used additional sources of word embeddings in the five languages of this SemEval task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This task (Camacho-Collados et al., 2017) evaluated systems at their ability to rank pairs of words by their semantic similarity or relatedness. The words are in five languages: English, German, Italian, Spanish, and Farsi. Subtask 1 compares pairs of words within each of the five languages; subtask 2 compares pairs of words that are in different languages, for each of the ten pairs of distinct languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system took first place in both subtasks. Detailed results for our system appear in Section 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The way we built our embeddings is based on retrofitting (Faruqui et al., 2015) , and in particular, the elaboration of it we call \"expanded retrofitting\" (Speer et al., 2017) . Retrofitting, as originally described, adjusts the values of existing word embeddings based on a new objective function that also takes a knowledge graph into account. Its output has the same vocabulary as its input. In expanded retrofitting, on the other hand, terms that are only present in the knowledge graph are added to the vocabulary and are also assigned vectors.",
"cite_spans": [
{
"start": 57,
"end": 79,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 155,
"end": 175,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "2"
},
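To make the expanded-retrofitting step concrete, here is a minimal sketch in Python/numpy. It is not the authors' implementation: the simple averaging update, uniform edge weights, and fixed iteration count are our assumptions, standing in for the retrofitting objective of Faruqui et al. (2015) extended to terms that appear only in the knowledge graph.

```python
# Minimal sketch of expanded retrofitting (our simplification, not the released code).
import numpy as np

def expanded_retrofit(embeddings, graph, dim=300, iterations=10):
    """embeddings: dict term -> vector; graph: dict term -> list of neighboring terms."""
    # The output vocabulary is the union of graph terms and embedding terms.
    vocab = set(graph) | set(embeddings)
    vecs = {t: embeddings.get(t, np.zeros(dim)) for t in vocab}
    for _ in range(iterations):
        updated = {}
        for term in vocab:
            neighbors = [vecs[n] for n in graph.get(term, []) if n in vecs]
            avg = np.mean(neighbors, axis=0) if neighbors else np.zeros(dim)
            if term in embeddings:
                # Terms with distributional vectors stay anchored to them.
                updated[term] = (embeddings[term] + avg) / 2
            else:
                # Graph-only terms get vectors purely from their neighbors.
                updated[term] = avg
        vecs = updated
    return vecs
```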
{
"text": "As described in the ConceptNet 5.5 paper (Speer et al., 2017) , we apply expanded retrofitting separately to multiple sources of embeddings (such as pre-trained word2vec and GloVe), then align the results on a unified vocabulary and reduce its dimensionality.",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
{
"text": "First, we make a unified matrix of embeddings, M 1 , as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
{
"text": "\u2022 Take the subgraph of ConceptNet consisting of nodes whose degree is at least 3. Remove edges corresponding to negative relations (such as NotUsedFor and Antonym). Remove phrases with 4 or more words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
{
"text": "\u2022 Standardize the sources of embeddings by case-folding their terms and L 1 -normalizing their columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
{
"text": "\u2022 For each source of embeddings, apply expanded retrofitting over that source with the subgraph of ConceptNet. In each case, this provides vectors for a vocabulary of terms that includes the ConceptNet vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
{
"text": "\u2022 Choose a unified vocabulary (described below), and look up the vectors for each term in this vocabulary in the expanded retrofitting outputs. If a vector is missing from the vocabulary of a retrofitted output, fill in zeroes for those components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
{
"text": "\u2022 Concatenate the outputs of expanded retrofitting over this unified vocabulary to give M 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Multiple Sources of Vectors",
"sec_num": "2.1"
},
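A rough sketch of how these steps could assemble M_1, assuming each retrofitted source is a mapping from term to vector (the function and variable names here are ours, not the paper's):

```python
# Sketch of building M_1 by aligning retrofitted sources on one vocabulary (Section 2.1).
import numpy as np

def build_m1(retrofitted_sources, unified_vocabulary):
    """retrofitted_sources: list of dicts term -> vector, one per input embedding source."""
    blocks = []
    for source in retrofitted_sources:
        dim = len(next(iter(source.values())))
        # Terms missing from this source contribute zero-filled components.
        blocks.append(np.array([source.get(t, np.zeros(dim)) for t in unified_vocabulary]))
    # Rows: terms of the unified vocabulary; columns: concatenated features from all sources.
    return np.hstack(blocks)
```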
{
"text": "Expanded retrofitting produces vectors for all the terms in its knowledge graph and all the terms in the input embeddings. Some terms from outside the ConceptNet graph have useful embeddings, representing knowledge we would like to keep, but using all such terms would be noisy and wasteful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Selection",
"sec_num": "2.2"
},
{
"text": "To select the vocabulary of our term vectors, we used a heuristic that takes advantage of the fact that the pre-computed word2vec and GloVe embeddings we used have their rows (representing terms) sorted by term frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Selection",
"sec_num": "2.2"
},
{
"text": "To find appropriate terms, we take all the terms that appear in the first 500,000 rows of both the word2vec and GloVe inputs, and appear in the first 200,000 rows of at least one of them. We take the union of these with the terms in the ConceptNet subgraph described above. The resulting vocabulary, of 1,884,688 ConceptNet terms plus 99,869 additional terms, is the vocabulary we use in the system we submitted and its variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Selection",
"sec_num": "2.2"
},
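The selection rule can be read as a small set computation. A sketch, with our own argument names, assuming the term lists are ordered by descending frequency as described above:

```python
# Sketch of the vocabulary-selection heuristic in Section 2.2.
def select_vocabulary(word2vec_terms, glove_terms, conceptnet_terms):
    """word2vec_terms and glove_terms are frequency-ordered lists of terms."""
    in_both_500k = set(word2vec_terms[:500000]) & set(glove_terms[:500000])
    in_either_200k = set(word2vec_terms[:200000]) | set(glove_terms[:200000])
    extra_terms = in_both_500k & in_either_200k
    return set(conceptnet_terms) | extra_terms
```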
{
"text": "The concatenated matrix M 1 has k columns representing features that may be redundant with each other. Our next step is to reduce its dimensionality to a smaller number k , which we set to 300, the dimensionality of the largest input matrix. Our goal is to learn a projection from k dimensions to k dimensions that removes the redundancy that comes from concatenating multiple sources of embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "We sample 5% of the rows of M 1 to get M 2 , which we will use to find the projection more efficiently, assuming that its vectors represent approximately the same distribution as M 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "M 2 can be approximated with a truncated SVD:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "M 2 \u2248 U \u03a3 1/2 V T ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "where \u03a3 is truncated to a k \u00d7k diagonal matrix of the k largest singular values, and U and V are correspondingly truncated to have only these k columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "U is a matrix mapping the same vocabulary to a smaller set of features. Because V is orthonormal, U \u03a3 is a rotation and truncation of the original data, where each feature contributes the same amount of variance as it did in the original data. U \u03a3 1/2 is a version that removes some of the variance that came from redundant features, and also is analogous to the decomposition used by Levy et al. (2015) in their SVD process.",
"cite_spans": [
{
"start": 385,
"end": 403,
"text": "Levy et al. (2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "We can solve for the operator that projects M 2 into U \u03a3 1/2 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "U \u03a3 1/2 \u2248 M 2 V \u03a3 \u22121/2 V \u03a3 \u22121/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "is therefore a k \u00d7 k operator that, when applied on the right, projects vectors from our larger space of features to our smaller space of features. It can be applied to any vector in the space of M 1 , not just the ones we sampled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "M 3 = M 1 V \u03a3 \u22121/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
{
"text": "is the projection of the selected vocabulary into k dimensions, which is the matrix of term vectors that we output and evaluate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction",
"sec_num": "2.3"
},
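A numerical sketch of this reduction with numpy, under the assumption that a plain truncated SVD of the sampled rows is sufficient (the authors' code may sample and decompose differently):

```python
# Sketch of the dimensionality reduction in Section 2.3: sample M_1, take a truncated SVD,
# and project every row of M_1 with V Sigma^{-1/2}.
import numpy as np

def reduce_dimensionality(M1, k_prime=300, sample_fraction=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rows = rng.choice(M1.shape[0], size=int(M1.shape[0] * sample_fraction), replace=False)
    M2 = M1[rows]
    U, S, Vt = np.linalg.svd(M2, full_matrices=False)  # M2 ~= U diag(S) Vt
    V = Vt[:k_prime].T                                  # k x k', columns for the k' largest singular values
    projector = V / np.sqrt(S[:k_prime])                # V Sigma^{-1/2}, applied on the right
    return M1 @ projector                               # M_3: the selected vocabulary in k' dimensions
```

Applying the learned projector to all of M_1, rather than decomposing the full matrix, is what makes the 5% sampling step pay off.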
{
"text": "Published evaluations of word embeddings can be inconsistent about what to do with out-of-vocabulary (OOV) words, those words that the system has learned no representation for. Some evaluators, such as Bojanowski et al. (2016) , discard all pairs containing an OOV word. This makes different systems with different vocabularies difficult to compare. It enables gaming the evaluation by limiting the system's vocabulary, and gives no incentive to expand the vocabulary.",
"cite_spans": [
{
"start": 202,
"end": 226,
"text": "Bojanowski et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
{
"text": "This SemEval task took a more objective position: no word pairs may be discarded. Every system must submit a similarity value for every word pair, and \"OOV\" is no excuse. The organizers recommended using the midpoint of the similarity scale as a default.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
{
"text": "In our previous work with ConceptNet, we eliminated one possible cause of OOV terms. A term that is outside of the selected vocabulary, perhaps because its degree in ConceptNet is too low, can still be assigned a vector. When we encounter a word with no computed vector, we look it up in ConceptNet, find its neighbors, and take the average of whatever vectors those neighboring terms have. This approximates the vector the term would have been assigned if it had participated in retrofitting. If the term has no neighbors with vectors, it remains OOV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
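A sketch of this neighbor-averaging fallback; `neighbors_of` is a stand-in for a lookup against the ConceptNet graph, not a real API:

```python
# Sketch of the first OOV fallback: average the vectors of a term's ConceptNet neighbors.
import numpy as np

def neighbor_fallback(term, vectors, neighbors_of):
    """vectors: dict term -> vector; neighbors_of(term): list of neighboring ConceptNet terms."""
    known = [vectors[n] for n in neighbors_of(term) if n in vectors]
    return np.mean(known, axis=0) if known else None  # None means the term is still OOV
```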
{
"text": "For this SemEval task, we recognized the importance of minimizing OOV terms, and implemented two additional fallback strategies for the terms that are still OOV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
{
"text": "It is unavoidable that training data in non-English languages will be harder to come by and sparser than data in English. It is also true that some words in non-English languages are borrowed directly from English, and are therefore exact cognates for English words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
{
"text": "As such, we used a simple strategy to further increase the coverage of our non-English vocabularies: if a term is not associated with a vector in matrix M 3 , we first look up the vector for the term that is spelled identically in English. If that vector is present, we use it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
{
"text": "This method is in theory vulnerable to false cognates, such as the German word Gift (meaning \"poison\"). However, false cognates tend to appear among common words, not rare ones, so they are unlikely to use this fallback strategy. Our German embeddings do contain a vector for \"Gift\", and it is similar to English \"poison\", not English \"gift\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
{
"text": "As a second fallback strategy, when a term cannot be found in its given language or in English, we look for terms in the vocabulary that have the given term as a prefix. If we find none of those, we drop a letter from the end of the unknown term, and look for that as a prefix. We continue dropping letters from the end until a result is found. When a prefix yields results, we use the mean of all the resulting vectors as the word's vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Don't Take \"OOV\" for an Answer",
"sec_num": "2.4"
},
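A sketch of this prefix fallback; a real implementation would index the vocabulary rather than scan it linearly, but the dropping-letters logic is the same:

```python
# Sketch of the prefix-matching OOV fallback (Section 2.4).
import numpy as np

def prefix_fallback(term, vectors):
    prefix = term
    while prefix:
        matches = [vec for t, vec in vectors.items() if t.startswith(prefix)]
        if matches:
            return np.mean(matches, axis=0)  # mean of all vectors whose terms share the prefix
        prefix = prefix[:-1]                 # drop a letter from the end and retry
    return None
```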
{
"text": "In this task, systems were scored by the harmonic mean of their Pearson and Spearman correlation with the test set for each language (or language pair in Subtask 2). Systems were assigned aggregate scores, averaging their top 4 languages on Subtask 1 and their top 6 pairs on Subtask 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
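For reference, the per-language score can be computed as below; scipy is an assumed dependency here, not something the paper specifies:

```python
# Harmonic mean of Pearson and Spearman correlation, as used to score each language (pair).
from scipy.stats import pearsonr, spearmanr

def language_score(system_scores, gold_scores):
    pearson = pearsonr(system_scores, gold_scores)[0]
    spearman = spearmanr(system_scores, gold_scores)[0]
    return 2 * pearson * spearman / (pearson + spearman)
```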
{
"text": "The system we submitted applied the retrofittingand-merging process described above, with Con-ceptNet 5.5.3 as the knowledge graph and two well-regarded sources of English word embeddings. The first source is the word2vec Google News embeddings 2 , and the second is the GloVe 1.2 embeddings that were trained on 840 billion tokens of the Common Crawl 3 . Because the input embeddings are only in English, the vectors in other languages depended entirely on propagating these English embeddings via the multilingual links in ConceptNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Submitted System: ConceptNet + word2vec + GloVe",
"sec_num": "3.1"
},
{
"text": "This system appears in the results as \"Luminoso-run2\". Run 1 was similar, but it was looking up neighbors in an unreleased version of the ConceptNet graph with fewer edges from DBPedia in it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Submitted System: ConceptNet + word2vec + GloVe",
"sec_num": "3.1"
},
{
"text": "This system's aggregate score on subtask 1 was 0.743. Its combined score on subtask 2 (averaged over its six best language pairs) was 0.754.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Submitted System: ConceptNet + word2vec + GloVe",
"sec_num": "3.1"
},
{
"text": "Instead of relying entirely on English knowledge propagated through ConceptNet, it seemed reasonable to also include pre-calculated word embeddings in other languages as inputs. In Variant A, we added inputs from the Polyglot embeddings (Al-Rfou et al., 2013) in German, Spanish, Italian, and Farsi as four additional inputs to the retrofitting-and-merging process.",
"cite_spans": [
{
"start": 237,
"end": 259,
"text": "(Al-Rfou et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variant A: Adding Polyglot Embeddings",
"sec_num": "3.2"
},
{
"text": "The results of this variant on the trial data were noticeably lower, and when we evaluate it on the test data in retrospect, its test results are lower as well. Its aggregate scores are .720 on subtask 1 and .736 on subtask 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variant A: Adding Polyglot Embeddings",
"sec_num": "3.2"
},
{
"text": "In Variant B, we calculated our own multilingual distributional embeddings from word cooccurrences in the OpenSubtitles2016 parallel corpus (Lison and Tiedemann, 2016) , and used this as a third input alongside word2vec and GloVe. For each pair of aligned subtitles among the five languages, we combined the language-tagged words into a single set of n words, then added 1/n to the co-occurrence frequency of each pair of words, yielding a sparse matrix of word cooccurrences within and across languages. We then used the SVD-of-PPMI process described by Levy et al. (2015) to convert these sparse cooccurrences into 300-dimensional vectors.",
"cite_spans": [
{
"start": 140,
"end": 167,
"text": "(Lison and Tiedemann, 2016)",
"ref_id": "BIBREF7"
},
{
"start": 555,
"end": 573,
"text": "Levy et al. (2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variant B: Adding Parallel Text from OpenSubtitles",
"sec_num": "3.3"
},
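A sketch of the co-occurrence counting step of Variant B; the input layout and function names are ours, and the subsequent SVD-of-PPMI step is omitted:

```python
# Count weighted co-occurrences across aligned subtitle pairs (Variant B, Section 3.3).
from collections import Counter
from itertools import combinations

def count_cooccurrences(aligned_subtitle_pairs):
    """aligned_subtitle_pairs: iterable of (words_a, words_b), lists of language-tagged
    tokens such as 'en/cat' and 'es/gato' from a pair of aligned subtitles."""
    counts = Counter()
    for words_a, words_b in aligned_subtitle_pairs:
        pooled = set(words_a) | set(words_b)      # single set of n language-tagged words
        n = len(pooled)
        if n < 2:
            continue
        for w1, w2 in combinations(sorted(pooled), 2):
            counts[(w1, w2)] += 1.0 / n           # add 1/n to each pair's co-occurrence count
    return counts
```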
{
"text": "On the trial data, this variant compared inconclusively to Run 2. We submitted Run 2 instead of Variant B because Run 2 was simpler and seemed to perform slightly better on average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variant B: Adding Parallel Text from OpenSubtitles",
"sec_num": "3.3"
},
{
"text": "However, when we run variant B on the released test data, we note that it would have scored better than the system we submitted. Its aggregate scores are .759 on subtask 1 and .767 on subtask 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variant B: Adding Parallel Text from OpenSubtitles",
"sec_num": "3.3"
},
{
"text": "The released results 4 show that our system, listed as Luminoso-Run2, got the highest aggregate score on both subtasks, and the highest score on each test set except the monolingual Farsi set. Table 1 compares the results per language of the system we submitted, the same system without our OOV-handling strategies, variants A and B, and the baseline Nasari (Camacho-Collados et al., 2016) system.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison of Results",
"sec_num": "3.4"
},
{
"text": "Variant B performed the best in the end, so we will incorporate parallel text from OpenSubtitles in the next release of the ConceptNet Numberbatch system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Results",
"sec_num": "3.4"
},
{
"text": "The idea of producing word embeddings from a combination of distributional and relational Table 1 : Evaluation scores by language. \"Score 1\" and \"Score 2\" are the combined subtask scores. \"Base\" is the Nasari baseline, \"Ours\" is Luminoso-Run2 as submitted, \"\u2212OOV\" removes our OOV strategy, and \"Var. A\" and \"Var. B\" are the variants we describe in this paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "knowldedge has been implemented by many others, including Iacobacci et al. (2015) and various implementations of retrofitting (Faruqui et al., 2015) . ConceptNet is distinguished by the large improvement in evaluation scores that occurs when it is used as the source of relational knowledge. This indicates that ConceptNet's particular blend of crowd-sourced, gamified, and expert knowledge is providing valuable information that is not learned from distributional semantics alone. The results transfer well to other languages, showing ConceptNet's usefulness as \"multilingual glue\" that can combine knowledge in multiple languages into a single representation.",
"cite_spans": [
{
"start": 58,
"end": 81,
"text": "Iacobacci et al. (2015)",
"ref_id": "BIBREF5"
},
{
"start": 126,
"end": 148,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Our submitted system relies heavily on interlanguage links in ConceptNet that represent direct translations, as well as exact cognates. We suspect that this makes it perform particularly well at directly-translated English. It would have more difficulty determining the similarity of words that lack direct translations into English that are known or accurate. This is a weak point of many current word-similarity evaluations: The words that are vague when translated, or that have languagespecific connotations, tend not to appear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "On a task with harder-to-translate words, we may have to rely more on observing the distributional semantics of corpus text in each language, as we did in the unsubmitted variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Data and code are available at http:// conceptnet.io.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/archive/p/ word2vec/ 3 http://nlp.stanford.edu/projects/ glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://alt.qcri.org/semeval2017/ task2/index.php?id=results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polyglot: Distributed word representations for multilingual NLP",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representa- tions for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natu- ral Language Learning. Association for Computa- tional Linguistics, Sofia, Bulgaria, pages 183-192. http://www.aclweb.org/anthology/W13-3520.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606 https://arxiv.org/pdf/1607.04606.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SemEval-2017 Task 2: Multilingual and cross-lingual semantic word similarity",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Taher"
],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of SemEval. Vancouver",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. SemEval- 2017 Task 2: Multilingual and cross-lingual seman- tic word similarity. In Proceedings of SemEval. Van- couver, Canada.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "Artificial Intelligence",
"volume": "240",
"issue": "",
"pages": "36--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating ex- plicit knowledge and corpus statistics for a multilin- gual representation of concepts and entities. Artifi- cial Intelligence 240:36-64.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sujay",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Jauhar",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to se- mantic lexicons. In Proceedings of NAACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SensEmbed: Learning sense embeddings for word and relational similarity",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "95--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning sense embeddings for word and relational similarity. In ACL (1). pages 95-105.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the As- sociation for Computational Linguistics 3:211-225. http://www.aclweb.org/anthology/Q15-1016.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word rep- resentations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Empiricial Methods in Natural Language Processing",
"volume": "12",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D Manning. 2014. GloVe: Global vec- tors for word representation. Proceedings of the Empiricial Methods in Natural Language Process- ing (EMNLP 2014) 12:1532-1543. http://www- nlp.stanford.edu/pubs/glove.pdf.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "ConceptNet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. San Francisco.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ConceptNet 5: A large semantic network for relational knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2013,
"venue": "The People's Web Meets NLP",
"volume": "",
"issue": "",
"pages": "161--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer and Catherine Havasi. 2013. ConceptNet 5: A large semantic network for relational knowl- edge. In The People's Web Meets NLP, Springer, pages 161-176.",
"links": null
}
},
"ref_entries": {}
}
}