{
"paper_id": "Q15-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:07:54.235325Z"
},
"title": "Improving Distributional Similarity with Lessons Learned from Word Embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ilan University Ramat-Gan",
"location": {
"country": "Israel"
}
},
"email": "[email protected]"
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ilan University Ramat-Gan",
"location": {
"country": "Israel"
}
},
"email": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ilan University Ramat-Gan",
"location": {
"country": "Israel"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent trends suggest that neuralnetwork-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.",
"pdf_parse": {
"paper_id": "Q15-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent trends suggest that neuralnetwork-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Understanding the meaning of a word is at the heart of natural language processing (NLP). While a deep, human-like, understanding remains elusive, many methods have been successful in recovering certain aspects of similarity between words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, neural-network based approaches in which words are \"embedded\" into a lowdimensional space were proposed by various authors (Bengio et al., 2003; Collobert and Weston, 2008) . These models represent each word as a ddimensional vector of real numbers, and vectors that are close to each other are shown to be semantically related. In particular, a sequence of papers by Mikolov et al. (2013a; 2013b) culminated in the skip-gram with negative-sampling training method (SGNS): an efficient embedding algorithm that provides state-of-the-art results on various linguistic tasks. It was popularized via word2vec, a program for creating word embeddings.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 155,
"end": 182,
"text": "Collobert and Weston, 2008)",
"ref_id": "BIBREF11"
},
{
"start": 378,
"end": 400,
"text": "Mikolov et al. (2013a;",
"ref_id": "BIBREF24"
},
{
"start": 401,
"end": 407,
"text": "2013b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A recent study by Baroni et al. (2014) conducts a set of systematic experiments comparing word2vec embeddings to the more traditional distributional methods, such as pointwise mutual information (PMI) matrices (see Turney and Pantel (2010) and Baroni and Lenci (2010) for comprehensive surveys). These results suggest that the new embedding methods consistently outperform the traditional methods by a non-trivial margin on many similarity-oriented tasks. However, state-of-the-art embedding methods are all based on the same bag-of-contexts representation of words. Furthermore, analysis by Levy and Goldberg (2014c) shows that word2vec's SGNS is implicitly factorizing a word-context PMI matrix. That is, the mathematical objective and the sources of information available to SGNS are in fact very similar to those employed by the more traditional methods.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "Baroni et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 226,
"end": 239,
"text": "Pantel (2010)",
"ref_id": "BIBREF33"
},
{
"start": 244,
"end": 267,
"text": "Baroni and Lenci (2010)",
"ref_id": "BIBREF3"
},
{
"start": 592,
"end": 617,
"text": "Levy and Goldberg (2014c)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "What, then, is the source of superiority (or perceived superiority) of these recent embeddings?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While the focus of the presentation in the wordembedding literature is on the mathematical model and the objective being optimized, other factors affect the results as well. In particular, embedding algorithms suggest some natural hyperparameters that can be tuned; many of which were already tuned to some extent by the algorithms' designers. Some hyperparameters, such as the number of negative samples to use, are clearly marked as tunable. Other modifications, such as smoothing the negative-sampling distribution, are reported in passing and considered thereafter as part of the algorithm. Others still, such as dynamically-sized context windows, are not even mentioned in some of the papers, but are part of the canonical implementation. All of these modifications and system design choices, which we collectively denote as hyperparameters, are part of the final algorithm, and, as we show, have a substantial impact on performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we make these hyperparameters explicit, and show how they can be adapted and transferred into the traditional count-based approach. To asses how each hyperparameter contributes to the algorithms' performance, we conduct a comprehensive set of experiments and compare four different representation methods, while controlling for the various hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Once adapted across methods, hyperparameter tuning significantly improves performance in every task. In many cases, changing the setting of a single hyperparameter yields a greater increase in performance than switching to a better algorithm or training on a larger corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, word2vec's smoothing of the negative sampling distribution can be adapted to PPMI-based methods by introducing a novel, smoothed variant of the PMI association measure (see Section 3.2). Using this variant increases performance by over 3 points per task, on average. We suspect that this smoothing partially addresses the \"Achilles' heel\" of PMI: its bias towards cooccurrences of rare words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also show that when all methods are allowed to tune a similar set of hyperparameters, their performance is largely comparable. In fact, there is no consistent advantage to one algorithmic approach over another, a result that contradicts the claim that embeddings are superior to count-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider four word representation methods: the explicit PPMI matrix, SVD factorization of said matrix, SGNS, and GloVe. For historical reasons, we refer to PPMI and SVD as \"countbased\" representations, as opposed to SGNS and GloVe, which are often referred to as \"neural\" or \"prediction-based\" embeddings. All of these methods (as well as all other \"skip-gram\"-based embedding methods) are essentially bag-of-words models, in which the representation of each word reflects a weighted bag of context-words that cooccur with it. Such bag-of-word embedding models were previously shown to perform as well as or better than more complex embedding methods on similarity and analogy tasks (Mikolov et al., 2013a; Pennington et al., 2014) .",
"cite_spans": [
{
"start": 686,
"end": 709,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF24"
},
{
"start": 710,
"end": 734,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Notation We assume a collection of words w \u2208 V W and their contexts c \u2208 V C , where V W and V C are the word and context vocabularies, and denote the collection of observed word-context pairs as D. We use #(w, c) to denote the number of times the pair (w, c) appears in D. Similarly, #(w) = c \u2208V C #(w, c ) and #(c) = w \u2208V W #(w , c) are the number of times w and c occurred in D, respectively. In some algorithms, words and contexts are embedded in a space of d dimensions. In these cases, each word w \u2208 V W is associated with a vector w \u2208 R d and similarly each context c \u2208 V C is represented as a vector c \u2208 R d . We sometimes refer to the vectors w as rows in a |V W |\u00d7d matrix W , and to the vectors c as rows in a |V C |\u00d7d matrix C. When referring to embeddings produced by a specific method x, we may use W x and C x (e.g. W SGN S or C SV D ). All vectors are normalized to unit length before they are used for similarity calculation, making cosine similarity and dot-product equivalent (see Section 3.3 for further discussion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Contexts D is commonly obtained by taking a corpus w 1 , w 2 , . . . , w n and defining the contexts of word w i as the words surrounding it in an Lsized window w i\u2212L , . . . , w i\u22121 , w i+1 , . . . , w i+L . While other definitions of contexts have been studied (Pad\u00f3 and Lapata, 2007; Baroni and Lenci, 2010; Levy and Goldberg, 2014a) this work focuses on fixed-window bag-of-words contexts.",
"cite_spans": [
{
"start": 263,
"end": 286,
"text": "(Pad\u00f3 and Lapata, 2007;",
"ref_id": "BIBREF27"
},
{
"start": 287,
"end": 310,
"text": "Baroni and Lenci, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 311,
"end": 336,
"text": "Levy and Goldberg, 2014a)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The traditional way to represent words in the distributional approach is to construct a highdimensional sparse matrix M , where each row represents a word w in the vocabulary V W and each column a potential context c \u2208 V C . The value of each matrix cell M ij represents the association between the word w i and the context c j . A popular measure of this association is pointwise mutual information (PMI) (Church and Hanks, 1990) . PMI is defined as the log ratio between w and c's joint probability and the product of their marginal probabilities, which can be estimated by:",
"cite_spans": [
{
"start": 406,
"end": 430,
"text": "(Church and Hanks, 1990)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Representations (PPMI Matrix)",
"sec_num": "2.1"
},
{
"text": "P M I(w, c) = logP (w,c) P (w)P (c) = log #(w,c)\u2022|D| #(w)\u2022#(c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Representations (PPMI Matrix)",
"sec_num": "2.1"
},
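As an illustration of the PMI estimate above, here is a minimal sketch (not the authors' code) that computes it from raw co-occurrence counts; the toy pair list `D` is an assumption for demonstration only.

```python
import math
from collections import Counter

# Toy collection of observed (word, context) pairs; in practice D is extracted
# from a corpus using an L-sized context window around each token.
D = [("cat", "purrs"), ("cat", "meows"), ("dog", "barks"), ("cat", "pet"), ("dog", "pet")]

pair_counts = Counter(D)                    # #(w, c)
word_counts = Counter(w for w, _ in D)      # #(w)
context_counts = Counter(c for _, c in D)   # #(c)
total_pairs = len(D)                        # |D|

def pmi(w, c):
    """PMI(w, c) = log( #(w,c) * |D| / (#(w) * #(c)) ); -inf for unobserved pairs."""
    if pair_counts[(w, c)] == 0:
        return float("-inf")
    return math.log(pair_counts[(w, c)] * total_pairs / (word_counts[w] * context_counts[c]))

print(pmi("cat", "purrs"))   # positive: "purrs" co-occurs only with "cat"
print(pmi("dog", "purrs"))   # -inf: the pair was never observed
```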
{
"text": "The rows of M PMI contain many entries of wordcontext pairs (w, c) that were never observed in the corpus, for which P M I(w, c) = log 0 = \u2212\u221e. A common approach is thus to replace the M PMI matrix with M PMI 0 , in which P M I(w, c) = 0 in cases where #(w, c) = 0. A more consistent approach is to use positive PMI (PPMI), in which all negative values are replaced by 0:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Representations (PPMI Matrix)",
"sec_num": "2.1"
},
{
"text": "P P M I(w, c) = max (P M I (w, c) , 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Representations (PPMI Matrix)",
"sec_num": "2.1"
},
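Clipping at zero then gives PPMI; a minimal sketch, assuming a pmi(w, c) function like the one sketched above:

```python
def ppmi(pmi_value: float) -> float:
    """PPMI = max(PMI, 0); unobserved pairs (PMI = -inf) are mapped to 0."""
    return max(pmi_value, 0.0)

# A sparse PPMI "matrix" can then be kept as a dict over observed pairs only, e.g.:
# M_ppmi = {(w, c): ppmi(pmi(w, c)) for (w, c) in pair_counts}
```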
{
"text": "Bullinaria and Levy (2007) showed that M PPMI outperforms M PMI 0 on semantic similarity tasks. A well-known shortcoming of PMI, which persists in PPMI, is its bias towards infrequent events (Turney and Pantel, 2010) . A rare context c that co-occurred with a target word w even once, will often yield relatively high PMI score becaus\u00ea P (c), which is in PMI's denominator, is very small. This creates a situation in which the top \"distributional features\" (contexts) of w are often extremely rare words, which do not necessarily appear in the respective representations of words that are semantically similar to w. Nevertheless, the PPMI measure is widely regarded as state-ofthe-art for these kinds of distributional-similarity models.",
"cite_spans": [
{
"start": 15,
"end": 26,
"text": "Levy (2007)",
"ref_id": "BIBREF7"
},
{
"start": 203,
"end": 216,
"text": "Pantel, 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Representations (PPMI Matrix)",
"sec_num": "2.1"
},
{
"text": "While sparse vector representations work well, there are also advantages to working with dense low-dimensional vectors, such as improved computational efficiency and, arguably, better generalization. Such vectors can be obtained by performing dimensionality reduction over the sparse high-dimensional matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
{
"text": "A common method of doing so is truncated Singular Value Decomposition (SVD), which finds the optimal rank d factorization with respect to L 2 loss (Eckart and Young, 1936) . It was popularized in NLP via Latent Semantic Analysis (LSA) (Deerwester et al., 1990) .",
"cite_spans": [
{
"start": 147,
"end": 171,
"text": "(Eckart and Young, 1936)",
"ref_id": "BIBREF13"
},
{
"start": 235,
"end": 260,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
{
"text": "SVD factorizes M into the product of three matrices U \u2022 \u03a3 \u2022 V , where U and V are orthonormal and \u03a3 is a diagonal matrix of eigenvalues in decreasing order. By keeping only the top d elements of \u03a3, we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
{
"text": "M d = U d \u2022 \u03a3 d \u2022 V d . The dot-products between the rows of W = U d \u2022\u03a3 d are equal to the dot-products between rows of M d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
{
"text": "In the setting of word-context matrices, the dense, d-dimensional rows of W can substitute the very high-dimensional rows of M . Indeed, a common approach in NLP literature is factorizing the PPMI matrix M PPMI with SVD, and then taking the rows of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W SVD = U d \u2022 \u03a3 d C SVD = V d",
"eq_num": "(1)"
}
],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
{
"text": "as word and context representations, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Singular Value Decomposition (SVD)",
"sec_num": "2.2"
},
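A sketch of equation (1) using SciPy's truncated sparse SVD; the random stand-in matrix and the choice of d are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Stand-in for a sparse |V_W| x |V_C| PPMI matrix.
M_ppmi = sparse_random(1000, 1200, density=0.01, random_state=0, format="csr")

d = 50
U, S, Vt = svds(M_ppmi, k=d)   # truncated SVD: M_d = U_d * Sigma_d * V_d^T

W_svd = U @ np.diag(S)         # word vectors (equation 1): W^SVD = U_d * Sigma_d
C_svd = Vt.T                   # context vectors: C^SVD = V_d
```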
{
"text": "We present a brief sketch of SGNS -the skip-gram embedding model introduced in (Mikolov et al., 2013a ) trained using the negative-sampling procedure presented in (Mikolov et al., 2013b) . A detailed derivation of SGNS is available in (Goldberg and . SGNS seeks to represent each word w \u2208 V W and each context c \u2208 V C as d-dimensional vectors w and c, such that words that are \"similar\" to each other will have similar vector representations. It does so by trying to maximize a function of the product w \u2022 c for (w, c) pairs that occur in D, and minimize it for negative examples: (w, c N ) pairs that do not necessarily occur in D. The negative examples are created by stochastically corrupting observed (w, c) pairs from D -hence the name \"negative sampling\". For each observation of (w, c), SGNS draws k contexts from the empirical unigram distribution P D (c) = #(c) |D| . In word2vec's implementation of SGNS, this distribution is smoothed, a design choice that boosts its performance. We explore this hyperparameter and others in Section 3. Levy and Golberg (2014c) show that SGNS's corpuslevel objective achieves its optimal value when:",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF24"
},
{
"start": 163,
"end": 186,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF25"
},
{
"start": 866,
"end": 870,
"text": "#(c)",
"ref_id": null
},
{
"start": 1047,
"end": 1071,
"text": "Levy and Golberg (2014c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Grams with Negative Sampling (SGNS)",
"sec_num": "2.3"
},
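To make the training signal concrete, here is a sketch of the per-pair negative-sampling loss (a standard formulation, not word2vec's actual implementation): an observed (w, c) pair is pushed towards a high dot-product while k sampled negative contexts are pushed down.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_pair_loss(w_vec, c_vec, neg_context_vecs):
    """-log sigma(w . c) - sum_i log sigma(-(w . c_neg_i)) for one observed pair."""
    loss = -np.log(sigmoid(w_vec @ c_vec))
    for c_neg in neg_context_vecs:
        loss -= np.log(sigmoid(-(w_vec @ c_neg)))
    return loss

# usage with random 100-dimensional vectors and k = 5 sampled negative contexts
rng = np.random.default_rng(0)
w, c = rng.normal(size=100), rng.normal(size=100)
negatives = rng.normal(size=(5, 100))
print(sgns_pair_loss(w, c, negatives))
```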
{
"text": "w \u2022 c = PMI(w, c) \u2212 log k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS as Implicit Matrix Factorization",
"sec_num": null
},
{
"text": "Hence, SGNS is implicitly factorizing a wordcontext matrix whose cell's values are PMI, shifted by a global constant (log k):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS as Implicit Matrix Factorization",
"sec_num": null
},
{
"text": "W \u2022 C = M PMI \u2212 log k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS as Implicit Matrix Factorization",
"sec_num": null
},
{
"text": "SGNS performs a different kind of factorization from traditional SVD (see 2.2). In particular, the factorization's loss function is not based on L 2 , and is much less sensitive to extreme and infinite values due to a sigmoid function surrounding w \u2022 c. Furthermore, the loss is weighted, causing rare (w, c) pairs to affect the objective much less than frequent ones. Thus, while many cells in M PMI equal log 0 = \u2212\u221e, the cost incurred for reconstructing these cells as a small negative value, such as \u22125 instead of as \u2212\u221e, is negligible. 1 An additional difference from SVD, which will be explored further in Section 3.3, is that SVD factorizes M into three matrices, two of them orthonormal and one diagonal, while SGNS factorizes M into two unconstrained matrices.",
"cite_spans": [
{
"start": 539,
"end": 540,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS as Implicit Matrix Factorization",
"sec_num": null
},
{
"text": "GloVe (Pennington et al., 2014) seeks to represent each word w \u2208 V W and each context c \u2208 V C as d-dimensional vectors w and c such that:",
"cite_spans": [
{
"start": 6,
"end": 31,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global Vectors (GloVe)",
"sec_num": "2.4"
},
{
"text": "w \u2022 c + b w + b c = log (#(w, c)) \u2200(w, c) \u2208 D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Vectors (GloVe)",
"sec_num": "2.4"
},
{
"text": "Here, b w and b c (scalars) are word/context-specific biases, and are also parameters to be learned in addition to w and c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Vectors (GloVe)",
"sec_num": "2.4"
},
{
"text": "GloVe's objective is explicitly defined as a factorization of the log-count matrix, shifted by the entire vocabularies' bias terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Vectors (GloVe)",
"sec_num": "2.4"
},
{
"text": "M log(#(w,c)) \u2248 W \u2022 C + b w + b c Where b w is a |V W | dimensional row vector and b c is a |V C | dimensional column vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Vectors (GloVe)",
"sec_num": "2.4"
},
{
"text": "If we were to fix b w = log #(w) and b c = log #(c), this would be almost 2 equivalent to factorizing the PMI matrix shifted by log(|D|). However, GloVe learns these parameters, giving an extra degree of freedom over SVD and SGNS. The model is fit to minimize a weighted least square loss, giving more weight to frequent (w, c) pairs. 3 Finally, an important novelty introduced in (Pennington et al., 2014) is that, assuming V C = V W , one could take the representation of a word w to be w + c w where c w is the row corresponding to w in C . This may improve results considerably in some circumstances, as we discuss in Sections 3.3 and 6.2.",
"cite_spans": [
{
"start": 335,
"end": 336,
"text": "3",
"ref_id": null
},
{
"start": 381,
"end": 406,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global Vectors (GloVe)",
"sec_num": "2.4"
},
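A sketch of GloVe's per-pair weighted least-squares term, with the weighting function f taken from Pennington et al. (2014) (x_max = 100, alpha = 0.75); the vectors, biases, and counts here are illustrative stand-ins.

```python
import numpy as np

def glove_weight(count, x_max=100.0, alpha=0.75):
    """Weighting function f(#(w,c)): grows with the count and saturates at 1."""
    return min(1.0, (count / x_max) ** alpha)

def glove_pair_loss(w_vec, c_vec, b_w, b_c, count):
    """f(#(w,c)) * (w . c + b_w + b_c - log #(w,c))^2 for one (w, c) pair in D."""
    return glove_weight(count) * (w_vec @ c_vec + b_w + b_c - np.log(count)) ** 2

rng = np.random.default_rng(0)
print(glove_pair_loss(rng.normal(size=50), rng.normal(size=50), 0.1, -0.2, count=42))
```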
{
"text": "This section presents various hyperparameters implemented in word2vec and GloVe, and shows how to adapt and apply them to count-based methods. We divide these into: pre-processing hyperparameters, which affect the algorithms' input data; association metric hyperparameters, which define how word-context interactions are calculated; and post-processing hyperparameters, which modify the resulting word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transferable Hyperparameters",
"sec_num": "3"
},
{
"text": "All the matrix-based algorithms rely on a collection D of word-context pairs (w, c) as inputs. word2vec introduces three novel variations on the way D is collected, which can be easily applied to other methods beyond SGNS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing Hyperparameters",
"sec_num": "3.1"
},
{
"text": "The traditional approaches usually use a constant-sized unweighted context window. For instance, if the window size is 5, then a word five tokens apart from the target is treated the same as an adjacent word. Following the intuition that contexts closer to the target are more important, context words can be weighted according to their distance from the focus word. Both GloVe and word2vec employ such a weighting scheme, and while less common, this approach was also explored in traditional count-based methods, e.g. (Sahlgren, 2006) .",
"cite_spans": [
{
"start": 519,
"end": 535,
"text": "(Sahlgren, 2006)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "GloVe's implementation weights contexts using the harmonic function, e.g. a context word three tokens away will be counted as 1 3 of an occurrence. On the other hand, word2vec's implementation is equivalent to weighing by the distance from the focus word divided by the window size. For example, a size-5 window will weigh its contexts by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "5 5 , 4 5 , 3 5 , 2 5 , 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "5 . The reason we call this modification dynamic context windows is because word2vec implements its weighting scheme by uniformly sampling the actual window size between 1 and L, for each token (Mikolov et al., 2013a) . The sampling method is faster than the direct method in terms of training time, since there are fewer SGD updates in SGNS and fewer non-zero matrix cells in the other methods. For our systematic experiments, we used the word2vec-style sampled version for all methods, including GloVe.",
"cite_spans": [
{
"start": 194,
"end": 217,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
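A minimal sketch of the word2vec-style sampled ("dynamic") window used here for all methods; the tokenized sentence is an illustrative assumption.

```python
import random

def extract_pairs(tokens, L=5, seed=0):
    """Extract (word, context) pairs with the window size sampled uniformly from 1..L per token.
    A context at distance d is then kept with probability (L - d + 1) / L, matching the
    L/L, (L-1)/L, ..., 1/L weighting described above."""
    rng = random.Random(seed)
    pairs = []
    for i, w in enumerate(tokens):
        win = rng.randint(1, L)   # dynamic window size for this token
        for j in range(max(0, i - win), min(len(tokens), i + win + 1)):
            if j != i:
                pairs.append((w, tokens[j]))
    return pairs

print(extract_pairs("the quick brown fox jumps over the lazy dog".split(), L=2))
```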
{
"text": "Subsampling (sub) Subsampling is a method of diluting very frequent words, akin to removing stop-words. The subsampling method presented in (Mikolov et al., 2013a) randomly removes words that are more frequent than some threshold t with a probability of p, where f marks the word's corpus frequency:",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "p = 1 \u2212 t f (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "Following the recommendation in (Mikolov et al., 2013a) , we use t = 10 \u22125 in our experiments. 4",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
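A sketch of "dirty" subsampling as discussed in the next paragraph: frequent tokens are removed from the token stream before pair extraction. The removal probability follows the standard word2vec rule p = 1 - sqrt(t/f), which is our reading of the (extraction-damaged) equation (2).

```python
import random
from collections import Counter

def dirty_subsample(tokens, t=1e-5, seed=0):
    """Remove frequent tokens before pair extraction ('dirty' subsampling), which
    effectively enlarges the context window of the surviving tokens."""
    rng = random.Random(seed)
    counts = Counter(tokens)
    total = len(tokens)
    kept = []
    for w in tokens:
        f = counts[w] / total                        # corpus frequency of w
        p_remove = max(0.0, 1.0 - (t / f) ** 0.5)    # assumed word2vec-style rule, eq. (2)
        if rng.random() >= p_remove:
            kept.append(w)
    return kept
```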
{
"text": "Another implementation detail of subsampling in word2vec is that the removal of tokens is done before the corpus is processed into wordcontext pairs. This practically enlarges the context window's size for many tokens, because they can now reach words that were not in their original L-sized windows. We call this kind of subsampling \"dirty\", as opposed to \"clean\" subsampling, which removes subsampled words without affecting the context window's size. We found their impact on performance comparable, and report results of only the \"dirty\" variant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "Deleting Rare Words (del) While it is common to ignore words that are rare in the training corpus, word2vec removes these tokens from the corpus before creating context windows. As with subsampling, this variation narrows the distance between tokens, inserting new word-context pairs that did not exist in the original corpus with the same window size. Though this variation may also have an effect on performance, preliminary experiments showed that it was small, and we therefore do not investigate its effect in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Context Window (dyn)",
"sec_num": null
},
{
"text": "The PMI (or PPMI) between a word and its context is well known to be an effective association measure in the word similarity literature. Levy and Golberg (2014c) show that SGNS is implicitly factorizing a word-context matrix whose cell's values are shifted PMI. Following their analysis, we present two variations of the PMI (and implicitly PPMI) association metric, which we adopt from SGNS. These enhancements of PMI are not directly applicable to GloVe, which, by definition, uses a different association measure.",
"cite_spans": [
{
"start": 137,
"end": 161,
"text": "Levy and Golberg (2014c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
{
"text": "Shifted PMI (neg) SGNS has a natural hyperparameter k (the number of negative samples), which affects the value that SGNS is trying to optimize for each (w, c): P M I(w, c) \u2212 log k. The shift caused by k > 1 can be applied to distributional methods through shifted PPMI (Levy and Goldberg, 2014c) :",
"cite_spans": [
{
"start": 270,
"end": 296,
"text": "(Levy and Goldberg, 2014c)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
{
"text": "SP P M I(w, c) = max (P M I (w, c) \u2212 log k, 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
{
"text": "It is important to understand that in SGNS, k has two distinct functions. First, it is used to better estimate the distribution of negative examples; a higher k means more data and better estimation. Second, it acts as a prior on the probability of observing a positive example (an actual occurrence of (w, c) in the corpus) versus a negative example; a higher k means that negative examples are more probable. Shifted PPMI captures only the second aspect of k (a prior). We experiment with three values of k: 1, 5, 15.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
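Applied entry-wise, shifted PPMI is a one-line change over PPMI; a minimal sketch, assuming a pmi(w, c) function like the one sketched in Section 2.1:

```python
import math

def sppmi(pmi_value: float, k: int = 5) -> float:
    """SPPMI(w, c) = max(PMI(w, c) - log k, 0); k = 1 recovers plain PPMI."""
    return max(pmi_value - math.log(k), 0.0)

print(sppmi(2.3, k=5))   # max(2.3 - log 5, 0) ~= 0.69
```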
{
"text": "Context Distribution Smoothing (cds) In word2vec, negative examples (contexts) are sampled according to a smoothed unigram distribution. In order to smooth the original contexts' distribution, all context counts are raised to the power of \u03b1 (Mikolov et al. (2013b) found \u03b1 = 0.75 to work well). This smoothing variation has an analog when calculating PMI directly:",
"cite_spans": [
{
"start": 241,
"end": 264,
"text": "(Mikolov et al. (2013b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
{
"text": "P M I \u03b1 (w, c) = logP (w, c) P (w)P \u03b1 (c) (3) P \u03b1 (c) = # (c) \u03b1 c # (c) \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
{
"text": "Like other smoothing techniques (Pantel and Lin, 2002; Turney and Littman, 2003) , context distribution smoothing alleviates PMI's bias towards rare words. It does so by enlarging the probability of sampling a rare context (sinceP \u03b1 (c) >P (c) when c is infrequent), which in turn reduces the PMI of (w, c) for any w co-occurring with the rare context c. In Section 6.2 we demonstrate that this novel variant of PMI is very effective, and consistently improves performance across tasks, methods, and configurations. We experiment with two values of \u03b1: 1 (unsmoothed) and 0.75 (smoothed).",
"cite_spans": [
{
"start": 32,
"end": 54,
"text": "(Pantel and Lin, 2002;",
"ref_id": "BIBREF28"
},
{
"start": 55,
"end": 80,
"text": "Turney and Littman, 2003)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Association Metric Hyperparameters",
"sec_num": "3.2"
},
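A sketch of equation (3): PMI with the context distribution smoothed by alpha = 0.75; the toy counts are illustrative.

```python
import math
from collections import Counter

def make_smoothed_pmi(pair_counts, alpha=0.75):
    """Return a PMI_alpha(w, c) function where P_alpha(c) = #(c)^alpha / sum_c' #(c')^alpha."""
    total = sum(pair_counts.values())
    word_counts, context_counts = Counter(), Counter()
    for (w, c), n in pair_counts.items():
        word_counts[w] += n
        context_counts[c] += n
    z_alpha = sum(n ** alpha for n in context_counts.values())

    def pmi_alpha(w, c):
        p_wc = pair_counts[(w, c)] / total
        p_w = word_counts[w] / total
        p_c = (context_counts[c] ** alpha) / z_alpha   # smoothed context probability
        return math.log(p_wc / (p_w * p_c))

    return pmi_alpha

pmi_a = make_smoothed_pmi(Counter({("cat", "purrs"): 2, ("cat", "pet"): 5, ("dog", "pet"): 4}))
print(pmi_a("cat", "purrs"))
```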
{
"text": "We present three hyperparameters that modify the algorithms' output: the word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "Adding Context Vectors (w+c) Pennington et al. (2014) propose using the context vectors in addition to the word vectors as GloVe's output. For example, the word \"cat\" can be represented as:",
"cite_spans": [
{
"start": 29,
"end": 53,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "v cat = w cat + c cat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "where w and c are the word and context embeddings, respectively. This vector combination was originally motivated as an ensemble method. Here, we provide a different interpretation of its effect on the cosine similarity function. Specifically, we show that adding context vectors effectively adds firstorder similarity terms to the second-order similarity function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "Consider the cosine similarity of two words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cos(x, y) = v x \u2022 v y \u221a v x \u2022 v x v y \u2022 v y = ( w x + c x ) \u2022 ( w y + c y ) ( w x + c x ) \u2022 ( w x + c x ) ( w y + c y ) \u2022 ( w y + c y ) = w x \u2022 w y + c x \u2022 c y + w x \u2022 c y + c x \u2022 w y w 2 x + 2 w x \u2022 c x + c 2 x w 2 y + 2 w y \u2022 c y + c 2 y = w x \u2022 w y + c x \u2022 c y + w x \u2022 c y + c x \u2022 w y 2 \u221a w x \u2022 c x + 1 w y \u2022 c y + 1",
"eq_num": "(4)"
}
],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "(The last step follows because, as noted in Section 2, the word and context vectors are normalized after training.) The resulting expression combines similarity terms which can be divided into two groups: second-order similarity (w x \u2022 w y , c x \u2022 c y ) and firstorder similarity (w * \u2022 c * ). The second-order terms measure the extent to which the two words are replaceable based on their tendencies to appear in similar contexts, and are the manifestation of Harris's (1954) distributional hypothesis. The firstorder terms measure the tendency of one word to appear in the context of the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "In SVD and SGNS, the first-order similarity terms between w and c converge to P M I(w, c), while in GloVe it converges into their log-count (with some bias terms).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "The similarity calculated in equation 4 is thus a symmetric combination of the first-order and second order similarities of x and y, normalized by a function of their reflective first-order similarities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "sim(x, y) = sim 2 (x, y) + sim 1 (x, y) sim 1 (x, x) + 1 sim 1 (y, y) + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "This similarity measure states that words are similar if they tend to appear in similar contexts, or if they tend to appear in the contexts of each other (and preferably both).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "The additive w+c representation can be trivially applied to other methods that produce distinct word and context vectors (e.g. SVD and SGNS). On the other hand, explicit methods such as PPMI are sparse by definition, and nullify the vast majority of first-order similarities. We therefore do not apply w+c to PPMI in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
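A sketch of the w+c post-processing for methods with separate word and context matrices (e.g. SVD or SGNS), followed by the L2 row normalization of Section 2; the random matrices are stand-ins.

```python
import numpy as np

def add_context_vectors(W, C):
    """Represent each word as w + c_w (requires aligned rows, i.e. V_C = V_W),
    then L2-normalize rows so dot products equal cosine similarities."""
    V = W + C
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return V / np.maximum(norms, 1e-12)

rng = np.random.default_rng(0)
W, C = rng.normal(size=(5, 10)), rng.normal(size=(5, 10))
V = add_context_vectors(W, C)
print(V @ V.T)   # pairwise cosine similarities between the 5 toy words
```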
{
"text": "Eigenvalue Weighting (eig) As mentioned in Section 2.2, the word and context vectors derived using SVD are typically represented by (equation 1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "W SVD = U d \u2022 \u03a3 d C SVD = V d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "However, this is not necessarily the optimal construction of W SVD for word similarity tasks. We note that in the SVD-based factorization, the resulting word and context matrices have very different properties. In particular, the context matrix C SVD is orthonormal while the word matrix W SVD is not. On the other hand, the factorization achieved by SGNS's training procedure is much more \"symmetric\", in the sense that neither W W2V nor C W2V is orthonormal, and no particular bias is given to either of the matrices in the training objective. Similar symmetry can be achieved with the following factorization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W = U d \u2022 \u03a3 d C = V d \u2022 \u03a3 d",
"eq_num": "(5)"
}
],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "Alternatively, the eigenvalue matrix can be dismissed altogether:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W = U d C = V d",
"eq_num": "(6)"
}
],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "While it is not theoretically clear why the symmetric approach is better for semantic tasks, it does work much better empirically (see Section 6.1). A similar observation was made by Caron (2001) , who suggested adding a parameter p to control the eigenvalue matrix \u03a3:",
"cite_spans": [
{
"start": 183,
"end": 195,
"text": "Caron (2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "W SVDp = U d \u2022 \u03a3 p d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "Later studies show that weighting the eigenvalue matrix \u03a3 d with the exponent p can have a significant effect on performance, and should be tuned (Bullinaria and Levy, 2012; Turney, 2012) . Adapting the notion of symmetric decomposition from SGNS, this study experiments only with symmetric variants of SVD (p = 0, p = 0.5; equations (6) and (5)) and the traditional factorization (p = 1; equation (1)).",
"cite_spans": [
{
"start": 162,
"end": 173,
"text": "Levy, 2012;",
"ref_id": "BIBREF8"
},
{
"start": 174,
"end": 187,
"text": "Turney, 2012)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
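A sketch of the eig hyperparameter: weighting the singular-value matrix by an exponent p (p = 1 is the traditional factorization of equation (1); p = 0.5 and p = 0 are the symmetric variants of equations (5) and (6)). U and S are assumed to come from a truncated SVD as sketched earlier.

```python
import numpy as np

def weighted_word_vectors(U, S, p=0.5):
    """W_SVD_p = U_d * Sigma_d^p; with p = 0 this reduces to W = U_d."""
    return U @ np.diag(S ** p)

# usage with a random orthonormal U and decreasing eigenvalues S as stand-ins
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(100, 20)))[0]
S = np.sort(rng.random(20))[::-1]
W = weighted_word_vectors(U, S, p=0.0)   # equivalent to dismissing the eigenvalue matrix
```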
{
"text": "Vector Normalization (nrm) As mentioned in Section 2, all vectors (i.e. W 's rows) are normalized to unit length (L 2 normalization), rendering the dot product operation equivalent to cosine similarity. This normalization is a hyperparameter setting in itself, and other normalizations are also applicable. The trivial case is using no normalization at all. Another setting, used by Pennington et al. (2014) , normalizes the columns of W rather than its rows. It is also possible to consider a fourth setting that combines both row and column normalizations.",
"cite_spans": [
{
"start": 383,
"end": 407,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "Note that column normalization is akin to dismissing the eigenvalues in SVD. While the hyperparameter setting eig = 0 has an important positive impact on SVD, the same cannot be said of column normalization on other methods. In preliminary experiments, we tried the four different normalization schemes described above (none, row, column, and both), and found the standard L 2 normalization of W 's rows (i.e. using the cosine similarity measure) to be consistently superior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing Hyperparameters",
"sec_num": "3.3"
},
{
"text": "We explored a large space of hyperparameters, representations, and evaluation datasets. Table 1 enumerates the hyperparameter space. We generated 72 PPMI, 432 SVD, 144 SGNS, and 24 GloVe representations; 672 overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Corpus All models were trained on English Wikipedia (August 2013 dump), pre-processed by removing non-textual elements, sentence splitting, and tokenization. The corpus contains 77.5 million sentences, spanning 1.5 billion tokens. Models were derived using windows of 2, 5, and 10 tokens to each side of the focus word (the window size parameter is denoted win). Words that appeared less than 100 times in the corpus were ignored, resulting in vocabularies of 189,533 terms for both words and contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "4.2"
},
{
"text": "We trained a 500dimensional representation with SVD, SGNS, and GloVe. SGNS was trained using a modified version of word2vec which receives a sequence of pre-extracted word-context pairs (Levy and Goldberg, 2014a) . GloVe was trained with 50 iterations using the original implementation (Pennington et al., 2014), applied to the pre-extracted wordcontext pairs.",
"cite_spans": [
{
"start": 186,
"end": 212,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Embeddings",
"sec_num": null
},
{
"text": "We evaluated each word representation on eight datasets covering similarity and analogy tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Datasets",
"sec_num": "4.3"
},
{
"text": "Word Similarity We used six datasets to evaluate word similarity: the popular WordSim353 (Finkelstein et al., 2002) partitioned into two datasets, WordSim Similarity and WordSim Relatedness (Zesch et al., 2008; Agirre et al., 2009 Analogy The two analogy datasets present questions of the form \"a is to a * as b is to b * \", where b * is hidden, and must be guessed from the entire vocabulary. MSR's analogy dataset (Mikolov et al., 2013c) contains 8000 morpho-syntactic analogy questions, such as \"good is to best as smart is to smartest\". Google's analogy dataset (Mikolov et al., 2013a) contains 19544 questions, about half of the same kind as in MSR (syntactic analogies), and another half of a more semantic nature, such as capital cities (\"Paris is to France as Tokyo is to Japan\"). After filtering questions involving outof-vocabulary words, i.e. words that appeared in English Wikipedia less than 100 times, we remain with 7118 instances in MSR and 19258 instances in Google. The analogy questions are answered using 3CosAdd (addition and subtraction): as well as 3CosMul, which is state-of-the-art in analogy recovery (Levy and Goldberg, 2014b) : Table 2 : Performance of each method across different tasks in the \"vanilla\" scenario (all hyperparameters set to default): win = 2; dyn = none; sub = none; neg = 1; cds = 1; w+c = only w; eig = 0.0. Table 4 : Performance of each method across different tasks using the best configuration for that method and task combination, assuming win = 2. \u03b5 = 0.001 is used to prevent division by zero. We abbreviate the two methods \"Add\" and \"Mul\", respectively. The evaluation metric for the analogy questions is the percentage of questions for which the argmax result was the correct answer (b * ).",
"cite_spans": [
{
"start": 89,
"end": 115,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF16"
},
{
"start": 190,
"end": 210,
"text": "(Zesch et al., 2008;",
"ref_id": "BIBREF35"
},
{
"start": 211,
"end": 230,
"text": "Agirre et al., 2009",
"ref_id": null
},
{
"start": 416,
"end": 439,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF26"
},
{
"start": 566,
"end": 589,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF24"
},
{
"start": 1127,
"end": 1153,
"text": "(Levy and Goldberg, 2014b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 1156,
"end": 1163,
"text": "Table 2",
"ref_id": null
},
{
"start": 1356,
"end": 1363,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test Datasets",
"sec_num": "4.3"
},
{
"text": "arg max b * \u2208V W \\{a * ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Datasets",
"sec_num": "4.3"
},
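A sketch of the two analogy recovery rules over L2-normalized word vectors, following the objectives above; the vocabulary mapping and matrix are assumptions, and the handling of negative cosines in 3CosMul is simplified here.

```python
import numpy as np

def solve_analogy(W, vocab, a, a_star, b, method="mul", eps=0.001):
    """Answer 'a is to a* as b is to ?' over the L2-normalized rows of W (|V| x d).
    3CosAdd: argmax cos(b*, a* - a + b); 3CosMul: argmax cos(b*, a*) * cos(b*, b) / (cos(b*, a) + eps)."""
    cos_a, cos_astar, cos_b = (W @ W[vocab[x]] for x in (a, a_star, b))
    if method == "add":
        scores = cos_astar - cos_a + cos_b
    else:
        scores = (cos_astar * cos_b) / (cos_a + eps)
    scores[[vocab[a], vocab[a_star], vocab[b]]] = -np.inf   # exclude the question words
    index_to_word = {i: w for w, i in vocab.items()}
    return index_to_word[int(np.argmax(scores))]
```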
{
"text": "We begin by comparing the effect of various hyperparameter configurations, and observe that different settings have a substantial impact on performance (Section 5.1); at times, this improvement is greater than that of switching to a different representation method. We then show that, in some tasks, careful hyperparameter tuning can also outweigh the importance of adding more data (5.2). Finally, we observe that our results do not agree with a few recent claims in the word embedding literature, and suggest that these discrepancies stem from hyperparameter settings that were not controlled for in previous experiments (5.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We first examine a \"vanilla\" scenario (Table 2) , in which all hyperparameters are \"turned off\" (set to default values): small context windows (win = 2), no dynamic contexts (dyn = none), no subsampling (sub = none), one negative sample (neg = 1), no smoothing (cds = 1), no context vectors (w+c = only w), and default eigenvalue weights (eig = 0.0). 5 Overall, SVD outperforms other methods on most word similarity tasks, often having a considerable advantage over the secondbest. In contrast, analogy tasks present mixed results; SGNS yields the best result in MSR's analogies, while PPMI dominates Google's dataset. The second scenario (Table 3 ) sets the hyperparameters to word2vec's default values: small context windows (win = 2), 6 dynamic contexts (dyn = with), dirty subsampling (sub = dirty), five negative samples (neg = 5), context distribution smoothing (cds = 0.75), no context vectors (w+c = only w), and default eigenvalue weights (eig = 0.0). The results in this scenario are quite different than those of the vanilla scenario, with better performance in many cases. However, this change is not uniform, as we observe that different settings boost different algorithms. In fact, the question \"Which method is best?\" might have a completely different answer when comparing on the same task but with different hyperparameter values. Looking at Table 2 and Table 3 , for example, SVD is the best algorithm for SimLex-999 in the vanilla scenario, whereas in the word2vec scenario, it does not perform as well as SGNS. The third scenario (Table 4 ) enables the full range of hyperparameters given small context windows (win = 2); we evaluate each method on each task given every hyperparameter configuration, and choose the best performance. We see a considerable performance increase across all methods when comparing to both the vanilla (Table 2) and word2vec scenarios (Table 3) : the best combination of hyperparameters improves up to 15.7 points beyond the vanilla setting, and over 6 points on average. It appears that selecting the right hyperparameter settings often has more impact than choosing the most suitable algorithm. Table 4 result from an \"oracle\" experiment, in which the hyperparameters are tuned on the test data, providing an upper bound on the potential performance improvement of hyperparameter tuning. Are such gains achievable in practice? Table 5 describes a realistic scenario, where the hyperparameters are tuned on a training set, which is separate from the unseen test data. We also report results for different window sizes (win = 2, 5, 10). We use 2-fold cross validation, in which, for each task, the hyperparameters are tuned on each half of the data and evaluated on the other half. The numbers reported in Table 5 are the averages of the two runs for each data-point.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 47,
"text": "(Table 2)",
"ref_id": null
},
{
"start": 639,
"end": 647,
"text": "(Table 3",
"ref_id": "TABREF3"
},
{
"start": 1360,
"end": 1379,
"text": "Table 2 and Table 3",
"ref_id": "TABREF3"
},
{
"start": 1551,
"end": 1559,
"text": "(Table 4",
"ref_id": null
},
{
"start": 1885,
"end": 1894,
"text": "(Table 3)",
"ref_id": "TABREF3"
},
{
"start": 2147,
"end": 2154,
"text": "Table 4",
"ref_id": null
},
{
"start": 2379,
"end": 2386,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 2756,
"end": 2763,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Hyperparameters vs Algorithms",
"sec_num": "5.1"
},
{
"text": "The results indicate that approaching the oracle's improvements are indeed feasible. When comparing the performance of the trained configuration (Table 5) to that of the optimal one (Table 4), their average difference is about 1%, with larger datasets usually finding the optimal configuration. It is therefore both practical and beneficial to properly tune hyperparameters for word similarity and analogy detection tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 154,
"text": "(Table 5)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Main Result The numbers in",
"sec_num": null
},
{
"text": "An interesting observation, which immediately appears when looking at Table 5 , is that there is no single method that consistently performs better than the rest. This behavior is visible across all window sizes, and is discussed in further detail in Section 5.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Main Result The numbers in",
"sec_num": null
},
{
"text": "An important factor in evaluating distributional methods is the size of corpus and vocabulary, where larger corpora tend to yield better representations. However, training word vectors from larger corpora is more costly in computation time, which could be spent in tuning hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters vs Big Data",
"sec_num": "5.2"
},
{
"text": "To compare the effect of bigger data versus more flexible hyperparameter settings, we created a large corpus with over 10.5 billion words (7 times larger than our original corpus). This corpus was built from an 8.5 billion word corpus sug-gested by Mikolov for training word2vec, 7 to which we added UKWaC (Ferraresi et al., 2008) . As with the original setup, our vocabulary contained every word that appeared at least 100 times in the corpus, amounting to about 620,000 words. Finally, we fixed the context windows to be broad and dynamic (win = 10, dyn = with), and explored 16 hyperparameter settings comprising of: subsampling (sub), shifted PMI (neg = 1, 5), context distribution smoothing (cds), and adding context vectors (w+c). This space is somewhat more restricted than the original hyperparameter space.",
"cite_spans": [
{
"start": 306,
"end": 330,
"text": "(Ferraresi et al., 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters vs Big Data",
"sec_num": "5.2"
},
{
"text": "In terms of computation, SGNS scales nicely, requiring about half a day of computation per setup. GloVe, on the other hand, took several days to run a single 50-iteration instance for this corpus. Applying the traditional count-based methods to this setting proved technically challenging, as they consumed too much memory to be efficiently manipulated. We thus present results for only SGNS and GloVe (Table 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 402,
"end": 411,
"text": "(Table 5)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Hyperparameters vs Big Data",
"sec_num": "5.2"
},
{
"text": "Remarkably, there are some cases (3/6 word similarity tasks) in which tuning a larger space of hyperparameters is indeed more beneficial than expanding the corpus. In other cases, however, more data does seem to pay off, as evident with both analogy tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters vs Big Data",
"sec_num": "5.2"
},
{
"text": "Prior art raises several claims regarding the superiority of certain methods over the others. However, these studies did not control for the hyperparameters presented in this work. We thus revisit these claims, and examine their validity based on the results in Table 5. 8 Are embeddings superior to count-based distributional methods? It is commonly believed that modern prediction-based embeddings perform better than traditional count-based methods. This claim was recently supported by a series of systematic evaluations by Baroni et al. (2014) . However, our results suggest a different trend. Table 5 shows that in word similarity tasks, the average score of SGNS is actually lower than SVD's when win = 2, 5, and it never outperforms SVD by more than 1.7 points in those cases. In Google's analogies SGNS and GloVe indeed perform better than PPMI, but only by a margin of 3.7 points (compare PPMI with win = 2 and SGNS with win = 5). MSR's analogy dataset is the only case where SGNS and GloVe substantially outperform PPMI and SVD. 9 Overall, there does not seem to be a consistent significant advantage to one approach over the other, thus refuting the claim that prediction-based methods are superior to countbased approaches.",
"cite_spans": [
{
"start": 271,
"end": 272,
"text": "8",
"ref_id": null
},
{
"start": 528,
"end": 548,
"text": "Baroni et al. (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 262,
"end": 270,
"text": "Table 5.",
"ref_id": "TABREF5"
},
{
"start": 599,
"end": 606,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
{
"text": "The contradictory results in (Baroni et al., 2014) stem from creating word2vec embeddings with somewhat pre-tuned hyperparameters (recommended by word2vec), and comparing them to \"vanilla\" PPMI and SVD representations. In particular, shifted PMI (negative sampling) and context distribution smoothing (cds = 0.75, equation 3in Section 3.2) were turned on for SGNS, but not for PPMI and SVD. An additional difference is Baroni et al.'s setting of eig=1, which significantly deteriorates SVD's performance (see Section 6.1).",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "(Baroni et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
{
"text": "Is GloVe superior to SGNS? Pennington et al. (2014) show a variety of experiments in which GloVe outperforms SGNS (among other methods). However, our results show the complete opposite. In fact, SGNS outperforms GloVe in every task (Table 5 ). Only when restricted to 3CosAdd, a suboptimal configuration, does GloVe show a 0.8 point advantage over SGNS. This trend persists when scaling up to a larger corpus and vocabulary.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "(Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
{
"text": "This contradiction can be explained by three major differences in the experimental setup. First, in our experiments, hyperparameters were allowed to vary; in particular, w+c was applied to all the methods, including SGNS. Secondly, Pennington et al. (2014) only evaluated on Google's analogies, but not on MSR's. Finally, in our work, all methods are compared using the same underlying corpus.",
"cite_spans": [
{
"start": 232,
"end": 256,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
{
"text": "It is also important to bear in mind that, by definition, GloVe cannot use two hyperparameters: shifted PMI (neg) and context distribution smoothing (cds). Instead, GloVe learns a set of bias parameters that subsumes these two modifications and many other potential changes to the PMI metric. Albeit its greater flexibility, GloVe does not fair better than SGNS in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
{
"text": "Is PPMI on-par with SGNS on analogy tasks? Levy and Goldberg (2014b) show that PPMI and SGNS perform similarly on both Google's and MSR's analogy tasks. Nevertheless, the results in Table 5 show a clear advantage to SGNS. While the gap on Google's analogies is not very large (PPMI lags behind SGNS by only 3.7 points), SGNS consistently outperforms PPMI by a large margin on the MSR dataset. MSR's analogy dataset captures syntactic relations, such as singular-plural inflections for nouns and tense modifications for verbs. We conjecture that capturing these syntactic relations may rely on certain types of contexts, such as determiners and function words, which SGNS might be better at capturing -perhaps due to the way it assigns weights to different examples, or because it also captures negative correlations which are filtered by PPMI.",
"cite_spans": [
{
"start": 43,
"end": 68,
"text": "Levy and Goldberg (2014b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
{
"text": "A deeper look into Levy and Goldberg's (2014b) experiments reveals the use of PPMI with positional contexts (i.e. each context is a conjunction of a word and its relative position to the target word), whereas SGNS was employed with regular bag-of-words contexts. Positional contexts might contain relevant information for recovering syntactic analogies, explaining PPMI's relatively high score on MSR's analogy task in (Levy and Goldberg, 2014b) .",
"cite_spans": [
{
"start": 19,
"end": 46,
"text": "Levy and Goldberg's (2014b)",
"ref_id": "BIBREF20"
},
{
"start": 419,
"end": 445,
"text": "(Levy and Goldberg, 2014b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
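{
"text": "For illustration, a small sketch (hypothetical, not the authors' preprocessing code) of the difference between bag-of-words and positional context extraction:\n\ndef extract_contexts(tokens, win=2, positional=False):\n    # Yield (word, context) pairs from a token list; positional contexts\n    # append the relative position to the context word (e.g. dog_-1).\n    for i, w in enumerate(tokens):\n        for j in range(max(0, i - win), min(len(tokens), i + win + 1)):\n            if j != i:\n                c = tokens[j] + ('_%d' % (j - i) if positional else '')\n                yield w, c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},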
{
"text": "Does 3CosMul recover more analogies than 3CosAdd? Levy and Goldberg (2014b) show that using similarity multiplication (3CosMul) rather than addition (3CosAdd) improves results on all methods and on every task. This claim is consistent with our findings; indeed, 3CosMul dominates 3CosAdd in every case. The improvement is particularly noticeable for SVD and PPMI, which considerably underperform other methods when using 3CosAdd.",
"cite_spans": [
{
"start": 50,
"end": 75,
"text": "Levy and Goldberg (2014b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},
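{
"text": "To make the distinction concrete, the sketch below follows the 3CosAdd and 3CosMul definitions of Levy and Goldberg (2014b); it assumes L2-normalized word vectors stored as rows of a numpy matrix W aligned with a vocabulary list, and it is an illustration rather than the authors' released code:\n\nimport numpy as np\n\ndef analogy(W, vocab, a, a_star, b, method='mul', eps=0.001):\n    # Answer a : a_star :: b : ? over the rows of W (assumed L2-normalized).\n    idx = {w: i for i, w in enumerate(vocab)}\n    sim = lambda w: W @ W[idx[w]]                 # cosine of every word with w\n    if method == 'add':                           # 3CosAdd\n        scores = sim(a_star) - sim(a) + sim(b)\n    else:                                         # 3CosMul\n        pos = lambda s: (s + 1.0) / 2.0           # shift cosines to [0, 1]\n        scores = pos(sim(a_star)) * pos(sim(b)) / (pos(sim(a)) + eps)\n    for w in (a, a_star, b):                      # exclude the question words\n        scores[idx[w]] = -np.inf\n    return vocab[int(np.argmax(scores))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-evaluating Prior Claims",
"sec_num": "5.3"
},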
{
"text": "Another algorithm featured in word2vec is CBOW. Unlike the other methods, CBOW cannot be easily expressed as a factorization of a wordcontext matrix; it ties together the tokens of each context window by representing the context vector as the sum of its words' vectors. It is thus more expressive than the other methods, and has a potential of deriving better word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with CBOW",
"sec_num": "5.4"
},
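{
"text": "As a rough sketch of this difference (illustrative only; word2vec's actual CBOW implementation differs in details such as averaging the context vectors and the training objective), the whole window is collapsed into a single context vector before scoring:\n\nimport numpy as np\n\ndef cbow_score(target_vec, context_vecs):\n    # CBOW ties the window together: the context is represented by the sum\n    # of its words' vectors and scored against the target word's vector.\n    ctx = np.sum(context_vecs, axis=0)\n    return 1.0 / (1.0 + np.exp(-target_vec @ ctx))   # sigmoid(w . c_window)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with CBOW",
"sec_num": "5.4"
},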
{
"text": "While Mikolov et al. (2013b) found SGNS to outperform CBOW, Baroni et al. (2014) pared CBOW to the other methods when setting all the hyperparameters to the defaults provided by word2vec (Table 3) . With the exception of MSR's analogy task, CBOW is not the bestperforming method of any other task in this scenario. Other scenarios showed similar trends in our preliminary experiments. While CBOW can potentially derive better representations by combining the tokens in each context window, this potential is not realized in practice. Nevertheless, Melamud et al. (2014) show that capturing joint contexts can indeed improve performance on word similarity tasks, and we believe it is a direction worth pursuing.",
"cite_spans": [
{
"start": 6,
"end": 28,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF25"
},
{
"start": 60,
"end": 80,
"text": "Baroni et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 548,
"end": 569,
"text": "Melamud et al. (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 187,
"end": 196,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison with CBOW",
"sec_num": "5.4"
},
{
"text": "We analyze the individual impact of each hyperparameter, and try to characterize the conditions in which a certain setting is beneficial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Analysis",
"sec_num": "6"
},
{
"text": "Certain hyperparameter settings might cripple the performance of a certain method. We observe two scenarios in which SVD performs poorly. SVD does not benefit from shifted PPMI. Setting neg > 1 consistently deteriorates SVD's performance. Levy and Goldberg (2014c) made a similar observation, and hypothesized that this is a result of the increasing number of zero-cells, which may cause SVD to prefer a factorization that is very close to the zero matrix. SVD's L 2 objective is unweighted, and it does not distinguish between observed and unobserved matrix cells.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harmful Configurations",
"sec_num": "6.1"
},
{
"text": "Using SVD \"correctly\" is bad. The traditional way of representing words with SVD uses the eigenvalue matrix (eig = 1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harmful Configurations",
"sec_num": "6.1"
},
{
"text": "W = U d \u2022 \u03a3 d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harmful Configurations",
"sec_num": "6.1"
},
{
"text": "Despite being theoretically well-motivated, this setting leads to very poor results in practice, when compared to other settings (eig = 0.5 or 0). Table 6 demonstrates this gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harmful Configurations",
"sec_num": "6.1"
},
{
"text": "The drop in average accuracy when setting eig = 1 is astounding. The performance gap persists under different hyperparameter settings as well, and drops in performance of over 15 points (absolute) when using eig = 1 instead of eig = 0.5 or 0 are not uncommon. This setting is one of the main reasons for SVD's inferior results in the study by Baroni et al. (2014) , and also the reason we chose to use eig = 0.5 as the default setting for SVD in the vanilla scenario.",
"cite_spans": [
{
"start": 343,
"end": 363,
"text": "Baroni et al. (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Harmful Configurations",
"sec_num": "6.1"
},
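{
"text": "The eig variants compared here amount to different weightings of the singular values. A minimal sketch, assuming a precomputed sparse (shifted) PPMI matrix M and using scipy's truncated SVD rather than the authors' exact pipeline:\n\nimport numpy as np\nfrom scipy.sparse.linalg import svds\n\ndef svd_embeddings(M, d=500, eig=0.5):\n    # Factorize M ~ U * diag(s) * Vt and weight the singular values by eig.\n    U, s, _ = svds(M, k=d)\n    if eig == 1.0:\n        return U * s              # W = U_d * Sigma_d      (traditional, harmful)\n    elif eig == 0.5:\n        return U * np.sqrt(s)     # W = U_d * Sigma_d^0.5  (symmetric variant)\n    return U                      # eig = 0:  W = U_d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harmful Configurations",
"sec_num": "6.1"
},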
{
"text": "To identify which hyperparameter settings are beneficial, we looked at the best configuration of each method on each task. We then counted the number of times each hyperparameter setting was chosen in these configurations ( Table 7) . Some trends emerge, such as PPMI and SVD's preference towards shorter context windows 10 (win = 2), and that SGNS always prefers numerous negative samples (neg > 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Beneficial Configurations",
"sec_num": "6.2"
},
{
"text": "To get a closer look and isolate the effect of each hyperparameter, we controlled for said hyperparameter, and compared the best configurations given each of the hyperparameter's settings. Table 8 shows the difference between default and non-default settings of each hyperparameter.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Beneficial Configurations",
"sec_num": "6.2"
},
{
"text": "While many hyperparameter settings can improve performance, they may also degrade it when chosen incorrectly. For instance, in the case of shifted PMI (neg), SGNS consistently profits from neg > 1, while SVD's performance is dramatically reduced. For PPMI, the utility of applying neg > 1 depends on the type of task: word similarity or analogy. Another example is dynamic context windows (dyn), which is beneficial for MSR's analogy task, but largely detrimental to other tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beneficial Configurations",
"sec_num": "6.2"
},
{
"text": "It appears that the only hyperparameter that can be \"blindly\" applied in any situation is context distribution smoothing (cds = 0.75), yielding a consistent improvement at an insignificant risk. Note that cds helps PPMI more than it does other methods; we suggest that this is because it reduces the relative impact of rare words on the distributional representation, thus addressing PMI's \"Achilles' heel\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beneficial Configurations",
"sec_num": "6.2"
},
{
"text": "It is generally advisable to tune all hyperparameters, as well as algorithm-specific hyperparameters, for the task at hand. However, this may be computationally expensive. We thus provide some \"rules of thumb\", which we found to work well in our setting:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},
{
"text": "\u2022 Always use context distribution smoothing (cds = 0.75) to modify PMI, as described in Section 3.2. It consistently improves performance, and is applicable to PPMI, SVD, and SGNS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},
{
"text": "\u2022 Do not use SVD \"correctly\" (eig = 1). Instead, use one of the symmetric variants (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},
{
"text": "\u2022 SGNS is a robust baseline. While it might not be the best method for every task, it does not significantly underperform in any scenario. Moreover, SGNS is the fastest method to train, and cheapest (by far) in terms of disk space and memory consumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},
{
"text": "\u2022 With SGNS, prefer many negative samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},
{
"text": "\u2022 for both SGNS and GloVe, it is worthwhile to experiment with the w + c variant, which is cheap to apply (does not require retraining) and can result in substantial gains (as well as substantial losses).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},
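{
"text": "As an illustration of how cheap the w + c variant is to apply, a sketch assuming the trained word and context matrices W and C are available and row-aligned (not the authors' code):\n\nimport numpy as np\n\ndef add_context_vectors(W, C, normalize=True):\n    # w+c variant: represent each word by the sum of its word and context vectors.\n    V = W + C\n    if normalize:                                      # cosine-friendly rows\n        V = V / np.linalg.norm(V, axis=1, keepdims=True)\n    return V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Recommendations",
"sec_num": "7"
},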
{
"text": "Recent embedding methods introduce a plethora of design choices beyond network architecture and optimization algorithms. We reveal that these seemingly minor variations can have a large impact on the success of word representation methods. By showing how to adapt and tune these hyperparameters in traditional methods, we allow a proper comparison between representations, and challenge various claims of superiority from the word embedding literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "This study also exposes the need for more controlled-variable experiments, and extending the concept of \"variable\" from the obvious task, data, and method to the often ignored preprocessing steps and hyperparameter settings. We also stress the need for transparent and reproducible experiments, and commend authors such as Mikolov, Pennington, and others for making their code publicly available. In this spirit, we make our code available as well. 11 (e) Performance difference between best models with w+c = w + c and w+c = only w. Table 8 : The added value versus the risk of setting each hyperparameter. The figures show the differences in performance between the best achievable configurations when restricting a hyperparameter to different values. This difference indicates the potential gain of tuning a given hyperparameter, as well as the risks of decreased performance when not tuning it. For example, an entry of +9.2% in Table (d) means that the best model with cds = 0.75 is 9.2% more accurate (absolute) than the best model with cds = 1; i.e. on MSR's analogies, using cds = 0.75 instead of cds = 1 improved PPMI's accuracy from .443 to .535.",
"cite_spans": [],
"ref_spans": [
{
"start": 534,
"end": 541,
"text": "Table 8",
"ref_id": null
},
{
"start": 933,
"end": 942,
"text": "Table (d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "The logistic (sigmoidal) objective also curbs very high positive values of PMI. We suspect that this property, along with the weighted factorization property, addresses the aforementioned shortcoming of PMI, i.e. its overweighting of infrequent events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "GloVe's objective ignores (w, c) pairs that do not cooccur in the training corpus, treating them as missing values. SGNS, on the other hand, does take such pairs into account through the negative sampling procedure.3 The weighting formula is another hyper-parameter that could be tuned, but we keep to the default weighting scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "word2vec's code implements a slightly different formula: p = f \u2212t f \u2212 t f . We followed the formula presented",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While it is more common to set eig = 1, this setting degrades SVD's performance considerably (see Section 6.1).6 While word2vec's default window size is 5, we present a single window size (win = 2) in Tables 2-4, in order to isolate win's effect from the effects of other hyperparameters. Running the same experiments with different window sizes reveals similar trends. Additional results with broader window sizes are shown inTable 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://word2vec.googlecode.com/svn/ trunk/demo-train-big-model-v1.sh8 We note that all conclusions drawn in this section rely on the specific data and settings with which we experiment. It is indeed feasible that experiments on different tasks, data, and hyperparameters may yield other conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Unlike PPMI, SVD underperforms in both analogy tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This might also relate to PMI's bias towards infrequent events (see Section 2.1). Broader windows create more random co-occurrences with rare words, \"polluting\" the distributional vector with random words that have high PMI scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://bitbucket.org/omerlevy/ hyperwords",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Google Research Award Program and the German Research Foundation via the German-Israeli Project Cooperation (grant DA 1600/1-1). We thank Marco Baroni and Jeffrey Pennington for their valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "win dyn sub neg cds w+c 2 : 5 : 10 none : with none : dirty 1 : 5 : 15 1.00 : 0.75 only w : w + c PPMI 7 : 1 : 0 4 : 4 4 : 4 2 : 6 : 0 1 : 7 -SVD 7 : 1 : 0 4 : 4 1 : 7 8 : 0 : 0 2 : 6 7 : 1 SGNS 2 : 3 : 3 6 : 2 4 : 4 0 : 4 : 4 3 : 5 4 : 4 GloVe 1 : 3 : 4 6 : 2 7 : 1 --4 : 4 Table 7 : The impact of each hyperparameter, measured by the number of tasks in which the best configuration had that hyperparameter setting. Non-applicable combinations are marked by \"-\". ",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Google MSR Similarity Relatedness MEN M. Turk Rare Words SimLex Mul Mul PPMI +0.6% +1.9% +1.3% +1.0% -3.8% -3.9% -5.0% -12.2% SVD +0.7% +0.2% +0.6% +0.7% +0",
"authors": [
{
"first": "",
"middle": [],
"last": "Dyn = None. Method Wordsim Wordsim Bruni",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(a) Performance difference between best models with dyn = with and dyn = none. Method WordSim WordSim Bruni et al. Radinsky et al. Luong et al. Hill et al. Google MSR Similarity Relatedness MEN M. Turk Rare Words SimLex Mul Mul PPMI +0.6% +1.9% +1.3% +1.0% -3.8% -3.9% -5.0% -12.2% SVD +0.7% +0.2% +0.6% +0.7% +0.8% -0.3% +4.0% +2.4% SGNS +1.5% +2.2% +1.5% +0.1% -0.4% -0.1% -4.4% -5.4% GloVe +0.2% -1.3% -1.0% -0.2% -3.4% -0.9% -3.0% -3.6%",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Performance difference between best models with sub = dirty and sub = none. References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa",
"authors": [],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Performance difference between best models with sub = dirty and sub = none. References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and wordnet-based approaches. In Proceed- ings of Human Language Technologies: The 2009",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 19-27, Boulder, Colorado, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distributional memory: A general framework for corpus-based semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "4",
"pages": "673--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2010. Dis- tributional memory: A general framework for corpus-based semantics. Computational Linguis- tics, 36(4):673-721.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dont count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Dont count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search, 3:1137-1155.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributional semantics in technicolor",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Nam Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 136-145, Jeju Island, Korea, July. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extracting semantic representations from word co-occurrence statistics: a computational study",
"authors": [
{
"first": "A",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Joseph P",
"middle": [],
"last": "Bullinaria",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2007,
"venue": "Behavior Research Methods",
"volume": "39",
"issue": "3",
"pages": "510--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: a computational study. Behavior Research Methods, 39(3):510-526.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and SVD",
"authors": [
{
"first": "A",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Joseph P",
"middle": [],
"last": "Bullinaria",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2012,
"venue": "Behavior Research Methods",
"volume": "44",
"issue": "3",
"pages": "890--907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A Bullinaria and Joseph P Levy. 2012. Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and SVD. Behavior Research Methods, 44(3):890-907.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Experiments with LSA scoring: optimal rank and basis",
"authors": [
{
"first": "John",
"middle": [],
"last": "Caron",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the SIAM Computational Information Retrieval Workshop",
"volume": "",
"issue": "",
"pages": "157--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Caron. 2001. Experiments with LSA scor- ing: optimal rank and basis. In Proceedings of the SIAM Computational Information Retrieval Work- shop, pages 157-169.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th International Conference on Machine Learning, pages 160-167.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "C",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "JASIS",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott C. Deerwester, Susan T. Dumais, Thomas K. Lan- dauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. JASIS, 41(6):391-407.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The approximation of one matrix by another of lower rank",
"authors": [
{
"first": "C",
"middle": [],
"last": "Eckart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 1936,
"venue": "Psychometrika",
"volume": "1",
"issue": "",
"pages": "211--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Eckart and G Young. 1936. The approximation of one matrix by another of lower rank. Psychome- trika, 1:211-218.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.3456"
]
},
"num": null,
"urls": [],
"raw_text": "Roi Reichart Felix Hill and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv preprint arXiv:1408.3456.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introducing and evaluating ukwac, a very large web-derived corpus of English",
"authors": [
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 4th Web as Corpus Workshop (WAC-4)",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukwac, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4), pages 47-54.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "word2vec explained: deriving Mikolov et al.'s negativesampling word-embedding method",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1402.3722"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving Mikolov et al.'s negative- sampling word-embedding method. arXiv preprint arXiv:1402.3722.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146-162.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, Maryland.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Linguistic regularities in sparse and explicit word representa- tions. In Proceedings of the Eighteenth Confer- ence on Computational Natural Language Learning, pages 171-180, Baltimore, Maryland.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural word embeddings as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014c. Neural word embeddings as implicit matrix factorization. In Ad- vances in Neural Information Processing Systems 27: Annual Conference on Neural Information Pro- cessing Systems 2014, December 8-13 2014, Mon- treal, Quebec, Canada, pages 2177-2185.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D. Manning. 2013. Better word representa- tions with recursive neural networks for morphol- ogy. In Proceedings of the Seventeenth Confer- ence on Computational Natural Language Learning, pages 104-113, Sofia, Bulgaria, August. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Probabilistic modeling of joint-context in distributional similarity",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "181--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Ido Dagan, Jacob Goldberger, Idan Szpektor, and Deniz Yuret. 2014. Probabilistic modeling of joint-context in distributional similar- ity. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 181-190, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Represen- tations (ICLR).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed rep- resentations of words and phrases and their compo- sitionality. In Advances in Neural Information Pro- cessing Systems, pages 3111-3119.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Dependency-based construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Discovering word senses from text",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "613--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the eighth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 613-619. ACM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar, October. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A word at a time: Computing word relatedness using temporal semantic analysis",
"authors": [
{
"first": "Kira",
"middle": [],
"last": "Radinsky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "337--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337-346. ACM.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The Word-Space Model",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Sahlgren. 2006. The Word-Space Model. Ph.D. thesis, Stockholm University.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Measuring praise and criticism: Inference of semantic orientation from association",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Littman",
"suffix": ""
}
],
"year": 2003,
"venue": "Transactions on Information Systems",
"volume": "21",
"issue": "4",
"pages": "315--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Michael L. Littman. 2003. Mea- suring praise and criticism: Inference of semantic orientation from association. Transactions on Infor- mation Systems, 21(4):315-346.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "1",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of se- mantics. Journal of Artificial Intelligence Research, 37(1):141-188.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Domain and function: A dualspace model of semantic relations and compositions",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Artificial Intelligence Research",
"volume": "44",
"issue": "",
"pages": "533--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2012. Domain and function: A dual- space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533- 585.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Using wiktionary for computing semantic relatedness",
"authors": [
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 23rd National Conference on Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "861--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Torsten Zesch, Christof M\u00fcller, and Iryna Gurevych. 2008. Using wiktionary for computing semantic relatedness. In Proceedings of the 23rd National Conference on Artificial Intelligence -Volume 2, AAAI'08, pages 861-866. AAAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "in the original paper (equation 2).",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "); Bruni et al.'s (2012) MEN dataset; Radinsky et al.'s (2011) Mechanical Turk dataset; Luong et al.'s (2013) Rare Words dataset; and Hill et al.'s (2014) SimLex-999 dataset.All these datasets contain word pairs together with human-assigned similarity scores. The word vectors are evaluated by ranking the pairs according to their cosine similarities, and measuring the correlation (Spearman's \u03c1) with the human ratings.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "\u2208V W \\{a * ,b,a} cos(b * , a * \u2212 a + b) = arg max b * \u2208V W \\{a * ,b,a} (cos(b * , a * ) \u2212 cos(b * , a) + cos(b * , b))",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "The space of hyperparameters explored in this work.",
"type_str": "table",
"content": "<table><tr><td>Hyper-</td><td>Explored</td><td>Applicable</td></tr><tr><td>parameter</td><td>Values</td><td>Methods</td></tr><tr><td>win</td><td>2, 5, 10</td><td>All</td></tr><tr><td>dyn</td><td>none, with</td><td>All</td></tr><tr><td>sub del neg</td><td>none, dirty, clean \u2020 none, with \u2020 1, 5, 15</td><td>All All PPMI, SVD, SGNS</td></tr><tr><td>cds</td><td>1, 0.75</td><td>PPMI, SVD, SGNS</td></tr><tr><td>w+c</td><td>only w, w + c</td><td>SVD, SGNS, GloVe</td></tr><tr><td>eig</td><td>0, 0.5, 1</td><td>SVD</td></tr><tr><td>nrm</td><td>none \u2020 , row, col \u2020 , both \u2020</td><td>All</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Performance of each method across different tasks using word2vec's recommended configuration: win = 2; dyn = with; sub = dirty; neg = 5; cds = 0.75; w+c = only w; eig = 0.0. CBOW is presented for comparison.",
"type_str": "table",
"content": "<table><tr><td>Method</td><td colspan=\"2\">WordSim Similarity Relatedness WordSim</td><td colspan=\"5\">Bruni et al. Radinsky et al. Luong et al. Hill et al. MEN M. Turk Rare Words SimLex Add / Mul Add / Mul Google MSR</td></tr><tr><td>PPMI SVD SGNS GloVe</td><td>.755 .793 .793 .725</td><td>.697 .691 .685 .604</td><td>.745 .778 .774 .729</td><td>.686 .666 .693 .632</td><td>.462 .514 .470 .403</td><td>.393 .432 .438 .398</td><td>.553 / .679 .306 / .535 .554 / .591 .408 / .468 .676 / .688 .618 / .645 .569 / .596 .533 / .580</td></tr></table>"
},
"TABREF5": {
"html": null,
"num": null,
"text": "Performance of each method across different tasks using 2-fold cross-validation for hyperparameter tuning. Configurations on large-scale (LS) corpora are also presented for comparison.",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "The average performance of SVD on word similarity tasks given different values of eig, in the vanilla scenario.",
"type_str": "table",
"content": "<table><tr><td>reports</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"text": "WordSim WordSimBruni et al.Radinsky et al. Luong et al. Hill et al. Performance difference between best models with neg > 1 and neg = 1. Performance difference between best models with cds = 0.75 and cds = 1.",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td>Google</td><td>MSR</td></tr><tr><td/><td colspan=\"2\">Similarity Relatedness</td><td>MEN</td><td>M. Turk</td><td colspan=\"2\">Rare Words SimLex</td><td>Mul</td><td>Mul</td></tr><tr><td>PPMI SVD</td><td>+0.6% -1.7%</td><td>+4.9% -2.2%</td><td>+1.3% -1.9%</td><td>+1.0% -4.6%</td><td>+2.2% -3.4%</td><td>+0.8% -3.5%</td><td colspan=\"2\">-6.2% -13.9% -14.9% -9.2%</td></tr><tr><td>SGNS GloVe</td><td>+1.5% -</td><td>+2.9% -</td><td>+2.3% -</td><td>+0.5% -</td><td>+1.5% -</td><td>+1.1% -</td><td colspan=\"2\">+3.3% +2.1% --</td></tr><tr><td colspan=\"3\">WordSim (c) Method Similarity Relatedness WordSim</td><td colspan=\"5\">Bruni et al. Radinsky et al. Luong et al. Hill et al. Google MEN M. Turk Rare Words SimLex Mul</td><td>MSR Mul</td></tr><tr><td>PPMI SVD SGNS GloVe</td><td>+1.3% +0.4% +0.4% -</td><td>+2.8% -0.2% +1.4% -</td><td>0.0% +0.1% 0.0% -</td><td>+2.1% +1.1% +0.1% -</td><td>+3.5% +0.4% 0.0% -</td><td>+2.9% -0.3% +0.2% -</td><td colspan=\"2\">+2.7% +9.2% +1.4% +2.2% 0.0% +0.6% --</td></tr><tr><td colspan=\"3\">WordSim (d) Method Similarity Relatedness WordSim</td><td colspan=\"6\">Bruni et al. Radinsky et al. Luong et al. Hill et al. Google MSR MEN M. Turk Rare Words SimLex Mul Mul</td></tr><tr><td>PPMI</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SVD SGNS GloVe</td><td>-0.6% +1.4% +2.3%</td><td>-0.2% +2.2% +4.7%</td><td>-0.4% +1.2% +3.0%</td><td>-2.1% +1.1% -0.1%</td><td>-0.7% -0.3% -0.7%</td><td>+0.7% -2.3% -2.6%</td><td colspan=\"2\">-1.8% -3.4% -1.0% -7.5% +3.3% -8.9%</td></tr></table>"
}
}
}
}