{
"paper_id": "Q18-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:10:52.022412Z"
},
"title": "Linear Algebraic Structure of Word Senses, with Applications to Polysemy",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Princeton University",
"location": {
"addrLine": "35 Olden St",
"postCode": "08540",
"settlement": "Princeton",
"region": "NJ"
}
},
"email": "[email protected]"
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Princeton University",
"location": {
"addrLine": "35 Olden St",
"postCode": "08540",
"settlement": "Princeton",
"region": "NJ"
}
},
"email": "[email protected]"
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Princeton University",
"location": {
"addrLine": "35 Olden St",
"postCode": "08540",
"settlement": "Princeton",
"region": "NJ"
}
},
"email": "[email protected]"
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Princeton University",
"location": {
"addrLine": "35 Olden St",
"postCode": "08540",
"settlement": "Princeton",
"region": "NJ"
}
},
"email": "[email protected]"
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Princeton University",
"location": {
"addrLine": "35 Olden St",
"postCode": "08540",
"settlement": "Princeton",
"region": "NJ"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 \"discourse atoms\" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.",
"pdf_parse": {
"paper_id": "Q18-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 \"discourse atoms\" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings are constructed using Firth's hypothesis that a word's sense is captured by the distribution of other words around it (Firth, 1957) . Classical vector space models (see the survey by Turney and Pantel (2010)) use simple linear algebra on the matrix of word-word co-occurrence counts, whereas recent neural network and energy-based models such as word2vec use an objective that involves a nonconvex (thus, also nonlinear) function of the word co-occurrences (Bengio et al., 2003; Mikolov et al., 2013a; Mikolov et al., 2013b) . This nonlinearity makes it hard to discern how these modern embeddings capture the different senses of a polysemous word. The monolithic view of embeddings, with the internal information extracted only via inner product, is felt to fail in capturing word senses (Griffiths et al., 2007; Reisinger and Mooney, 2010; Iacobacci et al., 2015) . Researchers have instead sought to capture polysemy using more complicated representations, e.g., by inducing separate embeddings for each sense (Murphy et al., 2012; Huang et al., 2012) . These embeddingper-sense representations grow naturally out of classic Word Sense Induction or WSI (Yarowsky, 1995; Schutze, 1998; Reisinger and Mooney, 2010; Di Marco and Navigli, 2013) techniques that perform clustering on neighboring words.",
"cite_spans": [
{
"start": 134,
"end": 147,
"text": "(Firth, 1957)",
"ref_id": "BIBREF12"
},
{
"start": 473,
"end": 494,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF3"
},
{
"start": 495,
"end": 517,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF19"
},
{
"start": 518,
"end": 540,
"text": "Mikolov et al., 2013b)",
"ref_id": "BIBREF20"
},
{
"start": 805,
"end": 829,
"text": "(Griffiths et al., 2007;",
"ref_id": "BIBREF14"
},
{
"start": 830,
"end": 857,
"text": "Reisinger and Mooney, 2010;",
"ref_id": "BIBREF28"
},
{
"start": 858,
"end": 881,
"text": "Iacobacci et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 1029,
"end": 1050,
"text": "(Murphy et al., 2012;",
"ref_id": "BIBREF23"
},
{
"start": 1051,
"end": 1070,
"text": "Huang et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 1172,
"end": 1188,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF34"
},
{
"start": 1189,
"end": 1203,
"text": "Schutze, 1998;",
"ref_id": "BIBREF30"
},
{
"start": 1204,
"end": 1231,
"text": "Reisinger and Mooney, 2010;",
"ref_id": "BIBREF28"
},
{
"start": 1232,
"end": 1259,
"text": "Di Marco and Navigli, 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current paper goes beyond this monolithic view, by describing how multiple senses of a word actually reside in linear superposition within the standard word embeddings (e.g., word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) ). By this we mean the following: consider a polysemous word, say tie, which can refer to an article of clothing, or a drawn match, or a physical act. Let's take the usual viewpoint that tie is a single token that represents monosemous words tie1, tie2, .... The theory and experiments in this paper strongly suggest that word embeddings computed using modern techniques such as GloVe and word2vec satisfy:",
"cite_spans": [
{
"start": 188,
"end": 211,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF19"
},
{
"start": 222,
"end": 247,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "v tie \u2248 \u03b1 1 v tie1 + \u03b1 2 v tie2 + \u03b1 3 v tie3 + \u2022 \u2022 \u2022 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where coefficients \u03b1 i 's are nonnegative and v tie1 , v tie2 , etc., are the hypothetical embeddings of the different senses-those that would have been induced in the thought experiment where all occurrences of the different senses were hand-labeled in the corpus. This Linearity Assertion, whereby linear structure appears out of a highly nonlinear embedding technique, is explained theoretically in Section 2, and then empirically tested in a couple of ways in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 3 uses the linearity assertion to show how to do WSI via sparse coding, which can be seen as a linear algebraic analog of the classic clusteringbased approaches, albeit with overlapping clusters. On standard testbeds it is competitive with earlier embedding-for-each-sense approaches (Section 6). A novelty of our WSI method is that it automatically links different senses of different words via our atoms of discourse (Section 3). This can be seen as an answer to the suggestion in (Reisinger and Mooney, 2010) to enhance one-embedding-persense methods so that they can automatically link together senses for different words, e.g., recognize that the \"article of clothing\" sense of tie is connected to shoe, jacket, etc.",
"cite_spans": [
{
"start": 491,
"end": 519,
"text": "(Reisinger and Mooney, 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is inspired by the solution of word analogies via linear algebraic methods (Mikolov et al., 2013b) , and use of sparse coding on word embeddings to get useful representations for many NLP tasks (Faruqui et al., 2015) . Our theory builds conceptually upon the random walk on discourses model of Arora et al. (2016) , although we make a small but important change to explain empirical findings regarding polysemy. Our WSI procedure applies (with minor variation in performance) to canonical embeddings such as word2vec and GloVe as well as the older vector space methods such as PMI (Church and Hanks, 1990 ). This is not surprising since these embeddings are known to be interrelated (Levy and Goldberg, 2014; Arora et al., 2016) .",
"cite_spans": [
{
"start": 86,
"end": 109,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF20"
},
{
"start": 205,
"end": 227,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 305,
"end": 324,
"text": "Arora et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 592,
"end": 615,
"text": "(Church and Hanks, 1990",
"ref_id": "BIBREF7"
},
{
"start": 694,
"end": 719,
"text": "(Levy and Goldberg, 2014;",
"ref_id": "BIBREF17"
},
{
"start": 720,
"end": 739,
"text": "Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since word embeddings are solutions to nonconvex optimization problems, at first sight it appears hopeless to reason about their finer structure. But it becomes possible to do so using a generative model for language (Arora et al., 2016 ) -a dynamic versions by the log-linear topic model of (Mnih and Hinton, 2007 )-which we now recall. It posits that at every point in the corpus there is a micro-topic (\"what is being talked about\") called discourse that is drawn from the continuum of unit vectors in \u211c d . The parameters of the model include a vector",
"cite_spans": [
{
"start": 217,
"end": 236,
"text": "(Arora et al., 2016",
"ref_id": "BIBREF0"
},
{
"start": 292,
"end": 314,
"text": "(Mnih and Hinton, 2007",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Justification for Linearity Assertion",
"sec_num": "2"
},
{
"text": "v w \u2208 \u211c d for each word w. Each discourse c defines a distribution over words Pr[w | c] \u221d exp(c \u2022 v w ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification for Linearity Assertion",
"sec_num": "2"
},
{
"text": "The model assumes that the corpus is generated by the slow geometric random walk of c over the unit sphere in \u211c d : when the walk is at c, a few words are emitted by i.i.d. samples from the distribution (2), which, due to its log-linear form, strongly favors words close to c in cosine similarity. Estimates for learning parameters v w using MLE and moment methods correspond to standard embedding methods such as GloVe and word2vec (see the original paper).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification for Linearity Assertion",
"sec_num": "2"
},
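{
"text": "To make the sampling step concrete, here is a minimal sketch (ours, not the authors' code) of emitting one window of words from a discourse vector c under the log-linear model Pr[w | c] proportional to exp(c . v_w); the toy inputs word_vecs, vocab and the window size are illustrative assumptions.\n\nimport numpy as np\n\ndef emit_window(c, word_vecs, vocab, n=5, rng=None):\n    # Pr[w | c] proportional to exp(c . v_w): a softmax over the vocabulary.\n    rng = np.random.default_rng() if rng is None else rng\n    logits = word_vecs @ c\n    probs = np.exp(logits - logits.max())\n    probs /= probs.sum()\n    # Emit n i.i.d. samples from this distribution (one window of the corpus).\n    return [vocab[i] for i in rng.choice(len(vocab), size=n, p=probs)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification for Linearity Assertion",
"sec_num": "2"
},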
{
"text": "To study how word embeddings capture word senses, we'll need to understand the relationship between a word's embedding and those of words it co-occurs with. In the next subsection, we propose a slight modification to the above model and shows how to infer the embedding of a word from the embeddings of other words that co-occur with it. This immediately leads to the Linearity Assertion, as shown in Section 2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification for Linearity Assertion",
"sec_num": "2"
},
{
"text": "As alluded to before, we modify the random walk model of (Arora et al., 2016) to the Gaussian random walk model. Again, the parameters of the model include a vector v w \u2208 \u211c d for each word w. The model assumes the corpus is generated as follows. First, a discourse vector c is drawn from a Gaussian with mean 0 and covariance \u03a3. Then, a window of n words w 1 , w 2 , . . . , w n are generated from c by:",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr[w 1 , w 2 , . . . , w n | c] = n \u220f i=1 Pr[w i | c],",
"eq_num": "(2)"
}
],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr[w i | c] = exp(c \u2022 v w i )/Z c ,",
"eq_num": "(3)"
}
],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "where Z c = \u2211 w exp(\u27e8v w , c\u27e9) is the partition function. We also assume the partition function concentrates in the sense that Z c \u2248 Z exp(\u2225c\u2225 2 ) for some constant Z. This is a direct extension of (Arora et al., 2016, Lemma 2.1) to discourse vectors with norm other than 1, and causes the additional term exp(\u2225c\u2225 2 ). 1 Theorem 1. Assume the above generative model, and let s denote the random variable of a window of n words. Then, there is a linear transformation A such that",
"cite_spans": [
{
"start": 198,
"end": 229,
"text": "(Arora et al., 2016, Lemma 2.1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "v w \u2248 A E [ 1 n \u2211 w i \u2208s v w i | w \u2208 s ] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "Proof. Let c s be the discourse vector for the whole window s. By the law of total expectation, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E [c s | w \u2208 s] =E [E[c s | s = w 1 . . . w j\u22121 ww j+1 . . . w n ] | w \u2208 s] .",
"eq_num": "(4)"
}
],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "We evaluate the two sides of the equation. First, by Bayes' rule and the assumptions on the distribution of c and the partition function, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "p(c|w) \u221d p(w|c)p(c) \u221d 1 Z c exp(\u27e8v w , c\u27e9) \u2022 exp ( \u2212 1 2 c \u22a4 \u03a3 \u22121 c ) \u2248 1 Z exp ( \u27e8v w , c\u27e9 \u2212 c \u22a4 ( 1 2 \u03a3 \u22121 + I ) c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": ") .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "So c | w is a Gaussian distribution with mean E [c | w] \u2248 (\u03a3 \u22121 + 2I) \u22121 v w .",
"eq_num": "(5)"
}
],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "Next, we compute E[c|w 1 , . . . , w n ]. Again using Bayes' rule and the assumptions on the distribution of c and the partition function,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "p(c|w 1 , . . . , w n ) \u221d p(w 1 , . . . , w n |c)p(c) \u221d p(c) n \u220f i=1 p(w i |c) \u2248 1 Z n exp ( n \u2211 i=1 v \u22a4 w i c \u2212 c \u22a4 ( 1 2 \u03a3 \u22121 + nI ) c ) . So c|w 1 . . . w n is a Gaussian distribution with mean E[c|w 1 , . . . , w n ] \u2248 ( \u03a3 \u22121 + 2nI ) \u22121 n \u2211 i=1 v w i . (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "Now plugging in equation 5and 6into equation (4), we conclude that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "(\u03a3 \u22121 + 2I) \u22121 v w \u2248 (\u03a3 \u22121 + 2nI) \u22121 E [ n \u2211 i=1 v wi | w \u2208 s ] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "is to assume vw are random vectors, and then Zc can be shown to concentrate around exp(\u2225c\u2225 2 ). Such a condition enforces the word vectors to be isotropic to some extent, and makes the covariance of the discourse identifiable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "Re-arranging the equation completes the proof with A = n(\u03a3 \u22121 + 2I)(\u03a3 \u22121 + 2nI) \u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
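{
"text": "As a quick numerical check of this closed form for A (and of the diagonal form discussed in the interpretation note below), the following sketch uses illustrative values of n and of the eigenvalues of Sigma; it is not taken from the paper.\n\nimport numpy as np\n\nn = 10                                 # window size (illustrative)\nlam = np.array([2.0, 0.5, 0.1, 0.01])  # eigenvalues of Sigma (illustrative)\nSigma_inv = np.diag(1.0 / lam)\nI = np.eye(len(lam))\n\n# A = n (Sigma^{-1} + 2I)(Sigma^{-1} + 2nI)^{-1}, from the proof of Theorem 1.\nA = n * (Sigma_inv + 2 * I) @ np.linalg.inv(Sigma_inv + 2 * n * I)\n\n# For diagonal Sigma, A is diagonal with entries (n + 2n*lam) / (1 + 2n*lam).\nassert np.allclose(np.diag(A), (n + 2 * n * lam) / (1 + 2 * n * lam))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},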
{
"text": "Note: Interpretation. Theorem 1 shows that there exists a linear relationship between the vector of a word and the vectors of the words in its contexts. Consider the following thought experiment. First, choose a word w. Then, for each window s containing w, take the average of the vectors of the words in s and denote it as v s . Now, take the average of v s for all the windows s containing w, and denote the average as u. Theorem 1 says that u can be mapped to the word vector v w by a linear transformation that does not depend on w. This linear structure may also have connections to some other phenomena related to linearity, e.g., Gittens et al. (2017) and Tian et al. (2017) . Exploring such connections is left for future work. The linear transformation is closely related to \u03a3, which describes the distribution of the discourses. If we choose a coordinate system such that \u03a3 is a diagonal matrix with diagonal entries \u03bb i , then A will also be a diagonal matrix with diagonal entries (n + 2n\u03bb i )/(1 + 2n\u03bb i ). This is smoothing the spectrum and essentially shrinks the directions corresponding to large \u03bb i relatively to the other directions. Such directions are for common discourses and thus common words. Empirically, we indeed observe that A shrinks the directions of common words. For example, its last right singular vector has, as nearest neighbors, the vectors for words like \"with\", \"as\", and \"the.\" Note that empirically, A is not a diagonal matrix since the word vectors are not in the coordinate system mentioned. Note: Implications for GloVe and word2vec. Repeating the calculation in Arora et al. (2016) for our new generative model, we can show that the solutions to GloVe and word2vec training objectives solve for the following vectors:",
"cite_spans": [
{
"start": 638,
"end": 659,
"text": "Gittens et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 664,
"end": 682,
"text": "Tian et al. (2017)",
"ref_id": "BIBREF31"
},
{
"start": 1609,
"end": 1628,
"text": "Arora et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "v w = ( \u03a3 \u22121 + 4I ) \u22121/2 v w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "Since these other embeddings are the same as v w 's up to linear transformation, Theorem 1 (and the Linearity Assertion) still holds for them. Empirically, we find that (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "\u03a3 \u22121 + 4I ) \u22121/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "is close to a scaled identity matrix (since \u2225\u03a3 \u22121 \u2225 2 is small), sov w 's can be used as a surrogate of v w 's. Experimental note: Using better sentence embeddings, SIF embeddings. Theorem 1 implicitly uses the average of the neighboring word vectors as an estimate (MLE) for the discourse vector. This estimate is of course also a simple sentence embedding, very popular in empirical NLP work and also reminiscent of word2vec's training objective. In practice, this naive sentence embedding can be improved by taking a weighted combination (often tf-idf) of adjacent words. The paper (Arora et al., 2017 ) uses a simple twist to the generative model in (Arora et al., 2016) to provide a better estimate of the discourse c called SIF embedding, which is better for downstream tasks and surprisingly competitive with sophisticated LSTM-based sentence embeddings. It is a weighted average of word embeddings in the window, with smaller weights for more frequent words (reminiscent of tf-idf). This weighted average is the MLE estimate of c if above generative model is changed to:",
"cite_spans": [
{
"start": 585,
"end": 604,
"text": "(Arora et al., 2017",
"ref_id": "BIBREF1"
},
{
"start": 654,
"end": 674,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "p(w|c) = \u03b1p(w) + (1 \u2212 \u03b1) exp(v w \u2022 c) Z c ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "where p(w) is the overall probability of word w in the corpus and \u03b1 > 0 is a constant (hyperparameter).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
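{
"text": "For concreteness, a minimal sketch of computing such a SIF-style weighted average for a single window; the weight a / (a + p(w)) is the one proposed by Arora et al. (2017), the inputs word_vecs and word_prob and the hyperparameter a are assumptions of this illustration, and the common-component removal step of that paper is omitted.\n\nimport numpy as np\n\ndef sif_embedding(window, word_vecs, word_prob, a=1e-3):\n    # Weighted average of word vectors that down-weights frequent words:\n    # weight(w) = a / (a + p(w)), following Arora et al. (2017).\n    vecs = np.array([word_vecs[w] for w in window])\n    weights = np.array([a / (a + word_prob[w]) for w in window])\n    return weights @ vecs / len(window)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},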
{
"text": "The theory in the current paper works with SIF embeddings as an estimate of the discourse c; in other words, in Theorem 1 we replace the average word vector with the SIF vector of that window. Empirically we find that it leads to similar results in testing our theory (Section 4) and better results in downstream WSI applications (Section 6). Therefore, SIF embeddings are adopted in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Walk Model",
"sec_num": "2.1"
},
{
"text": "Now we use Theorem 1 to show how the Linearity Assertion follows. Recall the thought experiment considered there. Suppose word w has two distinct senses s 1 and s 2 . Compute a word embedding v w for w. Then hand-replace each occurrence of a sense of w by one of the new tokens s 1 , s 2 depending upon which one is being used. Next, train separate embeddings for s 1 , s 2 while keeping the other embeddings fixed. (NB: the classic clustering-based sense induction (Schutze, 1998; Reisinger and Mooney, 2010) can be seen as an approximation to this thought experiment.)",
"cite_spans": [
{
"start": 466,
"end": 481,
"text": "(Schutze, 1998;",
"ref_id": "BIBREF30"
},
{
"start": 482,
"end": 509,
"text": "Reisinger and Mooney, 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Linearity Assertion",
"sec_num": "2.2"
},
{
"text": "Theorem 2 (Main). Assuming the model of Section 2.1, embeddings in the thought experiment above will satisfy \u2225v w \u2212v w \u2225 2 \u2192 0 as the corpus length tends to infinity, wherev w \u2248 \u03b1v s 1 + \u03b2v s 2 for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Linearity Assertion",
"sec_num": "2.2"
},
{
"text": "\u03b1 = f 1 f 1 + f 2 , \u03b2 = f 2 f 1 + f 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Linearity Assertion",
"sec_num": "2.2"
},
{
"text": "where f 1 and f 2 are the numbers of occurrences of s 1 , s 2 in the corpus, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Linearity Assertion",
"sec_num": "2.2"
},
{
"text": "Proof. Suppose we pick a random sample of N windows containing w in the corpus. For each window, compute the average of the word vectors and then apply the linear transformation in Theorem 1. The transformed vectors are i.i.d. estimates for v w , but with high probability about f 1 /(f 1 + f 2 ) fraction of the occurrences used sense s 1 and f 2 /(f 1 + f 2 ) used sense s 2 , and the corresponding estimates for those two subpopulations converge to v s 1 and v s 2 respectively. Thus by construction, the estimate for v w is a linear combination of those for v s 1 and v s 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof of Linearity Assertion",
"sec_num": "2.2"
},
{
"text": "Note. Theorem 1 (and hence the Linearity Assertion) holds already for the original model in Arora et al. 2016but with A = I, where I is the identity transformation. In practice, we find inducing the word vector requires a non-identity A, which is the reason for the modified model of Section 2.1. This also helps to address a nagging issue hiding in older clustering-based approaches such as Reisinger and Mooney (2010) and Huang et al. (2012) , which identified senses of a polysemous word by clustering the sentences that contain it. One imagines a good representation of the sense of an individual cluster is simply the cluster center. This turns out to be false -the closest words to the cluster center sometimes are not meaningful for the sense that is being captured; see Table 1 . Indeed, the authors of Reisinger and Mooney (2010) seem aware of this because they mention \"We do not assume that clusters correspond to traditional word senses. Rather, we only rely on clusters to capture meaningful variation in word usage.\" We find that applying A to cluster centers makes them meaningful again. See also Table 1 .",
"cite_spans": [
{
"start": 392,
"end": 419,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF28"
},
{
"start": 424,
"end": 443,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 778,
"end": 785,
"text": "Table 1",
"ref_id": null
},
{
"start": 1112,
"end": 1119,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proof of Linearity Assertion",
"sec_num": "2.2"
},
{
"text": "Now we consider how to do WSI using only word embeddings and the Linearity Assertion. Our approach is fully unsupervised, and tries to induce senses for all words in one go, together with a vector representation for each sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "center 1 before and provide providing a after providing provide opportunities provision center 2 before and a to the after access accessible allowing provide Table 1 : Four nearest words for some cluster centers that were computed for the word \"access\" by applying 5-means on the estimated discourse vectors (see Section 2.1) of 1000 random windows from Wikipedia containing \"access\". After applying the linear transformation of Theorem 1 to the center, the nearest words become meaningful.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "Given embeddings for all words, it seems unclear at first sight how to pin down the senses of tie using only (1) since v tie can be expressed in infinitely many ways as such a combination, and this is true even if \u03b1 i 's were known (and they aren't).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "To pin down the senses we will need to interrelate the senses of different words, for example, relate the \"article of clothing\" sense tie1 with shoe, jacket, etc. To do so we rely on the generative model of Section 2.1 according to which unit vector in the embedding space corresponds to a micro-topic or discourse. Empirically, discourses c and c \u2032 tend to look similar to humans (in terms of nearby words) if their inner product is larger than 0.85, and quite different if the inner product is smaller than 0.5. So in the discussion below, a discourse should really be thought of as a small region rather than a point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "One imagines that the corpus has a \"clothing\" discourse that has a high probability of outputting the tie1 sense, and also of outputting related words such as shoe, jacket, etc. By (2) the probability of being output by a discourse is determined by the inner product, so one expects that the vector for \"clothing\" discourse has a high inner product with all of shoe, jacket, tie1, etc., and thus can stand as surrogate for v tie1 in (1)! Thus it may be sufficient to consider the following global optimization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "Given word vectors {v w } in \u211c d and two integers k, m with k < m, find a set of unit vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "A 1 , A 2 , . . . , A m such that v w = m \u2211 j=1 \u03b1 w,j A j + \u03b7 w (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "where at most k of the coefficients \u03b1 w,1 , . . . , \u03b1 w,m are nonzero, and \u03b7 w 's are error vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "Here k is the sparsity parameter, and m is the number of atoms, and the optimization minimizes the norms of \u03b7 w 's (the \u2113 2 -reconstruction error):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 w v w \u2212 m \u2211 j=1 \u03b1 w,j A j 2 2 .",
"eq_num": "(8)"
}
],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "Both A j 's and \u03b1 w,j 's are unknowns, and the optimization is nonconvex. This is just sparse coding, useful in neuroscience (Olshausen and Field, 1997) and also in image processing, computer vision, etc. This optimization is a surrogate for the desired expansion of v tie as in (1), because one can hope that among A 1 , . . . , A m there will be directions corresponding to clothing, sports matches, etc., that will have high inner products with tie1, tie2, etc., respectively. Furthermore, restricting m to be much smaller than the number of words ensures that the typical A i needs to be reused to express multiple words.",
"cite_spans": [
{
"start": 125,
"end": 152,
"text": "(Olshausen and Field, 1997)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
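{
"text": "The paper solves (8) with the k-SVD algorithm; as a rough stand-in, off-the-shelf dictionary learning with OMP-based sparse coding (here scikit-learn's MiniBatchDictionaryLearning, with m and k as above) yields the same kind of decomposition. The variable embeddings is assumed to be a matrix with one word vector per row; this is a sketch, not the authors' implementation.\n\nimport numpy as np\nfrom sklearn.decomposition import MiniBatchDictionaryLearning\n\ndef atoms_of_discourse(embeddings, m=2000, k=5):\n    # Learn m atoms A_j and sparse codes alpha with at most k nonzeros per word,\n    # approximately minimizing sum_w || v_w - sum_j alpha_{w,j} A_j ||_2^2.\n    dl = MiniBatchDictionaryLearning(n_components=m,\n                                     transform_algorithm='omp',\n                                     transform_n_nonzero_coefs=k,\n                                     random_state=0)\n    codes = dl.fit_transform(embeddings)  # (vocab_size, m) coefficients alpha\n    atoms = dl.components_                # (m, d) dictionary of atoms A_j\n    return atoms, codes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},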
{
"text": "We refer to A i 's, discovered by this procedure, as atoms of discourse, since experimentation suggests that the actual discourse in a typical place in text (namely, vector c in (2)) is a linear combination of a small number, around 3-4, of such atoms. Implications of this for text analysis are left for future work. Relationship to Clustering. Sparse coding is solved using alternating minimization to find the A i 's that minimize (8). This objective function reveals sparse coding to be a linear algebraic analogue of overlapping clustering, whereby the A i 's act as cluster centers and each v w is assigned in a soft way to at most k of them (using the coefficients \u03b1 w,j , of which at most k are nonzero). In fact this clustering viewpoint is also the basis of the alternating minimization algorithm. In the special case when k = 1, each v w has to be assigned to a single cluster, which is the familiar geometric clustering with squared \u2113 2 distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "Similar overlapping clustering in a traditional graph-theoretic setup -clustering while simultaneously cross-relating the senses of different wordsseems more difficult but worth exploring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards WSI: Atoms of Discourse",
"sec_num": "3"
},
{
"text": "Now we test the prediction of the Gaussian walk model suggesting a linear method to induce embed-#paragraphs 250k 500k 750k 1 million cos similarity 0.94 0.95 0.96 0.96 Table 2 : Fitting the GloVe word vectors with average discourse vectors using a linear transformation. The first row is the number of paragraphs used to compute the discourse vectors, and the second row is the average cosine similarities between the fitted vectors and the GloVe vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test of Gaussian Walk Model: Induced Embeddings",
"sec_num": "4.1"
},
{
"text": "dings from the context of a word. Start with the GloVe embeddings; let v w denote the embedding for w. Randomly sample many paragraphs from Wikipedia, and for each word w \u2032 and each occurrence of w \u2032 compute the SIF embedding of text in the window of 20 words centered around w \u2032 . Average the SIF embeddings for all occurrences of w \u2032 to obtain vector u w \u2032 . The Gaussian walk model says that there is a linear transformation that maps u w \u2032 to v w \u2032 , so solve the regression:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test of Gaussian Walk Model: Induced Embeddings",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmin A \u2211 w \u2225Au w \u2212 v w \u2225 2 2 .",
"eq_num": "(9)"
}
],
"section": "Test of Gaussian Walk Model: Induced Embeddings",
"sec_num": "4.1"
},
{
"text": "We call the vectors Au w the induced embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test of Gaussian Walk Model: Induced Embeddings",
"sec_num": "4.1"
},
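{
"text": "A minimal sketch of the regression (9), assuming U stacks the averaged SIF context vectors u_w row-wise and V stacks the corresponding GloVe vectors v_w; the least-squares solve below is a standard one, not the authors' exact pipeline.\n\nimport numpy as np\n\ndef fit_induction_matrix(U, V):\n    # Solve argmin_A sum_w || A u_w - v_w ||_2^2.\n    # With rows u_w in U and v_w in V this is the least-squares problem U X = V\n    # for X = A^T, solved here by np.linalg.lstsq.\n    X, *_ = np.linalg.lstsq(U, V, rcond=None)\n    return X.T\n\ndef induce_embeddings(A, U):\n    # Row w of the result is the induced embedding A u_w.\n    return U @ A.T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test of Gaussian Walk Model: Induced Embeddings",
"sec_num": "4.1"
},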
{
"text": "We can test this method of inducing embeddings by holding out 1/3 words randomly, doing the regression (9) on the rest, and computing the cosine similarities between Au w and v w on the heldout set of words. Table 2 shows that the average cosine similarity between the induced embeddings and the GloVe vectors is large. By contrast the average similarity between the average discourse vectors and the GloVe vectors is much smaller (about 0.58), illustrating the need for the linear transformation. Similar results are observed for the word2vec and SN vectors (Arora et al., 2016) .",
"cite_spans": [
{
"start": 559,
"end": 579,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test of Gaussian Walk Model: Induced Embeddings",
"sec_num": "4.1"
},
{
"text": "We do two empirical tests of the Linearity Assertion (Theorem 2). Test 1. The first test involves the classic artificial polysemous words (also called pseudowords). First, pre-train a set W 1 of word vectors on Wikipedia with existing embedding methods. Then, randomly pick m pairs of non-repeated words, and for each pair, replace each occurrence of either of the two words m pairs 10 10 3 3 Table 3 : The average relative errors and cosine similarities between the vectors of pseudowords and those predicted by Theorem 2. m pairs of words are randomly selected and for each pair, all occurrences of the two words in the corpus is replaced by a pseudoword. Then train the vectors for the pseudowords on the new corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 400,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},
{
"text": "with a pseudoword. Third, train a set W 2 of vectors on the new corpus, while holding fixed the vectors of words that were not involved in the pseudowords. Construction has ensured that each pseudoword has two distinct \"senses\", and we also have in W 1 the \"ground truth\" vectors for those senses. 2 Theorem 2 implies that the embedding of a pseudoword is a linear combination of the sense vectors, so we can compare this predicted embedding to the one learned in W 2 . 3 Suppose the trained vector for a pseudoword w is u w and the predicted vector is v w , then the comparison criterion is the average relative error",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},
{
"text": "1 |S| \u2211 w\u2208S \u2225uw\u2212vw\u2225 2 2 \u2225vw\u2225 2 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},
{
"text": "where S is the set of all the pseudowords. We also report the average cosine similarity between v w 's and u w 's. Table 3 shows the results for the GloVe and SN (Arora et al., 2016) vectors, averaged over 5 runs. When m is small, the error is small and the cosine similarity is as large as 0.9. Even if m = 3 \u2022 10 4 2 Note that this discussion assumes that the set of pseudowords is small, so that a typical neighborhood of a pseudoword does not consist of other pseudowords. Otherwise the ground truth vectors in W1 become a bad approximation to the sense vectors.",
"cite_spans": [
{
"start": 162,
"end": 182,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},
{
"text": "3 Here W2 is trained while fixing the vectors of words not involved in pseudowords to be their pre-trained vectors in W1. We can also train all the vectors in W2 from random initialization. Such W2 will not be aligned with W1. Then we can learn a linear transformation from W2 to W1 using the vectors for the words not involved in pseudowords, apply it on the vectors for the pseudowords, and compare the transformed vectors to the predicted ones. This is tested on word2vec, resulting in relative errors between 20% and 32%, and cosine similarities between 0.86 and 0.92. These results again support our analysis. vector type GloVe skip-gram SN cosine 0.72 0.73 0.76 Table 4 : The average cosine of the angles between the vectors of words and the span of vector representations of its senses. The words tested are those in the WSI task of SemEval 2010.",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},
{
"text": "(i.e., about 90% of the words in the vocabulary are replaced by pseudowords), the cosine similarity remains above 0.7, which is significant in the 300 dimensional space. This provides positive support for our analysis. Test 2. The second test is a proxy for what would be a complete (but laborious) test of the Linearity Assertion: replicating the thought experiment while hand-labeling sense usage for many words in a corpus. The simpler proxy is as follows. For each word w, WordNet (Fellbaum, 1998) lists its various senses by providing definition and example sentences for each sense. This is enough text (roughly a paragraph's worth) for our theory to allow us to represent it by a vector -specifically, apply the SIF sentence embedding followed by the linear transformation learned as in Section 4.1. The text embedding for sense s should approximate the ground truth vector v s for it. Then the Linearity Assertion predicts that embedding v w lies close to the subspace spanned by the sense vectors. (Note that this is a nontrivial event: in 300 dimensions a random vector will be quite far from the subspace spanned by some 3 other random vectors.) Table 4 checks this prediction using the polysemous words appearing in the WSI task of SemEval 2010. We tested three standard word embedding methods: GloVe, the skipgram variant of word2vec, and SN (Arora et al., 2016) . The results show that the word vectors are quite close to the subspace spanned by the senses.",
"cite_spans": [
{
"start": 485,
"end": 501,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF11"
},
{
"start": 1355,
"end": 1375,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1157,
"end": 1164,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},
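{
"text": "Test 2 needs the cosine of the angle between a word vector and the span of its sense vectors; a small sketch of that computation, where sense_vecs is assumed to be a matrix with one WordNet-derived sense embedding per row.\n\nimport numpy as np\n\ndef cos_to_span(v, sense_vecs):\n    # Project v onto span(sense_vecs) using an orthonormal basis from QR,\n    # then return cos(angle) = ||projection|| / ||v||.\n    Q, _ = np.linalg.qr(sense_vecs.T)  # columns of Q span the sense subspace\n    proj = Q @ (Q.T @ v)\n    return np.linalg.norm(proj) / np.linalg.norm(v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test of Linearity Assertion",
"sec_num": "4.2"
},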
{
"text": "The experiments use 300-dimensional embeddings created using the SN objective in (Arora et al., 2016) and a Wikipedia corpus of 3 billion tokens (Wikimedia, 2012), and the sparse coding is solved by standard k-SVD algorithm (Damnjanovic et al., 2010) . Experimentation showed that the best sparsity parameter k (i.e., the maximum number of allowed senses per word) is 5, and the number of atoms m is about 2000. For the number of senses k, we tried plausible alternatives (based upon suggestions of many colleagues) that allow k to vary for different words, for example to let k be correlated with the word frequency. But a fixed choice of k = 5 seems to produce just as good results. To understand why, realize that this method retains no information about the corpus except for the low dimensional word embeddings. Since the sparse coding tends to express a word using fairly different atoms, examining 7shows that \u2211 j \u03b1 2 w,j is bounded by approximately \u2225v w \u2225 2 2 . So if too many \u03b1 w,j 's are allowed to be nonzero, then some must necessarily have small coefficients, which makes the corresponding components indistinguishable from noise. In other words, raising k often picks not only atoms corresponding to additional senses, but also many that don't.",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 224,
"end": 250,
"text": "(Damnjanovic et al., 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Atoms of Discourse",
"sec_num": "5"
},
{
"text": "The best number of atoms m was found to be around 2000. This was estimated by re-running the sparse coding algorithm multiple times with different random initializations, whereupon substantial overlap was found between the two bases: a large fraction of vectors in one basis were found to have a very close vector in the other. Thus combining the bases while merging duplicates yielded a basis of about the same size. Around 100 atoms are used by a large number of words or have no close-by words. They appear semantically meaningless and are excluded by checking for this condition. 4 The content of each atom can be discerned by looking at the nearby words in cosine similarity. Some examples are shown in Table 5 . Each word is represented using at most five atoms, which usually capture distinct senses (with some noise/mistakes). The senses recovered for tie and spring are shown in Table 6 . Similar results can be obtained by using other word embeddings like word2vec and GloVe.",
"cite_spans": [
{
"start": 584,
"end": 585,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 708,
"end": 715,
"text": "Table 5",
"ref_id": "TABREF2"
},
{
"start": 888,
"end": 895,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments with Atoms of Discourse",
"sec_num": "5"
},
{
"text": "We also observe sparse coding procedures assign nonnegative values to most coefficients \u03b1 w,j 's even if they are left unrestricted. Probably this is because the appearances of a word are best explained by what discourse is being used to generate it, rather than what discourses are not being used. Table 6 : Five discourse atoms linked to the words tie and spring. Each atom is represented by its nearest 6 words. The algorithm often makes a mistake in the last atom (or two), as happened here.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments with Atoms of Discourse",
"sec_num": "5"
},
{
"text": "Relationship to Topic Models. Atoms of discourse may be reminiscent of results from other automated methods for obtaining a thematic understanding of text, such as topic modeling, described in the survey by Blei (2012) . This is not surprising since the model (2) used to compute the embeddings is related to a log-linear topic model by Mnih and Hinton (2007) . However, the discourses here are computed via sparse coding on word embeddings, which can be seen as a linear algebraic alternative, resulting in fairly fine-grained topics. Atoms are also reminiscent of coherent \"word clusters\" detected in the past using Brown clustering, or even sparse coding (Murphy et al., 2012) . The novelty in this paper is a clear interpretation of the sparse coding results as atoms of discourse, as well as its use to capture different word senses.",
"cite_spans": [
{
"start": 207,
"end": 218,
"text": "Blei (2012)",
"ref_id": "BIBREF4"
},
{
"start": 337,
"end": 359,
"text": "Mnih and Hinton (2007)",
"ref_id": "BIBREF21"
},
{
"start": 658,
"end": 679,
"text": "(Murphy et al., 2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Atoms of Discourse",
"sec_num": "5"
},
{
"text": "While the main result of the paper is to reveal the linear algebraic structure of word senses within existing embeddings, it is desirable to verify that this view can yield results competitive with earlier sense embedding approaches. We report some tests be-low. We find that common word embeddings perform similarly with our method; for concreteness we use induced embeddings described in Section 4.1. They are evaluated in three tasks: word sense induction task in SemEval 2010 (Manandhar et al., 2010) , word similarity in context (Huang et al., 2012) , and a new task we called police lineup test. The results are compared to those of existing embedding based approaches reported in related work (Huang et al., 2012; Neelakantan et al., 2014; Mu et al., 2017) .",
"cite_spans": [
{
"start": 480,
"end": 504,
"text": "(Manandhar et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 534,
"end": 554,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 700,
"end": 720,
"text": "(Huang et al., 2012;",
"ref_id": "BIBREF15"
},
{
"start": 721,
"end": 746,
"text": "Neelakantan et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 747,
"end": 763,
"text": "Mu et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Testing WSI in Applications",
"sec_num": "6"
},
{
"text": "In the WSI task in SemEval 2010, the algorithm is given a polysemous word and about 40 pieces of texts, each using it according to a single sense. The algorithm has to cluster the pieces of text so that those with the same sense are in the same cluster. The evaluation criteria are F-score (Artiles et al., 2009) and V-Measure (Rosenberg and Hirschberg, 2007) . The F-score tends to be higher with a smaller number of clusters and the V-Measure tends to be higher with a larger number of clusters, and fair evaluation requires reporting both. Given a word and its example texts, our algorithm uses a Bayesian analysis dictated by our theory to compute a vector u c for each piece of text c and and then applies k-means on these vectors, with the small twist that sense vectors are assigned to nearest centers based on inner products rather than Euclidean distances. Table 7 shows the results. Computing vector u c . For word w we start by computing its expansion in terms of atoms of discourse (see (8) in Section 3). In an ideal world the nonzero coefficients would exactly capture its senses, and each text containing w would match to one of these nonzero coefficients. In the real world such deterministic success is elusive and one must reason using Bayes' rule.",
"cite_spans": [
{
"start": 290,
"end": 312,
"text": "(Artiles et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 327,
"end": 359,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 866,
"end": 873,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "For each atom a, word w and text c there is a joint distribution p(w, a, c) describing the event that atom a is the sense being used when word w was used in text c. We are interested in the posterior distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(a|c, w) \u221d p(a|w)p(a|c)/p(a).",
"eq_num": "(10)"
}
],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "We approximate p(a|w) using Theorem 2, which suggests that the coefficients in the expansion of v w with respect to atoms of discourse scale according to probabilities of usage. (This assertion involves ignoring the low-order terms involving the logarithm in the theorem statement.) Also, by the random walk model, p(a|c) can be approximated by exp(\u27e8v a , v c \u27e9) where v c is the SIF embedding of the context. Finally, since p(a) = E c [p(a|c)], it can be empirically estimated by randomly sampling c. The posterior p(a|c, w) can be seen as a soft decoding of text c to atom a. If texts c 1 , c 2 both contain w, and they were hard decoded to atoms a 1 , a 2 respectively then their similarity would be \u27e8v a 1 , v a 2 \u27e9. With our soft decoding, the similarity can be defined by taking the expectation over the full posterior:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "similarity(c 1 , c 2 ) = E a i \u223cp(a|c i ,w),i\u2208{1,2} \u27e8v a 1 , v a 2 \u27e9,",
"eq_num": "(11)"
}
],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "= \u27e8 \u2211 a 1 p(a 1 |c 1 , w)v a 1 , \u2211 a 2 p(a 2 |c 2 , w)v a 2 \u27e9 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "At a high level this is analogous to the Bayesian polysemy model of Reisinger and Mooney (2010) and Brody and Lapata (2009) , except that they introduced separate embeddings for each sense cluster, while here we are working with structure already existing inside word embeddings.",
"cite_spans": [
{
"start": 68,
"end": 95,
"text": "Reisinger and Mooney (2010)",
"ref_id": "BIBREF28"
},
{
"start": 100,
"end": 123,
"text": "Brody and Lapata (2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "Method V-Measure F-Score (Huang et al., 2012) 10.60 38.05 (Neelakantan et al., 2014) 9.00 47.26 (Mu et al., 2017) , k = 2 7.30 57.14 (Mu et al., 2017) , k = 5",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 58,
"end": 84,
"text": "(Neelakantan et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 96,
"end": 113,
"text": "(Mu et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 133,
"end": 150,
"text": "(Mu et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "14.50 44.07 ours, k = 2 6.1 58.55 ours, k = 3 7.4 55.75 ours, k = 4 9.9 51.85 ours, k = 5 11.5 46.38 Table 7 : Performance of different vectors in the WSI task of SemEval 2010. The parameter k is the number of clusters used in the methods. Rows are divided into two blocks, the first of which shows the results of the competitors, and the second shows those of our algorithm. Best results in each block are in boldface.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "The last equation suggests defining the vector u c for the text c as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u c = \u2211 a p(a|c, w)v a ,",
"eq_num": "(12)"
}
],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "which allows the similarity between two text pieces to be expressed via the inner product of their vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
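{
"text": "A sketch of how the context vector u_c of equation (12) can be assembled from the pieces above; the inputs (per-word atom coefficients alpha, atom vectors, the SIF context vector, and the prior p_a estimated by sampling contexts) are assumptions of this illustration.\n\nimport numpy as np\n\ndef context_vector(word, context_sif, alpha, atom_vecs, p_a):\n    # p(a|w): proportional to the word's coefficients on the atoms (Theorem 2).\n    p_a_given_w = np.maximum(alpha[word], 0.0)\n    # p(a|c): proportional to exp(<v_a, v_c>), from the random walk model.\n    p_a_given_c = np.exp(atom_vecs @ context_sif)\n    # Posterior p(a|c,w) proportional to p(a|w) p(a|c) / p(a), equation (10).\n    post = p_a_given_w * p_a_given_c / p_a\n    post /= post.sum()\n    # u_c = sum_a p(a|c,w) v_a, equation (12).\n    return post @ atom_vecs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},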
{
"text": "Results. The results are reported in Table 7 . Our approach outperforms the results by Huang et al. (2012) and Neelakantan et al. (2014) . When compared to Mu et al. (2017) , for the case with 2 centers, we achieved better V-measure but lower F-score, while for 5 centers, we achieved lower V-measure but better F-score.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF15"
},
{
"start": 111,
"end": 136,
"text": "Neelakantan et al. (2014)",
"ref_id": "BIBREF25"
},
{
"start": 156,
"end": 172,
"text": "Mu et al. (2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "6.1"
},
{
"text": "The dataset consists of around 2000 pairs of words, along with the contexts the words occur in and the ground-truth similarity scores. The evaluation criterion is the correlation between the ground-truth scores and the predicted ones. Our method computes the estimated sense vectors and then the similarity as in Section 6.1. We compare to the baselines that simply use the cosine similarity of the GloVe/skip-gram vectors, and also to the results of several existing sense embedding methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Similarity in Context",
"sec_num": "6.2"
},
{
"text": "Results. Table 8 shows that our result is better than those of the baselines and Mu et al. (2017) , but slightly worse than that of Huang et al. (2012) .",
"cite_spans": [
{
"start": 81,
"end": 97,
"text": "Mu et al. (2017)",
"ref_id": "BIBREF22"
},
{
"start": 132,
"end": 151,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Similarity in Context",
"sec_num": "6.2"
},
{
"text": "Spearman coefficient GloVe 0.573 skip-gram 0.622 (Huang et al., 2012) 0.657 (Neelakantan et al., 2014) 0.567 (Mu et al., 2017) 0.637 ours 0.652 Table 8 : The results for different methods in the task of word similarity in context. The best result is in boldface. Our result is close to the best.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 76,
"end": 102,
"text": "(Neelakantan et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 109,
"end": 126,
"text": "(Mu et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "Note that Huang et al. (2012) retrained the vectors for the senses on the corpus, while our method depends only on senses extracted from the off-the-shelf vectors. After all, our goal is to show word senses already reside within off-the-shelf word vectors.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "Evaluating WSI systems can run into well-known difficulties, as reflected in the changing metrics over the years (Navigli and Vannella, 2013) . Inspired by word-intrusion tests for topic coherence (Chang et al., 2009) , we proposed a new simple test, which has the advantages of being easy to understand, and capable of being administered to humans. The testbed uses 200 polysemous words and their 704 senses according to WordNet. Each sense is represented by 8 related words, which were collected from WordNet and online dictionaries by college students, who were told to identify most relevant other words occurring in the online definitions of this word sense as well as in the accompanying illustrative sentences. These are considered as ground truth representation of the word sense. These 8 words are typically not synonyms. For example, for the tool/weapon sense of axe they were \"handle, harvest, cutting, split, tool, wood, battle, chop.\"",
"cite_spans": [
{
"start": 113,
"end": 141,
"text": "(Navigli and Vannella, 2013)",
"ref_id": "BIBREF24"
},
{
"start": 197,
"end": 217,
"text": "(Chang et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "The quantitative test is called police lineup. First, randomly pick one of these 200 polysemous words. Second, pick the true senses for the word and then add randomly picked senses from other words so that there are n senses in total, where each sense is represented by 8 related words as mentioned. Finally, the algorithm (or human) is given the polysemous word and a set of n senses, and has to identify the true senses in this set. Table 9 gives an example.",
"cite_spans": [],
"ref_spans": [
{
"start": 435,
"end": 442,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "word senses bat 1 navigate nocturnal mouse wing cave sonic fly dark 2 used hitting ball game match cricket play baseball 3 wink briefly shut eyes wink bate quickly action 4 whereby legal court law lawyer suit bill judge 5 loose ends two loops shoelaces tie rope string 6 horny projecting bird oral nest horn hard food Table 9 : An example of the police lineup test with n = 6. The algorithm (or human subject) is given the polysemous word \"bat\" and n = 6 senses each of which is represented as a list of words, and is asked to identify the true senses belonging to \"bat\" (highlighted in boldface for demonstration).",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
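{
"text": "The construction of a single test instance is easy to sketch in code. The snippet below is a minimal illustration under assumed data structures: a hypothetical dict senses_by_word mapping each of the 200 polysemous words to its list of ground-truth senses, each sense being a list of 8 related words. It is not the exact script used to build the testbed.\n\nimport random\n\ndef make_lineup(word, senses_by_word, n=20, rng=random):\n    # True senses of the target word, each represented by 8 related words.\n    true_senses = list(senses_by_word[word])\n    # Distractors: senses randomly drawn from other words, so that there are n senses in total.\n    others = [s for w, ss in senses_by_word.items() if w != word for s in ss]\n    distractors = rng.sample(others, n - len(true_senses))\n    lineup = true_senses + distractors\n    rng.shuffle(lineup)\n    return lineup, true_senses\n\n# Example usage (senses_by_word would come from WordNet and online dictionaries):\n# lineup, truth = make_lineup(\"bat\", senses_by_word, n=20)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},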
{
"text": "Algorithm 1 Our method for the police lineup test Input: Word w, list S of senses (each has 8 words) Output: t senses out of S 1: Heuristically find inflectional forms of w. 2: Find 5 atoms for w and each inflectional form. Let U denote the union of all these atoms. 3: Initialize the set of candidate senses C w \u2190 \u2205, and the score for each sense L to score(L) \u2190 \u2212\u221e 4: for each atom a \u2208 U do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "Rank senses L \u2208 S by score(a, L) = s(a, L)\u2212s L A + s(w, L) \u2212 s L V 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "Add the two senses L with highest score(a, L) to C w , and update their scores score(L) \u2190 max{score(L), score(a, L)} 7: Return the t senses L \u2208 C s with highest score(L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "Our method (Algorithm 1) uses the similarities between any word (or atom) x and a set of words Y , defined as s(x, Y ) = \u27e8v x , v Y \u27e9 where v Y is the SIF embedding of Y . It also uses the average similarities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "s Y A = \u2211 a\u2208A s(a, Y ) |A| , s Y V = \u2211 w\u2208V s(w, Y ) |V |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
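{
"text": "A compact sketch of this scoring scheme is given below. It is not a reference implementation: it assumes the atom and word vectors are available as NumPy arrays, uses a hypothetical function sif_embed (standing in for the SIF embedding of a sense's 8 words), and omits the inflectional-form heuristic of Algorithm 1.\n\nimport numpy as np\n\ndef police_lineup(w_vec, atoms_of_w, senses, all_atoms, all_words, sif_embed, t=4):\n    # senses: list of (sense_id, eight_words); sif_embed maps a word list to a vector.\n    sense_vecs = {sid: sif_embed(words) for sid, words in senses}\n    # Penalties s_L^A and s_L^V: average similarity of sense L to all atoms / all words.\n    pen_A = {sid: float(np.mean([np.dot(a, v) for a in all_atoms])) for sid, v in sense_vecs.items()}\n    pen_V = {sid: float(np.mean([np.dot(x, v) for x in all_words])) for sid, v in sense_vecs.items()}\n    best = {}\n    for a in atoms_of_w:  # the union U of atoms found for w (and its inflectional forms)\n        scored = []\n        for sid, _ in senses:\n            v = sense_vecs[sid]\n            # score(a, L) = s(a, L) - s_L^A + s(w, L) - s_L^V, with s(x, Y) = <v_x, v_Y>.\n            sc = float(np.dot(a, v)) - pen_A[sid] + float(np.dot(w_vec, v)) - pen_V[sid]\n            scored.append((sc, sid))\n        for sc, sid in sorted(scored, key=lambda p: p[0], reverse=True)[:2]:\n            # Keep the top two senses per atom and remember each sense's best score.\n            best[sid] = max(best.get(sid, float(\"-inf\")), sc)\n    # Return the t candidate senses with the highest scores.\n    return [sid for sid, _ in sorted(best.items(), key=lambda kv: kv[1], reverse=True)[:t]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},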
{
"text": "where A are all the atoms, and V are all the words. We note two important practical details. First, while we have been using atoms of discourse as a proxy for word sense, these are too coarse-grained: the total number of senses (e.g., WordNet synsets) is far greater than 2000. Thus the score(\u2022) function uses both the atom and the word vector. Second, some words are more popular than the others-i.e., have large components along many atoms and wordswhich seems to be an instance of the smoothing Human subjects are told that on average each word has 3.5 senses and were asked to choose the senses they thought were true. The algorithms select t senses for t = 1, 2, . . . , 6. For each t, each algorithm was run 5 times (standard deviations over the runs are too small to plot). (B) The performance of our method for t = 4 and n = 20, 30, . . . , 70.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "phenomenon alluded to in Footnote 4. The penalty terms s L A and s L V lower the scores of senses L containing such words. Finally, our algorithm returns t senses where t can be varied. Results. The precision and recall for different n and t (number of senses the algorithm returns) are presented in Figure 1 . Our algorithm outperforms the two selected competitors. For n = 20 and t = 4, our algorithm succeeds with precision 65% and recall 75%, and performance remains reasonable for n = 50. Giving the same test to humans 5 for n = 20 (see the left figure) suggests that our method performs similarly to non-native speakers.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 308,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "Other word embeddings can also be used in the test and achieved slightly lower performance. For n = 20 and t = 4, the precision/recall are lower by the following amounts: GloVe 2.3%/5.76%, NNSE (matrix factorization on PMI to rank 300 by Murphy et al. (2012) ) 25%/28%.",
"cite_spans": [
{
"start": 238,
"end": 258,
"text": "Murphy et al. (2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Police Lineup",
"sec_num": "6.3"
},
{
"text": "Different senses of polysemous words have been shown to lie in linear superposition inside standard word embeddings like word2vec and GloVe. This has also been shown theoretically building upon 5 Human subjects are graduate students from science or engineering majors at major U.S. universities. Non-native speakers have 7 to 10 years of English language use/learning. previous generative models, and empirical tests of this theory were presented. A priori, one imagines that showing such theoretical results about the inner structure of modern word embeddings would be hopeless since they are solutions to complicated nonconvex optimization.",
"cite_spans": [
{
"start": 194,
"end": 195,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "A new WSI method is also proposed based upon these insights that uses only the word embeddings and sparse coding, and shown to provide very competitive performance on some WSI benchmarks. One novel aspect of our approach is that the word senses are interrelated using one of about 2000 discourse vectors that give a succinct description of which other words appear in the neighborhood with that sense. Our method based on sparse coding can be seen as a linear algebraic analog of the clustering approaches, and also gives fine-grained thematic structure reminiscent of topic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "A novel police lineup test was also proposed for testing such WSI methods, where the algorithm is given a word w and word clusters, some of which belong to senses of w and the others are distractors belonging to senses of other words. The algorithm has to identify the ones belonging to w. We conjecture this police lineup test with distractors will challenge some existing WSI methods, whereas our method was found to achieve performance similar to non-native speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The formal proof of(Arora et al., 2016) still applies in this setting. The simplest way to informally justify this assumption",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We think semantically meaningless atoms -i.e., unexplained inner products-exist because a simple language model such as ours cannot explain all observed co-occurrences due to grammar, stopwords, etc. It ends up needing smoothing terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers and the action editor of TACL for helpful feedback and thank the editors for granting special relaxation of the page limit for our paper. This work was supported in part by NSF grants CCF-1527371, DMS-1317308, Simons Investigator Award, Simons Collaboration Grant, and ONR-N00014-16-1-2329. Tengyu Ma was additionally supported by the Simons Award in Theoretical Computer Science and by the IBM Ph.D. Fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A latent variable model approach to PMI-based word embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": ""
}
],
"year": 2016,
"venue": "Transaction of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "385--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. Trans- action of Association for Computational Linguistics, pages 385-399.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embed- dings. In In Proceedings of International Conference on Learning Representations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The role of named entities in web people search",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Artiles",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Amig\u00f3",
"suffix": ""
},
{
"first": "Julio",
"middle": [],
"last": "Gonzalo",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "534--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javier Artiles, Enrique Amig\u00f3, and Julio Gonzalo. 2009. The role of named entities in web people search. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 534- 542.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Research, pages 1137-1155.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Probabilistic topic models. Communication of the Association for Computing Machinery",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "77--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei. 2012. Probabilistic topic models. Com- munication of the Association for Computing Machin- ery, pages 77-84.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bayesian word sense induction",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 103-111.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reading tea leaves: How humans interpret topic models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [
"L"
],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "288--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-Graber, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems, pages 288-296.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational linguistics",
"volume": "",
"issue": "",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicogra- phy. Computational linguistics, pages 22-29.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SMALLbox -an evaluation framework for sparse representations and dictionary learning algorithms",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Damnjanovic",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Davies",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Plumbley",
"suffix": ""
}
],
"year": 2010,
"venue": "International Conference on Latent Variable Analysis and Signal Separation",
"volume": "",
"issue": "",
"pages": "418--425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Damnjanovic, Matthew Davies, and Mark Plumb- ley. 2010. SMALLbox -an evaluation framework for sparse representations and dictionary learning al- gorithms. In International Conference on Latent Vari- able Analysis and Signal Separation, pages 418-425.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Clustering and diversifying web search results with graphbased word sense induction",
"authors": [
{
"first": "Antonio",
"middle": [
"Di"
],
"last": "",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "709--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Di Marco and Roberto Navigli. 2013. Clus- tering and diversifying web search results with graph- based word sense induction. Computational Linguis- tics, pages 709-754.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sparse overcomplete word vector representations",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1491--1500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcomplete word vector representations. In Proceedings of As- sociation for Computational Linguistics, pages 1491- 1500.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A synopsis of linguistic theory, 1930-1955",
"authors": [
{
"first": "John",
"middle": [],
"last": "Rupert",
"suffix": ""
},
{
"first": "Firth",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1957,
"venue": "Studies in Linguistic Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Rupert Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Skip-gram -Zipf + Uniform = Vector Additivity",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Gittens",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Achlioptas",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"W"
],
"last": "Mahoney",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Gittens, Dimitris Achlioptas, and Michael W Ma- honey. 2017. Skip-gram -Zipf + Uniform = Vector Additivity. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 69-76.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Topics in semantic representation",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological review",
"volume": "",
"issue": "",
"pages": "211--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths, Mark Steyvers, and Joshua B. Tenenbaum. 2007. Topics in semantic representation. Psychological review, pages 211-244.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representa- tions via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 873-882.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SensEmbed: Learning sense embeddings for word and relational similarity",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "95--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning sense embeddings for word and relational similarity. In Pro- ceedings of Association for Computational Linguis- tics, pages 95-105.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Ad- vances in Neural Information Processing Systems, pages 2177-2185.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SemEval 2010: Task 14: Word sense induction & disambiguation",
"authors": [
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ioannis",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Klapaftis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sameer S Pradhan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "63--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suresh Manandhar, Ioannis P Klapaftis, Dmitriy Dligach, and Sameer S Pradhan. 2010. SemEval 2010: Task 14: Word sense induction & disambiguation. In Pro- ceedings of the 5th International Workshop on Seman- tic Evaluation, pages 63-68.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013a. Distributed represen- tations of words and phrases and their composition- ality. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 746-751.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Three new graphical models for statistical language modelling",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine Learning, pages 641-648.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Geometry of polysemy",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. Ge- ometry of polysemy. In Proceedings of International Conference on Learning Representations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning effective and interpretable semantic models using non-negative sparse embedding",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"Pratim"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1933--1950",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Murphy, Partha Pratim Talukdar, and Tom M. Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embed- ding. In Proceedings of the 24th International Confer- ence on Computational Linguistics, pages 1933-1950.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SemEval 2013: Task 11: Word sense induction and disambiguation within an end-user application",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Vannella",
"suffix": ""
}
],
"year": 2013,
"venue": "Second Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "193--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Daniele Vannella. 2013. SemEval 2013: Task 11: Word sense induction and disambigua- tion within an end-user application. In Second Joint Conference on Lexical and Computational Semantics, pages 193-201.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Efficient nonparametric estimation of multiple embeddings per word in vector space",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Jeevan",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "Re",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1059--1069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Re Passos, and Andrew Mccallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vec- tor space. In Proceedings of Conference on Empiri- cal Methods in Natural Language Processing, pages 1059-1069.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Olshausen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Field",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "3311--3325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Olshausen and David Field. 1997. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, pages 3311-3325.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "GloVe: Global Vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Empiricial Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for word rep- resentation. In Proceedings of the Empiricial Methods in Natural Language Processing, pages 1532-1543.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multiprototype vector-space models of word meaning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond Mooney. 2010. Multi- prototype vector-space models of word meaning. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, pages 107-117.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Vmeasure: A conditional entropy-based external cluster evaluation measure",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2007,
"venue": "Conference on Empirical Methods in Natural Language Processing and Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "410--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external clus- ter evaluation measure. In Conference on Empirical Methods in Natural Language Processing and Confer- ence on Computational Natural Language Learning, pages 410-420.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Schutze. 1998. Automatic word sense discrimi- nation. Computational Linguistics, pages 97-123.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The mechanism of additive composition",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2017,
"venue": "Machine Learning",
"volume": "106",
"issue": "7",
"pages": "1083--1130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2017. The mechanism of additive composition. Machine Learn- ing, 106(7):1083-1130.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of seman- tics. Journal of Artificial Intelligence Research, pages 141-188.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "English Wikipedia dump",
"authors": [
{
"first": "",
"middle": [],
"last": "Wikimedia",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikimedia. 2012. English Wikipedia dump. Accessed March 2015.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Proceed- ings of the 33rd Annual Meeting on Association for Computational Linguistics, pages 189-196.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Precision and recall in the police lineup test. (A) For each polysemous word, a set of n = 20 senses containing the ground truth senses of the word are presented.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "Some discourse atoms and their nearest 9 words. By Equation(2), words most likely to appear in a discourse are those nearest to it.",
"content": "<table><tr><td>tie</td></tr></table>",
"num": null
}
}
}
}