|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:58:51.507901Z" |
|
}, |
|
"title": "Evaluating Natural Alpha Embeddings on Intrinsic and Extrinsic Tasks", |
|
"authors": [ |
|
{ |
|
"first": "Riccardo", |
|
"middle": [], |
|
"last": "Volpi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Machine Learning and Optimization", |
|
"institution": "Romanian Institute of Science and Technology (RIST)", |
|
"location": { |
|
"settlement": "Cluj-Napoca", |
|
"country": "Romania" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Malag\u00f2", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Skip-Gram is a simple, but effective, model to learn a word embedding mapping by estimating a conditional probability distribution for each word of the dictionary. In the context of Information Geometry, these distributions form a Riemannian statistical manifold, where word embeddings are interpreted as vectors in the tangent bundle of the manifold. In this paper we show how the choice of the geometry on the manifold allows impacts on the performances both on intrinsic and extrinsic tasks, in function of a deformation parameter alpha.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Skip-Gram is a simple, but effective, model to learn a word embedding mapping by estimating a conditional probability distribution for each word of the dictionary. In the context of Information Geometry, these distributions form a Riemannian statistical manifold, where word embeddings are interpreted as vectors in the tangent bundle of the manifold. In this paper we show how the choice of the geometry on the manifold allows impacts on the performances both on intrinsic and extrinsic tasks, in function of a deformation parameter alpha.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word embeddings are compact representations for the words of a dictionary. Rumelhart et al. (1986) first introduced the idea of using the internal representation of a neural network to construct a word embedding. Bengio et al. (2003) employ a neural network to predict the probability of the next word given the previous ones. Mikolov et al. (2010) proposed the use of a recurrency language model based on RNN, to learn the vector representations. More recently, this approach has been exploited further, with great success by means of bidirectional LSTM (Peters et al., 2018) and transformers (Radford et al., 2018; Devlin et al., 2018; Yang et al., 2019) . In this paper we focus on Skip-Gram (SG), a well-known model for the conditional probability of the context of a given central word, which it has been shown to work well at efficiently capturing syntactic and semantic information. SG is at the basis of many popular word embeddings algorithms, such as Word2Vec (Mikolov et al., 2013a,b) , the contpdfinfoinuous bag of words (Mikolov et al., 2013a,b) , and models based on weighted matrix factorization of the global cooccurrences as GloVe (Pennington et al., 2014) , cf. Levy and Goldberg (2014) . These methods are deeply related, Levy and Goldberg showed how Word2Vec SG with negative sampling is effectively performing a matrix factorization of the Shifted Positive PMI (Levy and Goldberg, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 98, |
|
"text": "Rumelhart et al. (1986)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 233, |
|
"text": "Bengio et al. (2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 348, |
|
"text": "Mikolov et al. (2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 576, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 616, |
|
"text": "(Radford et al., 2018;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 637, |
|
"text": "Devlin et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 656, |
|
"text": "Yang et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 970, |
|
"end": 995, |
|
"text": "(Mikolov et al., 2013a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1033, |
|
"end": 1058, |
|
"text": "(Mikolov et al., 2013a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1148, |
|
"end": 1173, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1180, |
|
"end": 1204, |
|
"text": "Levy and Goldberg (2014)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1382, |
|
"end": 1407, |
|
"text": "(Levy and Goldberg, 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It has been noted (Mikolov et al., 2013c) how, once the embedding space has been learned, syntactic and semantic analogies between words translate in linear relations between the respective word vectors. There have been numerous works investigating the reason of the correspondence between linear properties and word relations. Pennington et al. gave a very intuitive explanation in their paper on GloVe (Pennington et al., 2014) . More recently Arora et al. (Arora et al., 2016) tried to study this property by introducing a hidden Markov model, under some regularity assumptions on the distribution of the word embedding vectors, cf. (Mu et al., 2017) . Word embeddings are also often used as input for another computational model, to solve more complex inference tasks. The evaluation of the quality of a word embedding, which ideally should encode syntactic and semantic information, is not easy to be determined and different approaches have been proposed in the literature. This evaluation can be in terms of performance on intrinsic tasks like word similarity (Bullinaria and Levy, 2007 Levy, , 2012 Pennington et al., 2014; Levy et al., 2015) , or by solving word analogies (Mikolov et al., 2013c,a) , however several authors (Tsvetkov et al., 2015; Schnabel et al., 2015) has showed a low degree of correlation between the quality of an embedding for word similarities and analogies on one side, and on downstream (extrinsic) tasks, for instance on classification or prediction, to which the embedding is given in input.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 41, |
|
"text": "(Mikolov et al., 2013c)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 429, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 479, |
|
"text": "Arora et al. (Arora et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 653, |
|
"text": "(Mu et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1083, |
|
"end": 1093, |
|
"text": "Levy, 2007", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1094, |
|
"end": 1106, |
|
"text": "Levy, , 2012", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1107, |
|
"end": 1131, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1132, |
|
"end": 1150, |
|
"text": "Levy et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1182, |
|
"end": 1207, |
|
"text": "(Mikolov et al., 2013c,a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1234, |
|
"end": 1257, |
|
"text": "(Tsvetkov et al., 2015;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1258, |
|
"end": 1280, |
|
"text": "Schnabel et al., 2015)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several works have highlighted the effectiveness of post-processing techniques (Bullinaria and Levy, 2007 Levy, , 2012 , such as PCA (Raunak, 2017; Mu et al., 2017) , focusing on the fact that certain dominant components are not carriers of semantic nor syn-tactic information and thus act like noise for determinate tasks of interest. A different approach which still acts on the learned vectors after training has been recently proposed by Volpi and Malag\u00f2 (2019) . The authors present a geometrical framework in which word embeddings are represented as vectors in the tangent space of a probability simplex. A family of word embeddings called natural alpha embeddings is introduced, where \u03b1 is a deformation parameter for the geometry of the probability simplex, known in Information Geometry in the context of \u03b1-connections (Amari and Nagaoka, 2000; Amari, 2016) . Noticeably, alpha word embeddings include the classical word embeddings as a special case. In this paper we provide an experimental evaluation of natural alpha embeddings over different tasks, both intrinsic and extrinsic, including word similarities and analogies, as well as downstream tasks, such as document classification and sentiment analysis, in order to study the impact of the geometry on performances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 105, |
|
"text": "Levy, 2007", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 118, |
|
"text": "Levy, , 2012", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 147, |
|
"text": "(Raunak, 2017;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 164, |
|
"text": "Mu et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 465, |
|
"text": "Volpi and Malag\u00f2 (2019)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 853, |
|
"text": "(Amari and Nagaoka, 2000;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 854, |
|
"end": 866, |
|
"text": "Amari, 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Skip-Gram conditional model (Mikolov et al., 2013b; Pennington et al., 2014) allows the unsupervised training of a set of word-embeddings, by predicting the conditional probability of any word \u03c7 to be in the context of a central word w", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 55, |
|
"text": "(Mikolov et al., 2013b;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 56, |
|
"end": 80, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Models and the Embeddings Structure", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "p(\u03c7|w) = p w (\u03c7) = exp(u T w v \u03c7 ) Z w (1) with Z w = \u03c7 \u2208D exp(u T w v \u03c7 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Models and the Embeddings Structure", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "partition function. The conditional model represents an exponential family in the simplex, parameterized by two matrices U and V of size n\u00d7d, where n is the cardinality of the dictionary D, and d is the size of the embeddings. We will refer to the rows of a matrix V as v \u03c7 or V \u03c7 , and to its columns as V k . It is common practice in the literature of word embedding to consider u w or alternatively u w + v w as embedding vectors for w (Bullinaria and Levy, 2012; Mikolov et al., 2013a,b; Pennington et al., 2014; Raunak, 2017) . In the remaining part of this section we briefly review the natural alpha embeddings and limit embeddings, based on Information Geometry framework. We refer the reader to Volpi and Malag\u00f2 (2019) for more details and mathematical derivations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 455, |
|
"end": 466, |
|
"text": "Levy, 2012;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 491, |
|
"text": "Mikolov et al., 2013a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 516, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 530, |
|
"text": "Raunak, 2017)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 704, |
|
"end": 727, |
|
"text": "Volpi and Malag\u00f2 (2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Models and the Embeddings Structure", |
|
"sec_num": "2" |
|
}, |
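As a concrete illustration of Eq. (1), the following minimal numpy sketch computes the Skip-Gram conditional distribution from the two learned matrices; the function name and shapes are ours, for illustration only, and do not refer to the authors' released code.

```python
import numpy as np

def conditional_model(U, V, w):
    """Skip-Gram conditional distribution p(.|w) of Eq. (1).

    U, V: (n, d) arrays of input/output vectors, n = dictionary size.
    w: index of the central word.
    Returns a length-n probability vector over context words chi.
    """
    logits = V @ U[w]          # u_w^T v_chi for every chi in D
    logits -= logits.max()     # numerical stabilization
    p = np.exp(logits)
    return p / p.sum()         # divide by the partition function Z_w
```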
|
{ |
|
"text": "After training, the matrices U and V are fixed. For each w, the conditional model p w (\u03c7) is an exponential family E in the n \u2212 1 dimensional simplex, where n is the size of the dictionary. This models the probability of a word \u03c7 in the context, when w is the central word. The sufficient statistics of this model are determined by the columns of V , while each row u w of U can be seen as an assignment for the natural parameters, i.e., each row identifies a probability distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "According to the language of Information Geometry, a statistical model can be modelled as a Riemannian manifold endowed with the Fisher information matrix and with a family of \u03b1connections (Amari, 1985; Shun-Ichi and Hiroshi, 2000; Amari, 2016) . The alpha embeddings are defined up to the choice of a reference distribution p 0 . The natural alpha embedding of a given word w is defined as the projection of the logarithmic map Log \u03b1 p 0 w onto the tangent space of the submodel T p 0 E. The main intuition is that a word embedding for w corresponds to the vector in the tangent space which allows to reach the distribution of the context of w from p 0 . Deforming the simplex continuously with a family of isometries depending from a parameter alpha, and by considering a family of \u03b1-logarithmic maps, depending on the choice of the \u03b1-connection, a family of natural alpha embeddings W \u03b1 p 0 (w) can be defined as a function of the deformation parameter \u03b1", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 202, |
|
"text": "(Amari, 1985;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 231, |
|
"text": "Shun-Ichi and Hiroshi, 2000;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 244, |
|
"text": "Amari, 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W \u03b1 p 0 (w) = \u03a0 \u03b1 0 Log \u03b1 p 0 p w = I(p 0 ) \u22121 \u03c7 l \u03b1 p 0 w (\u03c7) \u2206V (p 0 ) \u03c7 (2) where \u2206V (p 0 ) = V \u2212 E p 0 [V ] is the matrix of centered sufficient statistics in p 0 and l \u03b1 p 0 w (\u03c7) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 p0(\u03c7)(ln pw(\u03c7) \u2212 ln p0(\u03c7)) \u03b1 = 1 p0(\u03c7) 2 1\u2212\u03b1 pw (\u03c7) p 0 (\u03c7) 1\u2212\u03b1 2 \u2212 1 \u03b1 = 1 .", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
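A small sketch of the α-deformed logarithmic term of Eq. (3), as reconstructed above; the vectorized form over all χ and the function name are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def alpha_log_term(p_w, p_0, alpha):
    """l^alpha_{p_0,w}(chi) of Eq. (3), computed for all chi at once.

    p_w: conditional distribution of Eq. (1) for the central word w.
    p_0: reference distribution; both are length-n probability vectors.
    """
    ratio = p_w / p_0
    if np.isclose(alpha, 1.0):
        return p_0 * np.log(ratio)                       # alpha = 1 branch
    exponent = (1.0 - alpha) / 2.0
    return p_0 * (2.0 / (1.0 - alpha)) * (ratio ** exponent - 1.0)
```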
|
{ |
|
"text": "The Fisher metric is simply computed as the metric for an exponential family (Amari and Nagaoka, 2000)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I(p 0 ) = E p 0 \u2206V (p 0 ) T \u2206V (p 0 ) ,", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "and it does not depend on alpha since the family of alpha divergences induces the same Fisher information metric for any value of alpha.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
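Putting Eqs. (2), (3) and (4) together, a minimal sketch of the natural alpha embedding of a word w could look as follows; it reuses alpha_log_term from the previous sketch and takes the conditional distribution p_w of Eq. (1) as input. The names and the dense linear solve are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def natural_alpha_embedding(V, p_w, p_0, alpha):
    """W^alpha_{p_0}(w) of Eq. (2) with the Fisher metric of Eq. (4).

    V: (n, d) matrix of sufficient statistics; p_w: conditional
    distribution of Eq. (1) for the central word w; p_0: reference.
    """
    dV = V - p_0 @ V                          # Delta V(p_0) = V - E_{p_0}[V]
    fisher = dV.T @ (p_0[:, None] * dV)       # Eq. (4): E_{p_0}[dV^T dV], shape (d, d)
    l = alpha_log_term(p_w, p_0, alpha)       # Eq. (3), defined in the previous sketch
    return np.linalg.solve(fisher, dV.T @ l)  # I(p_0)^{-1} sum_chi l(chi) dV_chi
```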
|
{ |
|
"text": "The notion of alpha embeddings can be used both for downstream tasks and also to evaluate similarities and analogies in the tangent space of the manifold (Volpi and Malag\u00f2, 2019) . Given two words a and b, a measure of similarity is defined by", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 178, |
|
"text": "(Volpi and Malag\u00f2, 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "sim \u03b1 p 0 (a, b) = W \u03b1 p 0 (a), W \u03b1 p 0 (b) I(p 0 ) ||W \u03b1 p 0 (a)|| I(p 0 ) ||W \u03b1 p 0 (b)|| I(p 0 ) ,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
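A sketch of the similarity of Eq. (5): a cosine computed under the metric I(p_0); passing the identity matrix recovers the standard cosine similarity, as noted later in this section. Names are illustrative.

```python
import numpy as np

def alpha_similarity(wa, wb, fisher):
    """sim^alpha_{p_0}(a, b) of Eq. (5): cosine under the metric I(p_0).

    wa, wb: alpha embeddings of the two words (length-d vectors).
    fisher: the (d, d) matrix I(p_0), or np.eye(d) for the standard cosine.
    """
    inner = wa @ fisher @ wb
    norm_a = np.sqrt(wa @ fisher @ wa)
    norm_b = np.sqrt(wb @ fisher @ wb)
    return inner / (norm_a * norm_b)
```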
|
{ |
|
"text": "while analogies of the form a : b = c : d can be solved by minimizing an analogy measure \u03ba", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(\u03b1) p 0 (p a , p b , p c , p d ) defined as W \u03b1 p 0 (b) \u2212 W \u03b1 p 0 (a) \u2212 W \u03b1 p 0 (d) + W \u03b1 p 0 (c) I(p 0 ) .", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
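A sketch of how an analogy a : b = c : d could be solved with Eq. (6): pick the candidate d minimizing the I(p_0)-norm of the residual. Excluding the query words from the candidates is a common convention we assume here; names are illustrative.

```python
import numpy as np

def solve_analogy(emb, a, b, c, fisher, exclude=()):
    """Solve a : b = c : d by minimizing the analogy measure of Eq. (6) over d.

    emb: (n, d) matrix whose rows are alpha embeddings W^alpha_{p_0}(w).
    fisher: the (d, d) metric I(p_0), or the identity for the standard setup.
    """
    target = emb[b] - emb[a] + emb[c]                       # ideal location of d
    diff = emb - target                                     # residual for every candidate d
    dist2 = np.einsum('ij,jk,ik->i', diff, fisher, diff)    # squared I(p_0)-norm per candidate
    dist2[list(exclude) + [a, b, c]] = np.inf               # skip the query words
    return int(np.argmin(dist2))
```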
|
{ |
|
"text": "It is possible to show that for \u03b1 = 1 and choosing p 0 equal to the uniform distribution, the embeddings of Eq. (2) reduce to the standard vectors u w . Furthermore, by substituting the Fisher Information matrix I(p 0 ) with the identity 1 , Eqs. 5and (6) reduce to the standard formulas used in the literature for similarities and analogies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The embedding vectors u + v have been shown to provide better results (Pennington et al., 2014) than simply u. In the context of natural alpha embeddings, the vectors u + v can be interpreted as a recentering of the natural parameters u of the exponential family. This corresponds to a reweighting of the probabilities in Eq. (1)", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 95, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p (+) (\u03c7|w) = N w exp(v w v \u03c7 )p(\u03c7|w)", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "based on a change of reference measure proportional to exp(v w v \u03c7 ), i.e., by weighting more those words \u03c7 in the context whose outer vectors are aligned to the outer vector of the central word w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alpha Embeddings", |
|
"sec_num": "2.1" |
|
}, |
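Since exp(v_w^T v_χ) p(χ|w) is proportional to exp((u_w + v_w)^T v_χ), the re-weighted model of Eq. (7) can be sketched directly as a softmax over u_w + v_w, which is why this choice corresponds to the u+v embeddings; the code below is a minimal illustration with our own naming.

```python
import numpy as np

def reweighted_model(U, V, w):
    """p^(+)(.|w) of Eq. (7): Eq. (1) re-weighted by exp(v_w^T v_chi)."""
    logits = V @ (U[w] + V[w])   # (u_w + v_w)^T v_chi for every chi
    logits -= logits.max()       # numerical stabilization
    q = np.exp(logits)
    return q / q.sum()           # the constant N_w is fixed by normalization
```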
|
{ |
|
"text": "The behavior of the alpha embeddings for \u03b1 progressively approaching minus infinity turns out to be particularly interesting. In this case, l \u03b1 p 0 w (\u03c7) is progressively more and more peaked on", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limit Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c7 * w = arg max \u03c7 p w (\u03c7) p 0 (\u03c7) ,", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Limit Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "and presents a growing norm, see Eq.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limit Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(3). By normalizing these alpha embeddings to preserve the direction of the tangent vector, a simple formula can be obtained depending only on the \u03c7 * w row of the matrix of sufficient statistics \u2206V (p 0 ). The normalized limit embeddings then simplify to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limit Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "LW \u03b1 p 0 (w) = lim \u03b1\u2192\u2212\u221e W \u03b1 0 (w) = I(p 0 ) \u22121 \u2206V (p 0 ) \u03c7 * w ,", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Limit Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "leading to simple geometrical methods in the limit. Let us notice that the same row \u2206V a can be associated to multiple words, thus limit embeddings are also naturally inducing a clustering in the embedding space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limit Embeddings", |
|
"sec_num": "2.2" |
|
}, |
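A minimal sketch of Eqs. (8) and (9): the limit embedding of w is the metric-corrected row of ΔV(p_0) selected by χ*_w, and the returned index χ*_w is also the cluster label mentioned above. Function names and the dense solve are illustrative assumptions.

```python
import numpy as np

def limit_embedding(V, p_w, p_0):
    """Normalized limit embedding of Eq. (9), via chi*_w of Eq. (8).

    V: (n, d) sufficient statistics; p_w: conditional distribution of
    Eq. (1) for the word w; p_0: reference distribution.
    Returns the embedding and chi*_w, which also labels the induced cluster.
    """
    chi_star = int(np.argmax(p_w / p_0))       # Eq. (8): word maximizing p_w / p_0
    dV = V - p_0 @ V                           # Delta V(p_0)
    fisher = dV.T @ (p_0[:, None] * dV)        # Eq. (4)
    return np.linalg.solve(fisher, dV[chi_star]), chi_star
```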
|
{ |
|
"text": "We considered two corpora: English Wikipedia dump October 2017 (enwiki), with 1.5B words, and its augmented version composed by Gutenberg (Gutenberg), English Wikipedia and Book-Corpus (Zhu et al., 2015 ; BookCorpus; Kobayashi) (geb), with 1.8B words. For each corpus we trained a set of GloVe word embeddings (Pennington et al., 2014) with vector sizes of 300 and 50, window size of 10, until convergence for a maximum of 1,000 epochs (more details in Appendix A). The embeddings in Eq.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 202, |
|
"text": "(Zhu et al., 2015", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 335, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(2) will be denoted with 'E' in figures and tables, while the limit embeddings in Eq. (9) will be denoted with 'LE'. Embeddings have been normalized either with the Fisher Information matrix (F) or with the Identity (I). Similarly after normalization, the scalar products can be computed with the respective metric (on the tasks that requires scalar product calculation). In this study, normalization and scalar product are always using the same metric. For the reference distribution needed for the computation of the alpha embeddings we have chosen the uniform distribution (0), the unigram distribution of the model (u) -obtained by marginalization of the joint distribution learned by the model, or the unigram distribution estimated from the corpus data (ud). Embeddings are denoted by 'U', if in the computation of Eqs. (2) and (9), the formula used for p w is Eq. (1), while they will be denoted by 'U+V' if Eq. (7) is used instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We evaluated the alpha embeddings on intrinsic (similarities, analogies, concept categorization) and extrinsic (document classification, sentiment analysis) tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In Fig. 1 we report results for similarities and analogies with embedding size 300. For similarities we use: ws353 (Finkelstein et al., 2001 ), Table 1 : Spearman correlations for similarities tasks. WG5 inside the enwiki and geb section are the wikigiga5 pretrained vectors on 6B words (Pennington et al., 2014) tested for comparison on the dictionary of the smaller corpora enwiki and geb. Lastly, U and U+V are the standard methods with the word embeddings vectors. PM are the accuracies reported by Pennington et al. (2014) on enwiki, BDK is the best setup across tasks (varying hyperparameters) reported by Baroni et al. (2014) and LGD are the best methods in cross-validation with fixed window size of 10 and 5 (for varying hyperparameters) reported by Levy et al. (2015) mc (Miller and Charles, 1991) , rg (Rubenstein and Goodenough, 1965) , scws (Huang et al., 2012), men (Bruni et al., 2014), mturk287 (Radinsky et al., 2011) , rw (Luong et al., 2013 ) and simlex999 (Hill et al., 2015) . For analogies we use the Google analogy dataset (Mikolov et al., 2013a) . The limit embeddings (colored dotted lines) achieve good performances on both tasks, above the competitor methods from the literature U and U+V centered and normalized by column, as described in Pennington et al. (2014) . Comparison with baseline methods from literature on word similarity is presented in Tables 1, we compare with the limit embeddings since they usually seem to be the best performing on the similarity task, see Fig. 1 and other comparable baselines from the literature with similar window size. In Table 2 we report best performances on analogy task on alpha embeddings, where alpha is selected with cross-validation (Table 3) . For enwiki syn, the limit embedding has been found to work better instead. The errors reported are obtained averaging the performances on test of the top three alpha selected based on best performances on validation. The errors obtained are relatively small which indicates that tuning alpha is easy also on tasks with small amount of data in cross-validation. The best tuned alpha on the geb dataset completely outperform the baselines. The last intrinsic tasks considered are cluster purity for concept categorization datasets AP (Al-muhareb, 2006) and BLESS (Baroni and Lenci, 2011) . The purity curves (Fig. 2) are more noisy, this is because the datasets available for this task are quite limited in size. Almost all the curves exhibit a peak which is relatively more pronounced for smaller embedding sizes, while the limit behaviour for very negative alphas is better performing for larger embedding size. This points to the fact that the natural clustering performed by the limit embeddings of Eq. 9 is better behaved when the dimension of the embedding grows. Increasing the embedding size, increases the number of sufficient statistics, thus allowing more flexibility for the limit clustering during training. Figure 2 : Cluster purity on concept categorization task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 140, |
|
"text": "(Finkelstein et al., 2001", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 312, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 527, |
|
"text": "Pennington et al. (2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 632, |
|
"text": "Baroni et al. (2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 777, |
|
"text": "Levy et al. (2015)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 807, |
|
"text": "(Miller and Charles, 1991)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 846, |
|
"text": "(Rubenstein and Goodenough, 1965)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 911, |
|
"end": 934, |
|
"text": "(Radinsky et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 940, |
|
"end": 959, |
|
"text": "(Luong et al., 2013", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 976, |
|
"end": 995, |
|
"text": "(Hill et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1046, |
|
"end": 1069, |
|
"text": "(Mikolov et al., 2013a)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1267, |
|
"end": 1291, |
|
"text": "Pennington et al. (2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 2253, |
|
"end": 2271, |
|
"text": "(Al-muhareb, 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2276, |
|
"end": 2306, |
|
"text": "BLESS (Baroni and Lenci, 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 9, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 151, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1503, |
|
"end": 1509, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1590, |
|
"end": 1597, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1709, |
|
"end": 1718, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 2327, |
|
"end": 2335, |
|
"text": "(Fig. 2)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2940, |
|
"end": 2948, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intrinsic Tasks", |
|
"sec_num": "3.1" |
|
}, |
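The purity score plotted in Fig. 2 is, we assume, the standard clustering purity (each cluster votes for its majority gold class); a minimal sketch under that assumption, with our own naming:

```python
from collections import Counter

def cluster_purity(cluster_ids, gold_labels):
    """Standard cluster purity: fraction of items matching their cluster's majority class."""
    correct = 0
    for c in set(cluster_ids):
        members = [g for k, g in zip(cluster_ids, gold_labels) if k == c]
        correct += Counter(members).most_common(1)[0][1]   # size of the majority class
    return correct / len(gold_labels)
```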
|
{ |
|
"text": "As extrinsic tasks we choose 20 Newsgroup multi classification (Lang, 1995) and IMDBReviews sentiment analysis (Maas et al., 2011) . Embeddings are normalized before training either with I or F. We use a linear architecture (BatchNorm+Dense) for both tasks, while for sentiment analysis we also use a recurrent architecture (Bidirectional LSTM 32 channels, GlobalMaxPool1D, Dense 20 + Dropout 0.05, Dense). In Tables 4 and 5 we report the best methods chosen with respect to the validation set and the best limit embedding performances for embedding size 300. A more complete set of experiments can be found in Appendix. Limit Embeddings have been generalized, instead of considering only the max row \u03c7 * (see Sec. 2.2), by considered the top k rows from \u2206V . Limit embeddings are evaluated with respect to top 1, 3, and 5, denoted -t1/3/5. Furthermore we denote by -w if a weighted average (with weights p w (\u03c7)/p 0 (\u03c7)) is performed for the top rows of \u2206V . The improvements reported in the Tables are small but consistent, of above 0.5% accuracy on both Newsgroups and IMDBReviews, furthermore the improvement persist also with increased complexity of the network architecture (bidirectional LSTM). -u-F 96.65 (t3-w) 64.54 (t1) LE-U+V-ud-F 96.38 (t5-w) 64.76 (t3-w) reports curves for the values on test with early stopping based on validation for embedding sizes of 50 and 300. The improvements for tuning alpha are higher on size 50 exhibiting a more evident peak. For size 300 improvements are smaller but consistent. In particular a peak performance for alpha can be always easily identified for a chosen reference distribution and a chosen normalization. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 75, |
|
"text": "(Lang, 1995)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 130, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1202, |
|
"end": 1219, |
|
"text": "-u-F 96.65 (t3-w)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 410, |
|
"end": 424, |
|
"text": "Tables 4 and 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extrinsic Tasks", |
|
"sec_num": "3.2" |
|
}, |
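A sketch of the generalized limit embeddings (-t1/-t3/-t5, with the -w weighting described above); the exact aggregation used here, a normalized weighted average of the selected rows without the metric correction, is our assumption for illustration, as is the naming.

```python
import numpy as np

def topk_limit_embedding(V, p_w, p_0, k=3, weighted=True):
    """Generalized limit embedding: aggregate the top-k rows of Delta V(p_0).

    weighted=True reproduces the -w variant, with weights p_w(chi)/p_0(chi);
    otherwise the selected rows are averaged uniformly.
    """
    ratio = p_w / p_0
    top = np.argsort(ratio)[-k:]                 # indices of the top-k context words
    dV = V - p_0 @ V                             # Delta V(p_0)
    weights = ratio[top] if weighted else np.ones(k)
    return (weights[:, None] * dV[top]).sum(axis=0) / weights.sum()
```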
|
{ |
|
"text": "For word similarities and analogies alpha embeddings provide significant improvements over baseline methods (corresponding to \u03b1 = 1). For the other tasks the improvements are smaller but consistent, depending on the value of \u03b1, the chosen reference distribution (0, u, ud) and the chosen normalization method (I, F). The improvements persist also when increasing the complexity of the networks used (linear vs BiLSTM). This motivates further studies on more complex architectures, for example on models employing transformers with the aim to close the experimental gap with the state of the art.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 272, |
|
"text": "(0, u, ud)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The best value of alpha depends both on the task and on the dataset. Alpha embeddings thus provide an extra handle on the optimization problem, allowing to choose the deformation parameter based on data. Alpha values lower than 1 and negative seems to be preferred across most tasks. Limit embeddings provide a simple method which does not require validation over alpha, but can still offer an improvement on several tasks of interest. Furthermore limit embeddings can be interpreted as a natural clustering in space learned by the SG model itself during training. Performances of the limit embeddings grow with increasing dimension, pointing to the possibility to have a consistent improvement in higher embedding dimensions without tuning alpha.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We have performed experiments using two corpora: english Wikipedia dump October 2017 (enwiki) and also we augmented this last one with Guthenberg(Gutenberg) and BookCorpus(BookCorpus; Kobayashi) calling this geb (guthenberg, enwiki, bookcorpus) . We used the wikiextractor python script(Attardi) to parse the Wikipedia dump xml file. A minimal preprocessing have been used: lower case all the letters, remove stop-words and remove punctuation. We use a cut-off minimum frequency (m0) of 1000 during GloVe training (Pennington et al., 2014) . We obtained a dictionary of about 67k words for both enwiki and geb. The window size was set to be 10 as in (Pennington et al., 2014) , with decaying weighting rate from the center of 1/d for the calculation of cooccurrences. We trained the models for a maximum of 1000 epochs. Embedding sizes used are 50 and 300. Table 6 : AUC on Newsgroups with linear architecture (BatchNorm + Dense). We use geb embeddings, fixed during the classifiers training. The alpha for which to report performances on test is chosen based on the best measure on the validation set and we report both performances on validation and on test (\u03b1 between -4 and 4 with adaptive step: 0.2 between [-1, 1] and 0.4 in between [-3, 3] and 1 between [-4, 4] ). We also report limit embedding performances. Table 10 : Spearman correlations for similarities tasks for the different methods on enwiki and geb. LE represents the cos product between limit embeddings on the exponential family model. WG5 inside the enwiki and geb section are the wikigiga5 pretrained vectors on 6B words (Pennington et al., 2014) tested for comparison on the dictionary of the smaller corpora enwiki and geb. Lastly, U and U+V are the non-geometric methods with the word embeddings vectors. Table 11 : Analogy tasks for the different methods on enwiki and geb. The best alpha is selected with a 3-fold cross validation (\u03b1 between -10 and 10). The methods reported are implementing either euclidean normalization (I) or normalization with the Fisher (F) in different points on the manifold (0, u). Scalar products (-p) are always calculated with respect to the Identity in this ", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 244, |
|
"text": "(guthenberg, enwiki, bookcorpus)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 539, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 675, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1239, |
|
"end": 1246, |
|
"text": "[-3, 3]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1261, |
|
"end": 1268, |
|
"text": "[-4, 4]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1593, |
|
"end": 1618, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 857, |
|
"end": 864, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1317, |
|
"end": 1325, |
|
"text": "Table 10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1780, |
|
"end": 1788, |
|
"text": "Table 11", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Additional Details", |
|
"sec_num": null |
|
}, |
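For reference, the training setup stated in this appendix can be summarized in a small configuration sketch; the dictionary keys are ours, not from the authors' code, and only collect the values quoted above.

```python
# GloVe training hyperparameters as stated in Appendix A (illustrative summary).
glove_config = {
    "corpora": ["enwiki", "geb"],    # Wikipedia 2017; Gutenberg + enwiki + BookCorpus
    "min_word_count": 1000,          # cut-off minimum frequency m0
    "vocab_size_approx": 67_000,     # resulting dictionary size for both corpora
    "window_size": 10,               # with 1/d decaying co-occurrence weights
    "embedding_sizes": [50, 300],
    "max_epochs": 1000,
}
```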
|
{ |
|
"text": "Proposition 3 inVolpi and Malag\u00f2 (2019) provides conditions under which Fisher Information matrix is isotropic, i.e., proportional to the identity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "R. Volpi and L. Malag\u00f2 are supported by the Deep-Riemann project, co-funded by the European Regional Development Fund and the Romanian Government through the Competitiveness Operational Programme 2014-2020, Action 1.1.4, project ID P 37 714, contract no. 136/27.09.2016.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " Shun-ichi Amari and Hiroshi Nagaoka. 2000 Gutenberg. Free ebooks -project gutenberg. https://www.gutenberg.org. Accessed: 2019-09.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 42, |
|
"text": "Shun-ichi Amari and Hiroshi Nagaoka. 2000", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Attributes in lexical acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Abdulrahman", |
|
"middle": [], |
|
"last": "Almuhareb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdulrahman Almuhareb. 2006. Attributes in lexical acquisition. Ph.D. thesis, University of Essex.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Differential-geometrical methods in statistics", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shun-Ichi Amari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Lecture Notes in Statistics", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shun-ichi Amari. 1985. Differential-geometrical meth- ods in statistics, volume 28 of Lecture Notes in Statistics. Springer-Verlag, New York.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Information Geometry and Its Applications", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shun-Ichi Amari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Applied Mathematical Sciences", |
|
"volume": "194", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-4-431-55978-8" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shun-ichi Amari. 2016. Information Geometry and Its Applications, volume 194 of Applied Mathematical Sciences. Springer Japan, Tokyo.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics", |
|
"volume": "41", |
|
"issue": "4", |
|
"pages": "665--695", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Improving word representations via global context and multiple word prototypes", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Eric", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew Y", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "873--882", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric H Huang, Richard Socher, Christopher D Man- ning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics: Long Papers-Volume 1, pages 873-882. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Homemade bookcorpus", |
|
"authors": [ |
|
{ |
|
"first": "Sosuke", |
|
"middle": [], |
|
"last": "Kobayashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sosuke Kobayashi. Homemade bookcorpus.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Newsweeder: Learning to filter netnews", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Machine Learning Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "331--339", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Machine Learning Proceedings 1995, pages 331-339. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Neural word embedding as implicit matrix factorization", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2177--2185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Ad- vances in neural information processing systems, pages 2177-2185.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Improving distributional similarity with lessons learned from word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "211--225", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00134" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Better word representations with recursive neural networks for morphology", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Com- putational Natural Language Learning, pages 104- 113.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the as- sociation for computational linguistics: Human lan- guage technologies-volume 1, pages 142-150. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Recurrent neural network based language model", |
|
"authors": [ |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Karafi\u00e1t", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luk\u00e1\u0161", |
|
"middle": [], |
|
"last": "Burget", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ja\u0148", |
|
"middle": [], |
|
"last": "Cernock\u1ef3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Eleventh annual conference of the international speech communication association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech com- munication association.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Linguistic regularities in continuous space word representations", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yih", |
|
"middle": [], |
|
"last": "Wen-Tau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "746--751", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In NAACL-HLT, pages 746- 751.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Contextual correlates of semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Walter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Charles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Language and cognitive processes", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "1--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language and cognitive processes, 6(1):1-28.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Simple and Effective Postprocessing for Word Representations", |
|
"authors": [ |
|
{ |
|
"first": "Jiaqi", |
|
"middle": [], |
|
"last": "Mu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suma", |
|
"middle": [], |
|
"last": "Bhat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pramod", |
|
"middle": [], |
|
"last": "Viswanath", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1702.01417" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. All-but-the-Top: Simple and Effective Postprocess- ing for Word Representations. arXiv:1702.01417 [cs, stat]. ArXiv: 1702.01417.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.05365[cs].ArXiv:1802.05365" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv:1802.05365 [cs]. ArXiv: 1802.05365.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Improving language understanding by generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A word at a time: computing word relatedness using temporal semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Radinsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Agichtein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaul", |
|
"middle": [], |
|
"last": "Markovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th international conference on World wide web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "337--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337-346.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Simple and Effective Dimensionality Reduction for Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vikas Raunak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1708.03629[cs].ArXiv:1708.03629" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vikas Raunak. 2017. Simple and Effective Di- mensionality Reduction for Word Embeddings. arXiv:1708.03629 [cs]. ArXiv: 1708.03629.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Contextual correlates of synonymy", |
|
"authors": [ |
|
{ |
|
"first": "Herbert", |
|
"middle": [], |
|
"last": "Rubenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Goodenough", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "Communications of the ACM", |
|
"volume": "8", |
|
"issue": "10", |
|
"pages": "627--633", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning representations by backpropagating errors", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronald", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Nature", |
|
"volume": "323", |
|
"issue": "6088", |
|
"pages": "533--536", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1038/323533a0" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back- propagating errors. Nature, 323(6088):533-536.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Evaluation methods for unsupervised word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Schnabel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Igor", |
|
"middle": [], |
|
"last": "Labutov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "298--307", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1036" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lis- bon, Portugal. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Translations of mathematical monographs 191) Methods of information geometry", |
|
"authors": [ |
|
{ |
|
"first": "Shun-ichi", |
|
"middle": [], |
|
"last": "Amari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Nagaoka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amari Shun-Ichi and Nagaoka Hiroshi. 2000. (Trans- lations of mathematical monographs 191) Methods of information geometry. American Mathematical Society.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Evaluation of word vector representations by subspace alignment", |
|
"authors": [ |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2049--2054", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1243" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guil- laume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 2049-2054, Lisbon, Portugal. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Natural alpha embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Riccardo", |
|
"middle": [], |
|
"last": "Volpi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Malag\u00f2", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Riccardo Volpi and Luigi Malag\u00f2. 2019. Natural alpha embeddings. ArXiv:1912.02280.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. ArXiv, abs/1906.08237.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", |
|
"authors": [ |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "Fidler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE inter- national conference on computer vision, pages 19- 27.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Word similarities (top) and word analogies (bottom) for different values of \u03b1.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Fig. 3", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Performances on 20 Newsgroups and IMDB Reviews for varying alphas. Metrics I and F refers to embeddings normalization before training.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "-u-F (\u03b1 = 0.2) 0.96792 0.96787 E-U+V-ud-F (\u03b1 = 0.4) 0.96798 0.96792 LE-U+V-0-F-t3-w 0.9666 0.96654 LE-U+V-u-F-t3-w 0.96662 0.96655 LE-U+V-ud-F-t5-w 0.96388 0.96381", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>method</td><td>sem</td><td>syn</td><td>tot</td></tr><tr><td/><td colspan=\"4\">E-U+V-0-I 84.5 \u00b1 0.4 67.33 \u00b1 0.6 74.4 \u00b1 0.1</td></tr><tr><td>enwiki</td><td>WG5-U+V U</td><td>79.4 77.8</td><td>67.5 62.1</td><td>72.6 68.9</td></tr><tr><td/><td>U+V</td><td>80.9</td><td>63.4</td><td>70.9</td></tr><tr><td/><td colspan=\"4\">E-U+V-0-I 83.8 \u00b1 0.4 72.2 \u00b1 0.4 76.7 \u00b1 0.3</td></tr><tr><td>geb</td><td>WG5-U+V U</td><td>78.7 75.7</td><td>65.2 66.8</td><td>70.7 70.4</td></tr><tr><td/><td>U+V</td><td>80.0</td><td>68.5</td><td>73.2</td></tr><tr><td/><td>PM 1.6B</td><td>80.8</td><td>61.5</td><td>70.3</td></tr><tr><td/><td>PM 6B</td><td>77.4</td><td>67.0</td><td>71.7</td></tr><tr><td/><td>BDK</td><td>80.0</td><td>68.5</td><td>73.2</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Best cross-validated alphas for methods of Table 2 (enwiki and geb).", |
|
"content": "<table><tr><td>method</td><td>sem</td><td>syn</td><td>tot</td></tr><tr><td>en E-U+V-0-I</td><td>1.8 \u00b1 0.1</td><td>\u2212\u221e</td><td>1.7 \u00b1 0.1</td></tr><tr><td>geb E-U+V-0-I</td><td>1.7 \u00b1 0.1</td><td colspan=\"2\">1.3 \u00b1 0.1 1.3 \u00b1 0.1</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"3\">: AUC and accuracy on test of 20 Newsgroups</td></tr><tr><td colspan=\"3\">multiclass classification, compared to baseline vectors.</td></tr><tr><td colspan=\"3\">Best alpha and best limit method (on validation) are</td></tr><tr><td colspan=\"2\">reported in parenthesis.</td><td/></tr><tr><td>method</td><td colspan=\"2\">20 Newsgroups AUC acc</td></tr><tr><td>U+V</td><td>96.34</td><td>65.06</td></tr><tr><td>E-U+V-0-F</td><td>96.76 (0.2)</td><td>65.86 (0.4)</td></tr><tr><td>E-U+V-u-F</td><td>96.79 (0.2)</td><td>66.30 (0.2)</td></tr><tr><td>E-U+V-ud-F</td><td>96.79 (0.4)</td><td>65.24 (0.6)</td></tr><tr><td colspan=\"2\">LE-U+V-0-F 96.65 (t3-w)</td><td>64.47 (t1)</td></tr><tr><td>LE-U+V</td><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"3\">Accuracy on test of IMDBReviews sentiment</td></tr><tr><td colspan=\"3\">analysis binary classification, with linear and with BiL-</td></tr><tr><td colspan=\"3\">STM architecture, compared to baseline vectors. Best</td></tr><tr><td colspan=\"3\">alpha and best limit method (on validation), are re-</td></tr><tr><td>ported in parenthesis.</td><td/><td/></tr><tr><td>method</td><td colspan=\"2\">IMDB Reviews acc lin acc BiLSTM</td></tr><tr><td>U+V</td><td>83.76</td><td>88.00</td></tr><tr><td>E-U+V-0-F</td><td colspan=\"2\">83.58 (2.4) 88.12 (\u22124.0)</td></tr><tr><td>E-U+V-u-F</td><td colspan=\"2\">83.72 (\u22123.0) 88.56 (\u22124.0)</td></tr><tr><td colspan=\"3\">E-U+V-ud-F 84.23 (\u22123.0) 88.48 (\u22122.2)</td></tr><tr><td>LE-U+V-0-F</td><td>84.00 (t1)</td><td>88.36 (t1)</td></tr><tr><td>LE-U+V-u-F</td><td>84.29 (t1)</td><td>88.66 (t1)</td></tr><tr><td colspan=\"3\">LE-U+V-ud-F 84.00 (t3-w) 88.49 (t3-w)</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Accuracy on Newsgroups (BatchNorm +</td></tr><tr><td>Dense).</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Accuracy on IMDBReviews with linear archi-</td></tr><tr><td>tecture (BatchNorm + Dense).</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Accuracy on IMDBReviews with BiLSTM-</td></tr><tr><td>pool architecture (Bidirectional LSTM 32 channels,</td></tr><tr><td>GlobalMaxPool1D, Dense 20 + Dropout 0.05, Dense).</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |