{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:47.855040Z"
},
"title": "Learning Geometric Word Meta-Embeddings",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Jawanpuria",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vayve Technologies",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Satya",
"middle": [],
"last": "Dev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vayve Technologies",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vayve Technologies",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Bamdev",
"middle": [],
"last": "Mishra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vayve Technologies",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a geometric framework for learning meta-embeddings of words from different embedding sources. Our framework transforms the embeddings into a common latent space, where, for example, simple averaging or concatenation of different embeddings (of a given word) is more amenable. The proposed latent space arises from two particular geometric transformations-source embedding specific orthogonal rotations and a common Mahalanobis metric scaling. Empirical results on several word similarity and word analogy benchmarks illustrate the efficacy of the proposed framework.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a geometric framework for learning meta-embeddings of words from different embedding sources. Our framework transforms the embeddings into a common latent space, where, for example, simple averaging or concatenation of different embeddings (of a given word) is more amenable. The proposed latent space arises from two particular geometric transformations-source embedding specific orthogonal rotations and a common Mahalanobis metric scaling. Empirical results on several word similarity and word analogy benchmarks illustrate the efficacy of the proposed framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings have become an integral part of modern NLP. They capture semantic and syntactic similarities and are typically used as features in training NLP models for diverse tasks like named entity tagging, sentiment analysis, and classification, to name a few. Word embeddings are learned in an unsupervised manner from large text corpora and a number of pre-trained embeddings are readily available. The quality of the word embeddings, however, depends on various factors like the size and genre of training corpora as well as the training method used. This has led to ensemble approaches for creating meta-embeddings from different original embeddings (Yin and Shutze, 2016; Coates and Bollegala, 2018; Bao and Bollegala, 2018; O'Neill and Bollegala, 2020) . Meta-embeddings are appealing because they can improve quality of embeddings on account of noise cancellation and diversity of data sources and algorithms.",
"cite_spans": [
{
"start": 660,
"end": 682,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 683,
"end": 710,
"text": "Coates and Bollegala, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 711,
"end": 735,
"text": "Bao and Bollegala, 2018;",
"ref_id": "BIBREF1"
},
{
"start": 736,
"end": 764,
"text": "O'Neill and Bollegala, 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various approaches have been proposed to learn meta-embeddings and can be broadly classified into two categories: (a) simple linear methods like averaging or concatenation, or a low-dimensional projection via singular value projection (Yin and Shutze, 2016; Coates and Bollegala, 2018) and (b) non-linear methods that aim to learn metaembeddings as shared representation using autoencoding or transformation between common representation and each embedding set (Murom\u00e4gi et al., 2017; Bao and Bollegala, 2018; O'Neill and Bollegala, 2020) .",
"cite_spans": [
{
"start": 235,
"end": 257,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 258,
"end": 285,
"text": "Coates and Bollegala, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 461,
"end": 484,
"text": "(Murom\u00e4gi et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 485,
"end": 509,
"text": "Bao and Bollegala, 2018;",
"ref_id": "BIBREF1"
},
{
"start": 510,
"end": 538,
"text": "O'Neill and Bollegala, 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we focus on simple linear methods such as averaging and concatenation for computing meta-embeddings, which are very easy to implement and have shown highly competitive performance (Yin and Shutze, 2016; Coates and Bollegala, 2018) . Due to the nature of the underlying embedding generation algorithms (Mikolov et al., 2013; Pennington et al., 2014) , correspondences between dimensions, e.g., of two embeddings x \u2208 R d and z \u2208 R d of the same word, are usually not known. Hence, averaging may be detrimental in cases where the dimensions are negatively correlated. Consider the scenario where z := \u2212x. Here, simple averaging of x and z would result in the zero vector. Similarly, when z is a (dimension-wise) permutation of x, simple averaging would result in a sub-optimal meta-embedding vector compared to averaging of aligned embeddings. Therefore, we propose to align the embeddings (of a given word) as an important first step towards generating metaembeddings.",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 217,
"end": 244,
"text": "Coates and Bollegala, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 315,
"end": 337,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 338,
"end": 362,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
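{
"text": "To make the misalignment issue concrete, the following minimal NumPy sketch (our own illustration; the vectors are toy values) contrasts plain averaging of two misaligned embeddings with averaging after an orthogonal alignment:\n\nimport numpy as np\n\nx = np.array([0.5, -1.0, 2.0])   # embedding of a word from source 1\nz = -x                           # same word from source 2, with negatively correlated dimensions\n\nplain_avg = (x + z) / 2.0        # plain averaging collapses to the zero vector\nprint(plain_avg)                 # [0. 0. 0.]\n\n# Aligning z with an orthogonal map (here simply -I) before averaging recovers the signal.\nR = -np.eye(3)                   # orthogonal: R.T @ R = I\naligned_avg = (x + R @ z) / 2.0\nprint(aligned_avg)               # [ 0.5 -1.   2. ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},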
{
"text": "To this end, we develop a geometric framework for learning meta-embeddings, by aligning different embeddings in a common latent space, where the dimensions of different embeddings (of a given word) are in coherence. Mathematically, we perform different orthogonal transformations of the source embeddings to learn a latent space along with a Mahalanobis metric that scales different features appropriately. The meta-embeddings are, subsequently, learned in the latent space, e.g., using averaging or concatenation. Empirical results on the word similarity and the word analogy tasks show that the proposed geometrically aligned metaembeddings outperform strong baselines such as the plain averaging and the plain concatenation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consider two (monolingual) embeddings x i \u2208 R d and z i \u2208 R d of a given word i in a d-dimensional space. As discussed earlier, embeddings generated from different algorithms (Turian et al., 2010; Mikolov et al., 2013; Pennington et al., 2014; Dhillon et al., 2015; Bojanowski et al., 2017) may express different characteristics (of the same word). Hence, the goal of learning a meta-embedding w i (corresponding to word i) is to generate a representation that inherits the properties of the different source embeddings (e.g., x i and z i ).",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF23"
},
{
"start": 197,
"end": 218,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 219,
"end": 243,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 244,
"end": 265,
"text": "Dhillon et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 266,
"end": 290,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Geometric Modeling",
"sec_num": "2"
},
{
"text": "Our framework imposes orthogonal transformations on the given source embeddings to enable alignment. To allow a more effective model for comparing similarity between different embeddings of a given word, we additionally induce this latent space with the Mahalanobis metric. The Mahalanobis similarity generalizes the cosine similarity measure, which is commonly used for evaluating the relatedness between word embeddings. Unlike cosine similarity, the Mahalanobis metric does not assume uncorrelated feature and it incorporates the feature correlation information from the training data (Jawanpuria et al., 2019) . The combination of orthogonal transformation and Mahalanobis metric learning allows to capture any affine relationship that may exist between word embeddings. Mathematically, this relates to the singular value decomposition of a matrix (Bonnabel and Sepulchre, 2009; Mishra et al., 2014) .",
"cite_spans": [
{
"start": 588,
"end": 613,
"text": "(Jawanpuria et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 852,
"end": 882,
"text": "(Bonnabel and Sepulchre, 2009;",
"ref_id": "BIBREF4"
},
{
"start": 883,
"end": 903,
"text": "Mishra et al., 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Geometric Modeling",
"sec_num": "2"
},
{
"text": "Overall, we formulate the problem of learning geometric transformations -the orthogonal rotations and the metric scaling -via a binary classification problem (discussed later). The metaembeddings are subsequently computed using these transformations. The following sections formalize the proposed latent space and meta-embedding models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Geometric Modeling",
"sec_num": "2"
},
{
"text": "In this section, we learn the latent space using geometric transformations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},
{
"text": "Let U \u2208 M d and V \u2208 M d be orthogonal transformations for embeddings x i and z i , respectively, for all words i = 1, . . . , n. Here M d represents the set of d \u00d7 d orthogonal matrices. The aligned embeddings in the latent space corresponding to x i and z i can then be expressed as Ux i and Vz i , respectively. We next induce the Mahalanobis metric B in this (aligned) latent space, where B is a d \u00d7 d symmetric positive-definite matrix. In this latent space, the similarity between the two embeddings x i and z i can be obtained by the following expression of their dot product: (Ux i ) B(Vz i ). This expression may also be interpreted as the standard dot product (cosine similarity) between B 1 2 Ux i and B 1 2 Vz i , where B 1 2 denotes the matrix square root of the symmetric positive definite matrix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},
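{
"text": "As a concrete illustration, the following NumPy sketch (our own; the transformations are randomly generated stand-ins) computes the latent-space similarity (Ux_i)^T B (Vz_i) and checks that it equals the ordinary dot product of B^{1/2}Ux_i and B^{1/2}Vz_i:\n\nimport numpy as np\nfrom scipy.linalg import sqrtm\n\nd = 4\nrng = np.random.default_rng(0)\n\n# Hypothetical transformations: random orthogonal U, V and a symmetric positive-definite B.\nU, _ = np.linalg.qr(rng.standard_normal((d, d)))\nV, _ = np.linalg.qr(rng.standard_normal((d, d)))\nA = rng.standard_normal((d, d))\nB = A @ A.T + d * np.eye(d)\n\nx, z = rng.standard_normal(d), rng.standard_normal(d)\n\nsim = (U @ x) @ B @ (V @ z)                       # (U x)^T B (V z)\nB_half = np.real(sqrtm(B))                        # matrix square root B^{1/2}\nsim_latent = (B_half @ U @ x) @ (B_half @ V @ z)  # dot product in the latent space\nprint(np.allclose(sim, sim_latent))               # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},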
{
"text": "The orthogonal transformations as well as the Mahalanobis metric are learned via the following binary classification problem: pairs of word embeddings {x i , z i } of the same word i belong to the positive class while pairs {x i , z j } belong to the negative class (for i = j). We consider the similarity between the two embeddings in the latent space as the decision function of the proposed binary classification problem. Let X = [x 1 , . . . , x n ] \u2208 R d\u00d7n and Z = [z 1 , . . . , z n ] \u2208 R d\u00d7n be the word embedding matrices for n words, where the columns correspond to different words. In addition, let Y denote the label matrix, where Y ii = 1 for i = 1, . . . , n and Y ij = 0 for i = j. The proposed optimization problem employs the simple to optimize square loss function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min U,V\u2208M d , B 0 X U BVZ \u2212 Y 2 + C B 2 ,",
"eq_num": "(1)"
}
],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},
{
"text": "where \u2022 is the Frobenius norm (which generalizes the 2-norm to matrices) and C > 0 is the regularization parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},
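{
"text": "For completeness, a NumPy sketch of evaluating the objective in (1) is given below (our own illustration, not necessarily the released implementation). It assumes Y is the n x n identity matrix, as defined above, and uses the d x d Gram matrices of X and Z so that the cost is O(nd^2 + d^3) rather than the O(n^2 d) of forming the full residual:\n\nimport numpy as np\n\ndef geo_objective(U, V, B, X, Z, C):\n    # X, Z: d x n embedding matrices (columns are words); U, V orthogonal; B symmetric positive definite.\n    n = X.shape[1]\n    M = U.T @ B @ V                     # d x d map, so that X^T M Z should approximate Y = I_n\n    Gx, Gz = X @ X.T, Z @ Z.T           # d x d Gram matrices, O(n d^2)\n    quad = np.trace(M.T @ Gx @ M @ Gz)  # ||X^T M Z||_F^2\n    cross = np.sum(X * (M @ Z))         # trace(X^T M Z) = sum_i x_i^T M z_i\n    return quad - 2.0 * cross + n + C * np.sum(B ** 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Latent Space",
"sec_num": "2.1"
},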
{
"text": "Meta-embeddings constructed by averaging or concatenating the given word embeddings have been shown to obtain highly competitive performance (Yin and Shutze, 2016; Coates and Bollegala, 2018) . Hence, we propose to learn metaembeddings as averaging or concatenation in the learned latent space.",
"cite_spans": [
{
"start": 141,
"end": 163,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 164,
"end": 191,
"text": "Coates and Bollegala, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Averaging and Concatenation in Latent Space",
"sec_num": "2.2"
},
{
"text": "The meta-embedding w i of a word i is generated as an average of the (aligned) word embeddings in the latent space. The latent space representation of x i , as a function of orthogonal transformation U and metric B, is B 1 2 Ux i (Jawanpuria et al., 2019) . Hence, we obtain",
"cite_spans": [
{
"start": 230,
"end": 255,
"text": "(Jawanpuria et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Averaging",
"sec_num": null
},
{
"text": "w i = average(B 1 2 Ux i , B 1 2 Vz i ) = (B 1 2 Ux i + B 1 2 Vz i )/2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Averaging",
"sec_num": null
},
{
"text": "It should be noted that the proposed geometryaware averaging approach generalizes the plain averaging method proposed in (Coates and Bollegala, 2018) , which is now a particular case in our framework by choosing U, V, and B as identity matrices.",
"cite_spans": [
{
"start": 121,
"end": 149,
"text": "(Coates and Bollegala, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Averaging",
"sec_num": null
},
{
"text": "We next propose to concatenate the aligned embeddings in the learned latent space. For a given word i, with x i and z i as different source embeddings, the meta-embeddings w i learned by the proposed geometry-aware concatenation model is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Concatenation",
"sec_num": null
},
{
"text": "w i = concatenation(B 1 2 Ux i , B 1 2 Vz i ) = [(B 1 2 Ux i ) , (B 1 2 Vz i ) ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Concatenation",
"sec_num": null
},
{
"text": ". The plain concatenation method studied in (Yin and Shutze, 2016 ) is a special case of the proposed geometry-aware concatenation (by setting U, V, and B as identity matrices).",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "(Yin and Shutze, 2016",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Concatenation",
"sec_num": null
},
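{
"text": "A minimal sketch of both constructions, assuming the transformations U, V and the metric B have already been learned (the function and variable names below are ours):\n\nimport numpy as np\nfrom scipy.linalg import sqrtm\n\ndef geo_meta_embeddings(X, Z, U, V, B, mode='avg'):\n    # X, Z: d x n source embedding matrices (columns are words).\n    # Returns a d x n matrix for mode='avg' (Geo-AVG) or a 2d x n matrix otherwise (Geo-CONC).\n    B_half = np.real(sqrtm(B))               # B^{1/2}\n    Xl, Zl = B_half @ U @ X, B_half @ V @ Z  # aligned latent representations\n    if mode == 'avg':\n        return (Xl + Zl) / 2.0\n    return np.vstack([Xl, Zl])\n\n# Plain AVG and CONC are recovered by passing identity matrices for U, V, and B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry-Aware Concatenation",
"sec_num": null
},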
{
"text": "The proposed optimization problem (1) employs square loss function and 2 -norm regularization, both of which are well-studied in the literature. The search space is the Cartesian product of the set of d-dimensional symmetric positive definite matrices and the set of d-dimensional orthogonal matrices, both of which are smooth spaces. Such sets have well-known Riemannian manifold structure (Lee, 2003) that allows to propose computationally efficient iterative optimization algorithms. A manifold may be viewed as a generalization of the notion of surface to higher dimensions. We employ the popular Riemannian optimization framework (Absil et al., 2008) to solve (1). Recently, Jawanpuria et al. (2019) have studied a similar optimization problem in the context of learning cross-lingual word embeddings.",
"cite_spans": [
{
"start": 391,
"end": 402,
"text": "(Lee, 2003)",
"ref_id": "BIBREF13"
},
{
"start": 635,
"end": 655,
"text": "(Absil et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "2.3"
},
{
"text": "Our implementation is done using the Pymanopt toolbox (Townsend et al., 2016) , which is a publicly available Python toolbox for Riemannian optimization algorithms. In particular, we use the conjugate gradient algorithm of Pymanopt. For this, we just need to supply the objective function of (1). This can be done efficiently as the numerical cost of computing the objective function is O(nd 2 ). The overall computational cost of our implementation scales linearly with the number of words in the vocabulary sets. Our code is available at https: //github.com/SatyadevNtv/geo-meta-emb.",
"cite_spans": [
{
"start": 54,
"end": 77,
"text": "(Townsend et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "2.3"
},
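{
"text": "For concreteness, the following sketch shows how problem (1) might be set up with a recent Pymanopt release (2.x-style API); class and function names have changed across Pymanopt versions, so this is an assumption-laden outline rather than the released code linked above:\n\nimport autograd.numpy as anp\nimport numpy as np\nimport pymanopt\nfrom pymanopt.manifolds import Product, SpecialOrthogonalGroup, SymmetricPositiveDefinite\nfrom pymanopt.optimizers import ConjugateGradient\n\nd, n, C = 10, 200, 0.1\nrng = np.random.default_rng(0)\nX, Z = rng.standard_normal((d, n)), rng.standard_normal((d, n))  # toy stand-ins for the source embeddings\nY = np.eye(n)                                                    # positive pairs on the diagonal\n\nmanifold = Product([SpecialOrthogonalGroup(d),      # U (rotations, used here as a proxy for M_d)\n                    SpecialOrthogonalGroup(d),      # V\n                    SymmetricPositiveDefinite(d)])  # B\n\[email protected](manifold)\ndef cost(U, V, B):\n    residual = X.T @ U.T @ B @ V @ Z - Y\n    return anp.sum(residual ** 2) + C * anp.sum(B ** 2)\n\nproblem = pymanopt.Problem(manifold, cost)\nresult = ConjugateGradient(verbosity=0).run(problem)\nU_opt, V_opt, B_opt = result.point",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "2.3"
},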
{
"text": "In this section, we evaluate the performance of the proposed meta-embedding models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We consider the following standard evaluation tasks (Yin and Shutze, 2016; Coates and Bollegala, 2018) :",
"cite_spans": [
{
"start": 52,
"end": 74,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 75,
"end": 102,
"text": "Coates and Bollegala, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Tasks and Datasets",
"sec_num": "3.1"
},
{
"text": "\u2022 Word similarity: in this task, we compare the human-annotated similarity scores between pairs of words with the corresponding cosine similarity computed via the constructed metaembeddings. We report results on the following benchmark datasets: RG (Rubenstein and Goodenough, 1965) , MC (Miller and Charles, 1991) , WS (Finkelstein et al., 2001 ), MTurk (Halawi et al., 2012) , RW (Luong et al., 2013) , and SL (Hill et al., 2015) . Following previous works (Yin and Shutze, 2016; Coates and Bollegala, 2018; O'Neill and Bollegala, 2020) , we report the Spearman correlation score (higher is better) between the cosine similarity (computed via meta-embeddings) and the human scores. \u2022 Word analogy: in this task, the aim is to answer questions which have the form \"A is to B as C is to ?\" (Mikolov et al., 2013) . After generating the meta-embeddings a, b, and c (corresponding to terms A, B, and C, respectively), the answer is chosen to be the term whose meta-embedding has the maximum cosine similarity with (b \u2212 a + c) (Mikolov et al., 2013) . The benchmark datasets include MSR (Gao et al., 2014) , GL (Mikolov et al., 2013) , and SemEval (Jurgens et al., 2012) . Following previous works (Yin and Shutze, 2016; Coates and Bollegala, 2018 ; O'Neill and Bollegala, 2020), we report the percentage of correct answers for MSR and GL datasets, and the Spearman correlation score for SemEval. In both cases, a higher score implies better performance. We learn the meta-embeddings from the following publicly available 300-dimensional pre-trained word embeddings for English.",
"cite_spans": [
{
"start": 249,
"end": 282,
"text": "(Rubenstein and Goodenough, 1965)",
"ref_id": "BIBREF21"
},
{
"start": 288,
"end": 314,
"text": "(Miller and Charles, 1991)",
"ref_id": "BIBREF16"
},
{
"start": 320,
"end": 345,
"text": "(Finkelstein et al., 2001",
"ref_id": "BIBREF7"
},
{
"start": 355,
"end": 376,
"text": "(Halawi et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 382,
"end": 402,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 412,
"end": 431,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 459,
"end": 481,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 482,
"end": 509,
"text": "Coates and Bollegala, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 510,
"end": 538,
"text": "O'Neill and Bollegala, 2020)",
"ref_id": "BIBREF19"
},
{
"start": 790,
"end": 812,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 1024,
"end": 1046,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 1084,
"end": 1102,
"text": "(Gao et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 1108,
"end": 1130,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 1145,
"end": 1167,
"text": "(Jurgens et al., 2012)",
"ref_id": "BIBREF12"
},
{
"start": 1195,
"end": 1217,
"text": "(Yin and Shutze, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 1218,
"end": 1244,
"text": "Coates and Bollegala, 2018",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Tasks and Datasets",
"sec_num": "3.1"
},
{
"text": "\u2022 CBOW (Mikolov et al., 2013) : has 929 023 word embeddings trained on Google News. \u2022 GloVe (Pennington et al., 2014) Table 2 : Generalization performance of the meta-embedding algorithms on the word similarity and the word analogy tasks with GloVe and fastText source embeddings. The columns 'Avg.(WS)' and 'Avg.(WA)' correspond to the average performance on the word similarity and the word analogy tasks, respectively.",
"cite_spans": [
{
"start": 7,
"end": 29,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 92,
"end": 117,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Tasks and Datasets",
"sec_num": "3.1"
},
{
"text": "1 917 494 word embeddings trained on 42B tokens of web data from the common crawl. \u2022 fastText (Bojanowski et al., 2017) : has 2 000 000 word embeddings trained on common crawl. The meta-embeddings are learned on the common set of words from different pairs of the source embeddings. The number of common words between various source embeddings pairs are as follows: 154 077 (GloVe \u2229 CBOW), 552 168 (GloVe \u2229 fast-Text), and 641 885 (CBOW \u2229 fastText).",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Tasks and Datasets",
"sec_num": "3.1"
},
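{
"text": "Both evaluation protocols reduce to a few lines of NumPy/SciPy once the meta-embeddings are available; the sketch below is our own illustration, with emb standing for a hypothetical dictionary that maps words to meta-embedding vectors:\n\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef cos(a, b):\n    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))\n\ndef word_similarity_score(emb, pairs, human_scores):\n    # Spearman correlation between human scores and meta-embedding cosine similarities.\n    predicted = [cos(emb[w1], emb[w2]) for w1, w2 in pairs]\n    return spearmanr(predicted, human_scores).correlation\n\ndef solve_analogy(emb, a, b, c):\n    # Return the vocabulary word whose meta-embedding is closest (in cosine similarity) to b - a + c.\n    target = emb[b] - emb[a] + emb[c]\n    candidates = [w for w in emb if w not in {a, b, c}]\n    return max(candidates, key=lambda w: cos(emb[w], target))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Tasks and Datasets",
"sec_num": "3.1"
},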
{
"text": "The performance of our geometry-aware averaging and concatenation models, henceforth termed as Geo-AVG and Geo-CONC, respectively, are reported in Tables 1-3. Each table corresponds to a pair of source embeddings (from CBOW, GloVe, and fastText) and the meta-embeddings generated from the source embeddings. We report the performance of the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 181,
"text": "Tables 1-3. Each table corresponds",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.2"
},
{
"text": "\u2022 the proposed models Geo-AVG and Geo-CONC \u2022 the meta-embeddings models AVG (Coates and Bollegala, 2018) and CONC (Yin and Shutze, 2016) , which perform plain averaging and concatenation, respectively \u2022 the source embeddings, which serve as a benchmark the meta-embeddings algorithms should ideally surpass in order to justify their usage",
"cite_spans": [
{
"start": 76,
"end": 104,
"text": "(Coates and Bollegala, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 114,
"end": 136,
"text": "(Yin and Shutze, 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.2"
},
{
"text": "We observe that the proposed geometry-aware models (Geo-AVG and Geo-CONC) outperform the individual source embeddings in most datasets. Among the source embeddings, fastText performs better than CBOW and GloVe. Interestingly, we observe that the performance of the meta-embeddings generated by the proposed Geo-CONC with CBOW and GloVe (results in Table 1 ) is at par with the fast-Text embeddings (results in Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 1",
"ref_id": null
},
{
"start": 410,
"end": 417,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.2"
},
{
"text": "The proposed models also easily surpass the AVG and CONC models in both the word similarity and the word analogy tasks. In all the three tables, the proposed models obtain the best overall performance in both the tasks. This shows that the alignment of word embedding spaces with orthogonal rotations and the Mahalanobis metric improves the overall quality of the meta-embeddings. 72.2 80.1 76.9 22.1 59.7 Geo-AVG 85.5 84.6 82.9 73.6 59.7 47.4 72.3 79.9 76.9 22.0 59.6 Table 3 : Generalization performance of the meta-embedding algorithms on the word similarity and the word analogy tasks with CBOW and fastText source embeddings. The columns 'Avg.(WS)' and 'Avg.(WA)' correspond to the average performance on the word similarity and the word analogy tasks, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 469,
"end": 476,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.2"
},
{
"text": "We propose a geometric framework for learning meta-embeddings of words from various sources of word embeddings. Our framework aligns the embeddings in a common latent space. The importance of learning the latent space is shown in several benchmark datasets, where the proposed algorithms (Geo-AVG and Geo-CONC) outperforms the plain averaging and the plain concatenation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Extending the proposed geometric framework to non-linear word meta-embedding approaches and for generating sentence meta-embeddings are promising directions of future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Optimization Algorithms on Matrix Manifolds",
"authors": [
{
"first": "P.-A",
"middle": [],
"last": "Absil",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mahony",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sepulchre",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.-A. Absil, R. Mahony, and R. Sepulchre. 2008. Op- timization Algorithms on Matrix Manifolds. Prince- ton University Press, Princeton, NJ.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning word metaembeddings by autoencoding",
"authors": [
{
"first": "C",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1650--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Bao and D. Bollegala. 2018. Learning word meta- embeddings by autoencoding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1650-1661.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135- 146. https://fasttext.cc/docs/en/ english-vectors.html.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Think globally, embed locally-locally linear metaembedding of words",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bollegala, K. Hayashi, and K. Kawarabayashi. 2018. Think globally, embed locally-locally linear meta- embedding of words. In Proceedings of the Inter- national Joint Conference on Artificial Intelligence (IJCAI).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bonnabel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sepulchre",
"suffix": ""
}
],
"year": 2009,
"venue": "SIAM Journal on Matrix Analysis and Applications",
"volume": "31",
"issue": "3",
"pages": "1055--1070",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bonnabel and R. Sepulchre. 2009. Riemannian met- ric and geometric mean for positive semidefinite ma- trices of fixed rank. SIAM Journal on Matrix Analy- sis and Applications, 31(3):1055-1070.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Frustratingly easy meta-embedding -computing meta-embeddings by averaging source word embeddings",
"authors": [
{
"first": "J",
"middle": [
"N"
],
"last": "Coates",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT 2018",
"volume": "",
"issue": "",
"pages": "194--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. N. Coates and D. Bollegala. 2018. Frustratingly easy meta-embedding -computing meta-embeddings by averaging source word embeddings. In Proceedings of NAACL-HLT 2018, pages 194-198.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Eigenwords: Spectral word embeddings",
"authors": [
{
"first": "P",
"middle": [
"S"
],
"last": "Dhillon",
"suffix": ""
},
{
"first": "D",
"middle": [
"P"
],
"last": "Foster",
"suffix": ""
},
{
"first": "L",
"middle": [
"H"
],
"last": "Ungar",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Machine Learning Research",
"volume": "16",
"issue": "",
"pages": "3035--3078",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. S. Dhillon, D. P. Foster, and L. H. Ungar. 2015. Eigenwords: Spectral word embeddings. Journal of Machine Learning Research, 16:3035-3078.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "L",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. 2001. Placing search in context: The concept revisited. In Proceed- ings of the 10th international conference on World Wide Web. ACM, pages 406-414.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Wordrep: A benchmark for research on learning word representation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "T.-Y",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1407.1640"
]
},
"num": null,
"urls": [],
"raw_text": "B. Gao, J. Bian, and T.-Y. Liu. 2014. Wor- drep: A benchmark for research on learning word representation. Technical report, arXiv preprint arXiv:1407.1640.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Large-scale learning of word relatedness with constraint",
"authors": [
{
"first": "G",
"middle": [],
"last": "Halawi",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Koren",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1406--1414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Halawi, G. Dror, E. Gabrilovich, and Y. Koren. 2012. Large-scale learning of word relatedness with con- straint. In Proceedings of the 18th ACM SIGKDD in- ternational conference on Knowledge discovery and data mining, pages 1406-1414.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Hill, R. Reichart, and A. Korhonen. 2015. Simlex- 999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, pages 665-695.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning multilingual word embeddings in latent metric space: A geometric approach",
"authors": [
{
"first": "P",
"middle": [],
"last": "Jawanpuria",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Balgovind",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "107--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Jawanpuria, A. Balgovind, A. Kunchukuttan, and B. Mishra. 2019. Learning multilingual word em- beddings in latent metric space: A geometric ap- proach. Transactions of the Association for Com- putational Linguistics, 7:107-120.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semeval-2012 task 2: Measuring degrees of relational similarity",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Jurgens",
"suffix": ""
},
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "K",
"middle": [
"J"
],
"last": "Holyoak",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "356--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. A. Jurgens, P. D. Turney, S. M. Mohammad, and K. J. Holyoak. 2012. Semeval-2012 task 2: Measur- ing degrees of relational similarity. In Proceedings of the First Joint Conference on Lexical and Com- putational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 356-364.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Introduction to smooth manifolds, second edition",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Graduate Texts in Mathematics",
"volume": "218",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Lee. 2003. Introduction to smooth manifolds, sec- ond edition, volume 218 of Graduate Texts in Math- ematics. Springer-Verlag, New York.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Luong, R. Socher, and C. D. Manning. 2013. Bet- ter word representations with recursive neural net- works for morphology. In Proceedings of the Seven- teenth Conference on Computational Natural Lan- guage Learning (CoNLL), pages 104-113.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (NeurIPS), pages 3111-3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Contextual correlates of semantic similarity. Language and congnitive processes",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "W",
"middle": [
"G"
],
"last": "Charles",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. A. Miller and W. G. Charles. 1991. Contextual cor- relates of semantic similarity. Language and cong- nitive processes, pages 1-28.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Fixed-rank matrix factorizations and Riemannian low-rank optimization",
"authors": [
{
"first": "B",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bonnabel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sepulchre",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Statistics",
"volume": "29",
"issue": "3",
"pages": "591--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Mishra, G. Meyer, S. Bonnabel, and R. Sepulchre. 2014. Fixed-rank matrix factorizations and Rieman- nian low-rank optimization. Computational Statis- tics, 29(3):591-621.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Linear ensembles of word embedding models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Murom\u00e4gi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sirts",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Laur",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "96--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Murom\u00e4gi, K. Sirts, and S. Laur. 2017. Linear en- sembles of word embedding models. In Proceedings of the 21st Nordic Conference on Computational Lin- guistics, pages 96-104.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Meta-embedding as auxiliary task regularization",
"authors": [
{
"first": "J",
"middle": [],
"last": "O'neill",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the European Conference on Artificial Intelligence (ECAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. O'Neill and D. Bollegala. 2020. Meta-embedding as auxiliary task regularization. In Proceedings of the European Conference on Artificial Intelligence (ECAI).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pennington, R. Socher, and C. D. Manning. 2014. Glove: Global vectors for word representation. Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 14:1532-1543.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Contextual correlates of synonymy",
"authors": [
{
"first": "H",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "Communications of ACM",
"volume": "",
"issue": "",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Rubenstein and J. B. Goodenough. 1965. Contex- tual correlates of synonymy. Communications of ACM, pages 627-633.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pymanopt: A python toolbox for optimization on manifolds using automatic differentiation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Townsend",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Koep",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weichwald",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Machine Learning Research",
"volume": "17",
"issue": "137",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Townsend, N. Koep, and S. Weichwald. 2016. Py- manopt: A python toolbox for optimization on mani- folds using automatic differentiation. Journal of Ma- chine Learning Research, 17(137):1-5.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word representations: a simple and general method for semisupervised learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Turian, L. Ratinov, and B. Bengio. 2010. Word rep- resentations: a simple and general method for semi- supervised learning. In Proceedings of the Annual Meeting of the Association of Computational Lin- guistics (ACL), pages 384-394.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning word metaembeddings",
"authors": [
{
"first": "W",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shutze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association of Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1351--1360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Yin and H. Shutze. 2016. Learning word meta- embeddings. In Proceedings of the 54th Annual Meeting of the Association of Computational Lin- guistics (ACL), pages 1351-1360.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Model</td><td colspan=\"5\">RG MC WS MTurk RW SL Avg.(WS) MSR GL SemEvaL Avg.(WA)</td></tr><tr><td>CBOW</td><td>76.1 80.0 77.2 68.4 53.4 44.2</td><td>66.5</td><td>71.7 55.4</td><td>20.4</td><td>49.2</td></tr><tr><td>GloVe</td><td>82.9 84.0 79.6 70.0 48.7 45.3</td><td>68.4</td><td>69.3 75.2</td><td>18.6</td><td>54.4</td></tr><tr><td>CONC</td><td>81.1 84.6 81.4 71.9 54.6 46.0</td><td>69.9</td><td>76.6 69.9</td><td>20.1</td><td>55.5</td></tr><tr><td>AVG</td><td>81.5 83.7 79.4 72.1 52.9 46.2</td><td>69.3</td><td>73.7 66.9</td><td>19.7</td><td>53.4</td></tr><tr><td colspan=\"2\">Geo-CONC 86.0 85.0 81.2 70.5 55.6 48.2</td><td>71.1</td><td>78.1 73.3</td><td>19.9</td><td>57.1</td></tr><tr><td>Geo-AVG</td><td>85.8 83.5 81.2 69.1 55.7 48.2</td><td>70.6</td><td>77.3 72.3</td><td>19.5</td><td>56.3</td></tr><tr><td>Model</td><td colspan=\"5\">RG MC WS MTurk RW SL Avg.(WS) MSR GL SemEvaL Avg.(WA)</td></tr><tr><td>GloVe</td><td>82.9 84.0 79.6 70.0 48.7 45.3</td><td>68.4</td><td>69.3 75.2</td><td>18.6</td><td>54.4</td></tr><tr><td>fastText</td><td>83.8 82.5 83.5 73.3 58.0 46.4</td><td>71.2</td><td>78.7 71.0</td><td>22.5</td><td>57.4</td></tr><tr><td>CONC</td><td>83.8 82.5 83.4 73.3 57.9 46.4</td><td>71.2</td><td>79.8 71.7</td><td>22.5</td><td>58.0</td></tr><tr><td>AVG</td><td>83.4 82.1 83.5 73.3 58.0 46.5</td><td>71.1</td><td>79.7 71.7</td><td>22.4</td><td>57.9</td></tr><tr><td colspan=\"2\">Geo-CONC 83.7 84.0 82.6 74.6 55.1 48.4</td><td>71.4</td><td>80.4 79.3</td><td>21.5</td><td>60.4</td></tr><tr><td>Geo-AVG</td><td>83.6 82.0 82.7 74.3 57.0 48.4</td><td>71.3</td><td>79.1 71.1</td><td>23.1</td><td>57.8</td></tr><tr><td/><td/><td/><td/><td/><td>: has</td></tr></table>",
"num": null,
"html": null,
"text": "Table 1: Generalization performance of the meta-embedding algorithms on the word similarity and the word analogy tasks with GloVe and CBOW source embeddings. The columns 'Avg.(WS)' and 'Avg.(WA)' correspond to the average performance on the word similarity and the word analogy tasks, respectively."
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>CBOW</td><td>76.1 80.0 77.2 68.4 53.4 44.2</td><td>66.5</td><td>71.7 55.4</td><td>20.4</td><td>49.2</td></tr><tr><td>fastText</td><td>83.8 82.5 83.5 73.3 58.0 46.4</td><td>71.2</td><td>78.7 71.0</td><td>22.5</td><td>57.4</td></tr><tr><td>CONC</td><td>83.8 82.5 83.5 73.6 59.9 46.4</td><td>71.6</td><td>79.9 75.8</td><td>22.5</td><td>59.4</td></tr><tr><td>AVG</td><td>83.7 82.5 83.4 73.7 59.8 46.4</td><td>71.6</td><td>79.9 75.8</td><td>22.5</td><td>59.4</td></tr></table>",
"num": null,
"html": null,
"text": "ModelRG MC WS MTurk RW SL Avg.(WS) MSR GL SemEvaL Avg.(WA) Geo-CONC 85.3 84.3 82.9 73.6 59.7 47.4"
}
}
}
}