{
"paper_id": "2020.repl4nlp-1.7",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:16.457115Z"
},
"title": "Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Work on projection-based induction of cross-lingual word embedding spaces (CLWEs) predominantly focuses on the improvement of the projection (i.e., mapping) mechanisms. In this work, in contrast, we show that a simple method for post-processing monolingual embedding spaces facilitates learning of the cross-lingual alignment and, in turn, substantially improves bilingual lexicon induction (BLI). The post-processing method we examine is grounded in the generalisation of first- and second-order monolingual similarities to nth-order similarity. By post-processing monolingual spaces before the cross-lingual alignment, the method can be coupled with any projection-based method for inducing CLWE spaces. We demonstrate the effectiveness of this simple monolingual post-processing across a set of 15 typologically diverse languages (i.e., 15\u00d714 BLI setups), and in combination with two different projection methods.",
"pdf_parse": {
"paper_id": "2020.repl4nlp-1.7",
"_pdf_hash": "",
"abstract": [
{
"text": "Work on projection-based induction of cross-lingual word embedding spaces (CLWEs) predominantly focuses on the improvement of the projection (i.e., mapping) mechanisms. In this work, in contrast, we show that a simple method for post-processing monolingual embedding spaces facilitates learning of the cross-lingual alignment and, in turn, substantially improves bilingual lexicon induction (BLI). The post-processing method we examine is grounded in the generalisation of first- and second-order monolingual similarities to nth-order similarity. By post-processing monolingual spaces before the cross-lingual alignment, the method can be coupled with any projection-based method for inducing CLWE spaces. We demonstrate the effectiveness of this simple monolingual post-processing across a set of 15 typologically diverse languages (i.e., 15\u00d714 BLI setups), and in combination with two different projection methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cross-lingual word embeddings (CLWEs) are a mainstay of modern cross-lingual NLP (Ruder et al., 2019b). CLWE models induce a shared cross-lingual vector space in which words with similar meanings obtain similar vectors regardless of their language. Their usefulness has been attested in tasks such as bilingual lexicon induction (BLI) (Gouws et al., 2015; Heyman et al., 2017), information retrieval (Litschko et al., 2018), machine translation (Artetxe et al., 2018b), document classification (Klementiev et al., 2012), and many others (Ruder et al., 2019b).",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Ruder et al., 2019b)",
"ref_id": "BIBREF31"
},
{
"start": 336,
"end": 356,
"text": "(Gouws et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 357,
"end": 377,
"text": "Heyman et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 402,
"end": 425,
"text": "(Litschko et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 448,
"end": 471,
"text": "(Artetxe et al., 2018b;",
"ref_id": "BIBREF4"
},
{
"start": 498,
"end": 523,
"text": "(Klementiev et al., 2012)",
"ref_id": "BIBREF20"
},
{
"start": 542,
"end": 563,
"text": "(Ruder et al., 2019b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Importantly, CLWEs are one of the central mechanisms for facilitating transfer of language technologies to low-resource languages, which often lack sufficient bilingual signal for transfer via machine translation. Lack of language resources is the main reason for the popularity of the so-called projection-based CLWE methods (Mikolov et al., 2013a; Artetxe et al., 2016, 2018a). These models align two independently trained monolingual word vector spaces post-hoc, using limited bilingual supervision in the form of several hundred to several thousand word translation pairs (Mikolov et al., 2013a; Vuli\u0107 and Korhonen, 2016). Some models even align the monolingual spaces using only identical strings (Smith et al., 2017; S\u00f8gaard et al., 2018) or numerals (Artetxe et al., 2017). The most recent work focuses on fully unsupervised CLWE induction, extracting seed translation lexicons by relying on topological similarities between the monolingual spaces (Artetxe et al., 2018a; Hoshen and Wolf, 2018; Alaux et al., 2019).",
"cite_spans": [
{
"start": 331,
"end": 354,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF24"
},
{
"start": 355,
"end": 375,
"text": "Artetxe et al., 2016",
"ref_id": "BIBREF1"
},
{
"start": 376,
"end": 399,
"text": "Artetxe et al., , 2018a",
"ref_id": "BIBREF3"
},
{
"start": 598,
"end": 621,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF24"
},
{
"start": 622,
"end": 647,
"text": "Vuli\u0107 and Korhonen, 2016;",
"ref_id": "BIBREF38"
},
{
"start": 725,
"end": 745,
"text": "(Smith et al., 2017;",
"ref_id": "BIBREF34"
},
{
"start": 746,
"end": 767,
"text": "S\u00f8gaard et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 780,
"end": 802,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 974,
"end": 996,
"text": "Artetxe et al., 2018a;",
"ref_id": "BIBREF3"
},
{
"start": 997,
"end": 1019,
"text": "Hoshen and Wolf, 2018;",
"ref_id": "BIBREF16"
},
{
"start": 1020,
"end": 1039,
"text": "Alaux et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we do not focus on the projection itself: rather, we investigate a transformation of the input monolingual word vector spaces that facilitates the projection and leads to higher-quality CLWEs. Regardless of the actual projection method, the quality of the input monolingual spaces has a profound impact on the induced shared cross-lingual space and, in turn, on the quality of the induced bilingual lexicons. We demonstrate that simple unsupervised post-processing of monolingual embedding spaces leads to substantial BLI performance gains across a large number of language pairs. Our work is inspired by the observation that monolingual \"embeddings capture more information than what is immediately obvious\" (Artetxe et al., 2018c). In other words, the information surfaced in the pretrained monolingual vector spaces may not be optimal for an application such as word-level translation (BLI).",
"cite_spans": [
{
"start": 709,
"end": 732,
"text": "(Artetxe et al., 2018c)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We rely on the monolingual post-processing method of Artetxe et al. (2018c): a linear transformation controlled by a single parameter that adjusts the similarity order of the input embedding spaces. We demonstrate that applying this transformation on both monolingual spaces before any standard projection-based CLWE framework yields consistent BLI gains for a wide array of languages. We run a large-scale BLI evaluation with 15 typologically diverse languages (i.e., 15\u00d714 = 210 BLI setups) and show that this simple monolingual post-processing yields gains in 183/210 setups over the current state-of-the-art BLI models, which combine self-learning (Artetxe et al., 2018a) with (weak) word-level supervision. We further show that this monolingual post-processing yields improvements on other BLI datasets, for different projection-based CLWE models, and also for BLI over 210 pairs of similar (major European) languages (Dubossarsky et al., 2020), indicating the importance and robustness of monolingual post-processing for BLI.",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "Artetxe et al. (2018c)",
"ref_id": "BIBREF5"
},
{
"start": 650,
"end": 673,
"text": "(Artetxe et al., 2018a)",
"ref_id": "BIBREF3"
},
{
"start": 914,
"end": 940,
"text": "(Dubossarsky et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Projection-Based CLWEs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Preliminaries. Projection-based CLWE models learn a linear projection between two independently trained monolingual spaces - X (source language L_s) and Z (target language L_t) - using a word translation dictionary D to guide the alignment. X_D \u2282 X and Z_D \u2282 Z denote the row-aligned subsets of X and Z containing vectors of aligned words from D. X_D and Z_D are used to learn orthogonal projections W_x and W_z defining the bilingual space: Y = XW_x \u222a ZW_z. While (weakly) supervised methods start from a readily available dictionary D, fully unsupervised models automatically induce the seed dictionary D (i.e., from monolingual data). 1 Furthermore, it has been empirically validated (Artetxe et al., 2017) that applying an iterative self-learning procedure leads to consistent BLI improvements, especially for distant languages and in low-data regimes. In a nutshell, at each self-learning iteration k, a dictionary D^(k) is first used to learn the joint space",
"cite_spans": [
{
"start": 639,
"end": 640,
"text": "1",
"ref_id": null
},
{
"start": 688,
"end": 710,
"text": "(Artetxe et al., 2017;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
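The orthogonal projections W_x and W_z mentioned above are typically obtained in closed form by solving a Procrustes problem. The following is an illustrative NumPy sketch, not the authors' code: it shows the single-matrix variant that maps the source space directly onto the target space, and the function name is ours.

```python
import numpy as np

def procrustes_projection(X_D, Z_D):
    """Closed-form solution of min_W ||X_D W - Z_D||_F subject to W
    being orthogonal, where X_D and Z_D are the row-aligned embedding
    matrices of the word pairs in the seed dictionary D."""
    # The orthogonal factor of the SVD of X_D^T Z_D solves the problem.
    U, _, Vt = np.linalg.svd(X_D.T @ Z_D)
    return U @ Vt  # orthogonal map from the source to the target space
```

VecMap-style frameworks apply the same closed-form machinery, but learn two matrices that map both monolingual spaces into a shared space.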
{
"text": "Y^(k) = XW_x^(k) \u222a ZW_z^(k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The mutual cross-lingual nearest neighbours in Y^(k) are then used to extract the new dictionary D^(k+1). Relying on mutual nearest neighbours partially removes the noise, leading to better performance. For more technical details on self-learning, we refer the reader to prior work (Ruder et al., 2019a).",
"cite_spans": [
{
"start": 52,
"end": 55,
"text": "(k)",
"ref_id": null
},
{
"start": 286,
"end": 307,
"text": "(Ruder et al., 2019a;",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
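The dictionary-extraction step of self-learning described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes the rows of both projected matrices are length-normalised so that dot products equal cosine similarities, and the function name is ours.

```python
import numpy as np

def mutual_nn_dictionary(Xp, Zp):
    """Extract the new seed dictionary D^(k+1): pairs (i, j) such that
    target word j is the nearest neighbour of source word i in the joint
    space AND source word i is the nearest neighbour of target word j."""
    sims = Xp @ Zp.T              # cross-lingual similarity matrix
    nn_s2t = sims.argmax(axis=1)  # best target for each source word
    nn_t2s = sims.argmax(axis=0)  # best source for each target word
    return [(i, j) for i, j in enumerate(nn_s2t) if nn_t2s[j] == i]
```

Keeping only the mutual neighbours is what filters out much of the noise between self-learning iterations.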
{
"text": "Motivation. Most existing CLWE models ignore the properties of the initial monolingual spaces X and Z (i.e., they are taken \"as-is\") and focus on improving the projection. However, monolingual post-processing of X and Z prior to learning the projections may facilitate the projection and be beneficial for iterative setups such as self-learning. This intuition is already confirmed by a number of monolingual transformations, e.g., \u21132-normalisation, mean centering, or whitening/dewhitening, that are \"by default\" performed by toolkits such as MUSE and VecMap (Artetxe et al., 2018b; Zhang et al., 2019). In this work, however, we investigate a transformation of the monolingual spaces which is applied before they undergo the series of standard normalisation and centering steps.",
"cite_spans": [
{
"start": 559,
"end": 582,
"text": "(Artetxe et al., 2018b;",
"ref_id": "BIBREF4"
},
{
"start": 583,
"end": 602,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Further, we build on a line of research that leverages unsupervised post-processing of monolingual word vectors (Mu et al., 2018; Wang et al., 2018; Raunak et al., 2019; Tang et al., 2019) to emphasise semantic properties over syntactic aspects, typically with small gains reported on intrinsic word similarity (e.g., SimLex-999 (Hill et al., 2015)). In this work, we empirically validate that these unsupervised post-processing techniques can also be effective in cross-lingual scenarios for low-resource BLI, even when coupled with the current state-of-the-art CLWE frameworks that rely on \"all the bells and whistles\", such as self-learning and additional vector space preprocessing.",
"cite_spans": [
{
"start": 115,
"end": 132,
"text": "(Mu et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 133,
"end": 151,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF39"
},
{
"start": 152,
"end": 172,
"text": "Raunak et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 173,
"end": 191,
"text": "Tang et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 332,
"end": 351,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Unsupervised Monolingual Post-processing. We now outline the simple post-processing method of Artetxe et al. (2018c) used in this work, and then extend it to the bilingual setup. The core idea is to generalise the notion of first- and second-order similarity (Sch\u00fctze, 1998) 2 to nth-order similarity. Let us define the (standard, first-order) similarity matrix of the source language space X as M_1(X) = XX^T (similarly for Z). The second-order similarity can then be defined as M_2(X) = XX^T XX^T, where it holds that M_2(X) = M_1(M_1(X)); the nth-order similarity is then M_n(X) = (XX^T)^n. The embeddings of words w_i and w_j are given by the rows i and j of each M_n matrix.",
"cite_spans": [
{
"start": 94,
"end": 116,
"text": "Artetxe et al. (2018c)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "We are then looking for a general linear transformation that adjusts the similarity order of the input matrices X and Z. As proven by Artetxe et al. (2018c), the nth-order similarity transformation can be obtained as",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "Artetxe et al. (2018c)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "M_n(X) = M_1(X R^{(n\u22121)/2}), with R^\u03b1 = Q\u2206^\u03b1, where Q and \u2206 are the matrices obtained via the eigendecomposition of X^T X (X^T X = Q\u2206Q^T):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "\u2206 is a diagonal matrix containing the eigenvalues of X^T X; Q is an orthogonal matrix with the eigenvectors of X^T X as columns. 3 Finally, we apply the above post-processing to both monolingual vector spaces X and Z. This results in the adjusted vector spaces X_\u03b1s = XR^\u03b1s and Z_\u03b1t = ZR^\u03b1t. The transformed spaces X_\u03b1s and Z_\u03b1t then replace the original spaces X and Z as input to any standard projection-based CLWE method.",
"cite_spans": [
{
"start": 121,
"end": 122,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
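To make the similarity-order transformation concrete, here is a minimal NumPy sketch. It is illustrative only; the reference implementation lives in the uncovec repository cited in Footnote 3, and the function name is ours.

```python
import numpy as np

def similarity_order_transform(X, alpha):
    """Compute X R^alpha with R^alpha = Q Delta^alpha, where
    X^T X = Q Delta Q^T is the eigendecomposition of X^T X.
    alpha = (n - 1) / 2 yields a space whose first-order similarity
    matrix equals the nth-order similarity (XX^T)^n of the original;
    alpha = 0 is a pure rotation that leaves similarities unchanged."""
    eigvals, Q = np.linalg.eigh(X.T @ X)     # X^T X is symmetric PSD
    eigvals = np.clip(eigvals, 1e-12, None)  # guard tiny negative values
    return X @ Q @ np.diag(eigvals ** alpha)
```

In the paper's setup this is applied independently to both monolingual spaces (X_αs = XR^αs and Z_αt = ZR^αt) before the standard normalisation, centering, and mapping steps.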
{
"text": "We evaluate the impact of unsupervised monolingual post-processing described in \u00a72 on BLI, focusing on pairs of typologically diverse languages. 4 Mean reciprocal rank (MRR) is used as the main evaluation metric, reported as MRR\u00d7100%. 5",
"cite_spans": [
{
"start": 145,
"end": 146,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Training and Test Data. We exploit the training and test dictionaries compiled from PanLex (Kamholz et al., 2014): the data encompasses 15 diverse languages listed in Table 1 and a total of 210 distinct L_s \u2192 L_t BLI (Footnote 3: Although the post-processing motivation stems from the desire to adjust discrete similarity orders, note that \u03b1 is in fact a continuous parameter which can be carefully fine-tuned; negative values are also allowed. The code is available at: https://github.com/artetxem/uncovec.) (Footnote 4: The focus of this work is on the standard BLI task; however, it has recently been shown that some downstream tasks strongly correlate with BLI.)",
"cite_spans": [
{
"start": 91,
"end": 113,
"text": "(Kamholz et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 221,
"end": 222,
"text": "3",
"ref_id": null
},
{
"start": 502,
"end": 503,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "5 Our findings also hold for Precision@M, for M \u2208 {1, 5}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "setups. 6 In addition, we evaluate on 15 European languages (i.e., 210 pairs) from Dubossarsky et al. (2020), 7 and on diverse language pairs from the BLI evaluation suite of . Training and test dictionaries in all setups contain 5K and 2K word translation pairs, respectively. We create smaller training dictionaries (e.g., spanning 1K training translation pairs) by taking the most frequent pairs from the 5K dictionaries.",
"cite_spans": [
{
"start": 8,
"end": 9,
"text": "6",
"ref_id": null
},
{
"start": 83,
"end": 108,
"text": "Dubossarsky et al. (2020)",
"ref_id": "BIBREF9"
},
{
"start": 111,
"end": 112,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Monolingual Embeddings. We use 300-dimensional vectors for all languages, pretrained on Common Crawl and Wikipedia with fastText (Bojanowski et al., 2017). Projection-Based Framework. We base the induction of projection-based CLWEs on the well-known VecMap framework (Artetxe et al., 2018b); 9 it shows very competitive and robust BLI performance, especially for distant pairs, according to recent comparative studies (Doval et al., 2019). We analyse the impact of the unsupervised monolingual post-processing from \u00a72 by (1) feeding the original vectors X and Z to VecMap (BASELINE), and then by (2) feeding their post-processed variants X_\u03b1s and Z_\u03b1t (POSTPROC). We experiment with projection model variants without and with self-learning, and with different initial dictionary sizes (5K and 1K). Note that the POSTPROC variant requires tuning of two hyper-parameters: \u03b1_s and \u03b1_t. Due to the lack of development sets for BLI experiments, we tune the two \u03b1-parameters on a single language pair (BG-CA) via cross-validation; we grid-search over the following values: [\u22120.5, \u22120.25, \u22120.15, 0, 0.15, 0.25, 0.5]. We then keep them fixed to \u03b1_s = \u22120.25, \u03b1_t = 0.15 in all subsequent experiments.",
"cite_spans": [
{
"start": 128,
"end": 152,
"text": "(Bojanowski et al., 2017",
"ref_id": "BIBREF6"
},
{
"start": 418,
"end": 437,
"text": "Doval et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Main BLI results averaged over each source language (L_s) are provided in Table 2, while additional results per language pair are available in (Footnote 6: github.com/cambridgeltl/panlex-bli. For a detailed procedure on how the lexicons were obtained from PanLex, we refer the reader to the work of .) (Footnote 7: The languages are English, German, Dutch, Swedish, Danish, Italian, Portuguese, Spanish, French, Romanian, Croatian, Polish, Russian, Czech, and Bulgarian.)",
"cite_spans": [
{
"start": 145,
"end": 146,
"text": "6",
"ref_id": null
},
{
"start": 292,
"end": 293,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "8 Experiments with other monolingual vectors, such as the original fastText and skip-gram (Mikolov et al., 2013b) vectors trained on Wikipedia, show the same trends in the final results.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Mikolov et al., 2013b",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "(Footnote 9: https://github.com/artetxem/vecmap) the supplemental material. We also observe performance gains with a \"pure\" supervised model variant (i.e., without self-learning), but for clarity, we focus our analysis on the more powerful baseline with self-learning. We note improvements in 183/210 BLI setups (seed dictionary size: 5K) and in 181/210 setups (size: 1K) over the projection-based baselines that held the previous peak scores using the same data. This validates our intuition that monolingual vectors store more information, which needs to be \"uncovered\" via monolingual post-processing. The effect of monolingual post-processing persists after applying other perturbations such as \u21132-normalisation or mean centering. For some languages - e.g., FI, TR, NO - we achieve gains in all BLI setups with those languages as sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "What is more, we have not carefully fine-tuned \u03b1_s and \u03b1_t: even higher scores can be achieved by finer-grained fine-tuning in the future. For instance, setting (\u03b1_s, \u03b1_t) = (\u22120.5, 0.25) instead of (\u22120.25, 0.15) for TR-BG increases the BLI score from 37.8 to 39.5; the previous peak score with BASELINE was 35.1. The baseline mapping is simply obtained by setting (\u03b1_s, \u03b1_t) = (0, 0), and the tuned post-processing validated in our work should thus be considered a tunable option for any projection-based CLWE method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We further probe the robustness of the unsupervised post-processing by running experiments on an additional BLI evaluation set and with another mapping model: RCSLS. While we again observe gains across a range of different model variants and with different seed dictionary sizes, we summarise a selection of results in Table 3. Finally, small but consistent improvements extend also to the set of 15 European languages from Dubossarsky et al. (2020) (see Footnote 7): POSTPROC yields gains on average for all 15/15 source languages, and across 173/210 setups (5K seed dictionary); the global average improves from 43.9 (the strongest BASELINE) to 44.7. In summary, these results further underline the usefulness of the monolingual post-processing method.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We have demonstrated a simple and effective method for improving bilingual lexicon induction (BLI) with projection-based cross-lingual word embeddings. The method is based on standalone unsupervised post-processing of initial monolingual word embeddings before mapping, and as such applicable to any projection-based CLWE method. We have verified the importance and robustness of this monolingual post-processing with a wide range of (dis)similar language pairs as well as in different BLI setups and with different CLWE methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In future work, we will test other unsupervised post-processors, and also probe similar methods that inject external lexical knowledge into monolingual word vectors towards improved BLI. We also plan to probe whether similar gains still hold with recently proposed, more sophisticated self-learning methods (Karan et al., 2020) and non-linear mapping-based CLWE methods (Mohiuddin and Joty, 2020). Another idea is to apply a similar principle to contextualised word representations in cross-lingual settings (Schuster et al., 2019; Liu et al., 2019).",
"cite_spans": [
{
"start": 301,
"end": 321,
"text": "(Karan et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 361,
"end": 386,
"text": "Mohiuddin and Joty, 2020)",
"ref_id": "BIBREF26"
},
{
"start": 504,
"end": 527,
"text": "(Schuster et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 528,
"end": 545,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "We report main BLI results for all 15 \u00d7 14 = 210 language pairs based on PanLex training and test data in the supplemental material, grouped by the source language, and for two dictionary sizes: |D| = 1,000 and |D| = 5,000 (similar relative performance is also observed with other dictionary sizes, e.g., |D| = 500). The results are provided in Table 4-Table 18, and they are the basis of the results reported in the main paper. The language codes are available in Table 1 (in the main paper). As mentioned in the main paper, all results are obtained with the two \u03b1-hyperparameters fixed to \u03b1_S = \u22120.25, \u03b1_T = 0.15, without any further fine-tuning. A more careful, language pair-specific fine-tuning results in even higher performance for many language pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 369,
"text": "Table 4-Table 18",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "In all tables, BASELINE refers to the bestperforming weakly supervised projection-based approach without and with self-learning, as reported in a recent comparative study of ; 5k and 1k denote the seed dictionary D size. The scores in bold indicate improvements over the BASELINE methods. All results are reported as MRR scores: the MRR score of .xyz should be read as xy.z% (e.g., the score of .432 can be read as 43.2%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "(The actual tables with the full results in all BLI setups start on the next page.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "Table 4: All BLI scores (MRR) with Bulgarian (BG) as the source language. Table 5: All BLI scores (MRR) with Catalan (CA) as the source language. [The per-pair score rows of these tables were garbled during PDF extraction and are omitted.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "All BLI scores (MRR) with Esperanto (EO) as the source language. [Table header and score rows were garbled during PDF extraction and are omitted.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": ". 5k) . 5k) .428 .546 .353 .369 .372 .299 .432 .342 .311 .186 .404 .405 .124 .292 BASELINE (supervised, 1k) . (supervised, 5k) . 5k) . 5k) .433 .401 .352 .239 .447 .320 .471 .253 .253 .192 .380 .407 .205 .334 BASELINE (supervised, 1k) . 1k) . 1k) .415 .392 .337 .200 .446 .289 .461 .227 .224 .150 .356 .408 .108 .319 (supervised, 5k) . 5k) . 5k) .332 .453 .324 .255 .276 .207 .302 .238 .188 .108 .229 .309 .119 .254 BASELINE (supervised, 1k) .120 .142 .077 .088 .048 .037 .077 .049 .059 .021 .071 .053 .018 .055 BASELINE (self-learning, 1k) . 1k) . 294 .440 .292 .209 .232 .144 .263 .214 .136 .069 .157 .272 .059 .201 -BG -CA -EO -ET -EU -HE -HU -ID -KA -KO -LT -NO -TH -TR BASELINE (supervised, 5k) . 5k) . 5k) .423 .430 .386 .456 .302 .386 .477 .311 .329 .258 .434 .481 .196 .370 BASELINE (supervised, 1k) . 1k) . 1k) .409 .413 .372 .447 .259 .369 .466 .307 .303 .228 .424 .477 .112 .360 (supervised, 5k) . 5k) . 5k) .401 .418 .307 .298 .212 .333 .402 .293 .213 .219 .308 .379 .238 .342 BASELINE (supervised, 1k) . 1k) . 1k) .381 .401 .280 .255 .174 .311 .388 .274 .174 .184 .255 .366 .131 .326 Table 10: All BLI scores (MRR) with Hebrew (HE) as the source language.",
"cite_spans": [
{
"start": 2,
"end": 5,
"text": "5k)",
"ref_id": null
},
{
"start": 8,
"end": 103,
"text": "5k) .428 .546 .353 .369 .372 .299 .432 .342 .311 .186 .404 .405 .124 .292 BASELINE (supervised,",
"ref_id": null
},
{
"start": 104,
"end": 107,
"text": "1k)",
"ref_id": null
},
{
"start": 110,
"end": 126,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 129,
"end": 132,
"text": "5k)",
"ref_id": null
},
{
"start": 135,
"end": 230,
"text": "5k) .433 .401 .352 .239 .447 .320 .471 .253 .253 .192 .380 .407 .205 .334 BASELINE (supervised,",
"ref_id": null
},
{
"start": 231,
"end": 234,
"text": "1k)",
"ref_id": null
},
{
"start": 237,
"end": 240,
"text": "1k)",
"ref_id": null
},
{
"start": 243,
"end": 316,
"text": "1k) .415 .392 .337 .200 .446 .289 .461 .227 .224 .150 .356 .408 .108 .319",
"ref_id": null
},
{
"start": 317,
"end": 333,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 336,
"end": 339,
"text": "5k)",
"ref_id": null
},
{
"start": 342,
"end": 437,
"text": "5k) .332 .453 .324 .255 .276 .207 .302 .238 .188 .108 .229 .309 .119 .254 BASELINE (supervised,",
"ref_id": null
},
{
"start": 438,
"end": 441,
"text": "1k)",
"ref_id": null
},
{
"start": 521,
"end": 540,
"text": "(self-learning, 1k)",
"ref_id": null
},
{
"start": 543,
"end": 546,
"text": "1k)",
"ref_id": null
},
{
"start": 549,
"end": 617,
"text": "294 .440 .292 .209 .232 .144 .263 .214 .136 .069 .157 .272 .059 .201",
"ref_id": null
},
{
"start": 696,
"end": 712,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 715,
"end": 718,
"text": "5k)",
"ref_id": null
},
{
"start": 721,
"end": 816,
"text": "5k) .423 .430 .386 .456 .302 .386 .477 .311 .329 .258 .434 .481 .196 .370 BASELINE (supervised,",
"ref_id": null
},
{
"start": 817,
"end": 820,
"text": "1k)",
"ref_id": null
},
{
"start": 823,
"end": 826,
"text": "1k)",
"ref_id": null
},
{
"start": 829,
"end": 902,
"text": "1k) .409 .413 .372 .447 .259 .369 .466 .307 .303 .228 .424 .477 .112 .360",
"ref_id": null
},
{
"start": 903,
"end": 919,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 922,
"end": 925,
"text": "5k)",
"ref_id": null
},
{
"start": 928,
"end": 1023,
"text": "5k) .401 .418 .307 .298 .212 .333 .402 .293 .213 .219 .308 .379 .238 .342 BASELINE (supervised,",
"ref_id": null
},
{
"start": 1024,
"end": 1027,
"text": "1k)",
"ref_id": null
},
{
"start": 1030,
"end": 1033,
"text": "1k)",
"ref_id": null
},
{
"start": 1036,
"end": 1109,
"text": "1k) .381 .401 .280 .255 .174 .311 .388 .274 .174 .184 .255 .366 .131 .326",
"ref_id": null
}
],
"ref_spans": [
{
"start": 618,
"end": 686,
"text": "-BG -CA -EO -ET -EU -HE -HU -ID -KA -KO -LT -NO -TH -TR",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "Hungarian: HU- -BG -CA -EO -ET -EU -FI -HE -ID -KA -KO -LT -NO -TH -TR BASELINE (supervised, 5k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": ". 5k) . 5k) .466 .495 .453 .457 .310 .418 .405 .403 .353 .293 .436 .457 .194 .387 BASELINE (supervised, 1k) . 1k) . 1k) .458 .484 .431 .443 .276 .410 .385 .406 .331 .270 .401 .447 .126 .377 (supervised, 5k) . 5k) . 5k) .307 .333 .303 .273 .225 .270 .298 .360 .205 .203 .242 .335 .256 .328 BASELINE (supervised, 1k) . 1k) . 1k) .280 .327 .282 .221 .197 .252 .271 .346 .131 .184 .149 .325 .225 .322 (supervised, 5k) . 5k) . 5k) .412 .355 .307 .300 .218 .331 .270 .343 .200 .154 .342 .281 .153 .280 BASELINE (supervised, 1k) . 153 .088 .083 .112 .068 .065 .046 .103 .048 .036 .138 .048 .025 .091 BASELINE (self-learning, 1k) . 1k) .378 .341 .283 .279 .174 .308 .233 .323 .177 .098 .321 .260 .078 .249 -BG -CA -EO -ET -EU -FI -HE -HU -ID -KA -LT -NO -TH -TR BASELINE (supervised, 5k) . 5k) .324 .330 .217 .247 .153 .310 .281 .367 .264 .180 .239 .313 .199 .301 BASELINE (supervised, 1k) . 1k) . 1k) .268 .274 .134 .193 .106 .271 .239 .348 .236 .117 .152 .264 .102 .284 (supervised, 5k) . 5k) . 5k) .470 .408 .406 .400 .233 .394 .338 .426 .220 .300 .160 .372 .205 .326 BASELINE (supervised, 1k) . 1k) . 1k) .438 .387 .388 .382 .191 .390 .306 .412 .195 .282 .117 .355 .109 .305 (supervised, 5k) . 5k) . 422 .457 .377 .395 .328 .419 .353 .452 .340 .298 .250 .351 .197 .341 POSTPROC (self-learning, 5k) .441 . 474 .425 .411 .345 .424 .381 .455 .354 .315 .257 .367 .227 .346 BASELINE (supervised, 1k) . 1k) . 411 .444 .374 .371 .300 .412 .336 .443 .339 .268 .228 .315 .140 .332 POSTPROC (self-learning, 1k) . 433 .466 .419 .389 .313 .417 .366 .445 .352 .279 .236 .332 .136 .336 (supervised, 5k) . 5k) . 5k) .378 .405 .291 .328 .252 .338 .361 .413 .395 .261 .226 .298 .369 .210 BASELINE (supervised, 1k) . 150 .133 .052 .112 .062 .093 .076 .167 .131 .053 .050 .099 .073 .028 BASELINE (self-learning, 1k) . 1k) .361 .394 .259 .289 .217 .326 .336 .411 .390 .245 .200 .234 .368 .142 Table 18: All BLI scores (MRR) with Turkish (TR) as the source language.",
"cite_spans": [
{
"start": 2,
"end": 5,
"text": "5k)",
"ref_id": null
},
{
"start": 8,
"end": 103,
"text": "5k) .466 .495 .453 .457 .310 .418 .405 .403 .353 .293 .436 .457 .194 .387 BASELINE (supervised,",
"ref_id": null
},
{
"start": 104,
"end": 107,
"text": "1k)",
"ref_id": null
},
{
"start": 110,
"end": 113,
"text": "1k)",
"ref_id": null
},
{
"start": 116,
"end": 189,
"text": "1k) .458 .484 .431 .443 .276 .410 .385 .406 .331 .270 .401 .447 .126 .377",
"ref_id": null
},
{
"start": 190,
"end": 206,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 209,
"end": 212,
"text": "5k)",
"ref_id": null
},
{
"start": 215,
"end": 310,
"text": "5k) .307 .333 .303 .273 .225 .270 .298 .360 .205 .203 .242 .335 .256 .328 BASELINE (supervised,",
"ref_id": null
},
{
"start": 311,
"end": 314,
"text": "1k)",
"ref_id": null
},
{
"start": 317,
"end": 320,
"text": "1k)",
"ref_id": null
},
{
"start": 323,
"end": 396,
"text": "1k) .280 .327 .282 .221 .197 .252 .271 .346 .131 .184 .149 .325 .225 .322",
"ref_id": null
},
{
"start": 397,
"end": 413,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 416,
"end": 419,
"text": "5k)",
"ref_id": null
},
{
"start": 422,
"end": 517,
"text": "5k) .412 .355 .307 .300 .218 .331 .270 .343 .200 .154 .342 .281 .153 .280 BASELINE (supervised,",
"ref_id": null
},
{
"start": 518,
"end": 521,
"text": "1k)",
"ref_id": null
},
{
"start": 524,
"end": 587,
"text": "153 .088 .083 .112 .068 .065 .046 .103 .048 .036 .138 .048 .025",
"ref_id": null
},
{
"start": 602,
"end": 621,
"text": "(self-learning, 1k)",
"ref_id": null
},
{
"start": 624,
"end": 697,
"text": "1k) .378 .341 .283 .279 .174 .308 .233 .323 .177 .098 .321 .260 .078 .249",
"ref_id": null
},
{
"start": 776,
"end": 788,
"text": "(supervised,",
"ref_id": null
},
{
"start": 789,
"end": 792,
"text": "5k)",
"ref_id": null
},
{
"start": 795,
"end": 890,
"text": "5k) .324 .330 .217 .247 .153 .310 .281 .367 .264 .180 .239 .313 .199 .301 BASELINE (supervised,",
"ref_id": null
},
{
"start": 891,
"end": 894,
"text": "1k)",
"ref_id": null
},
{
"start": 897,
"end": 900,
"text": "1k)",
"ref_id": null
},
{
"start": 903,
"end": 976,
"text": "1k) .268 .274 .134 .193 .106 .271 .239 .348 .236 .117 .152 .264 .102 .284",
"ref_id": null
},
{
"start": 977,
"end": 993,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 996,
"end": 999,
"text": "5k)",
"ref_id": null
},
{
"start": 1002,
"end": 1097,
"text": "5k) .470 .408 .406 .400 .233 .394 .338 .426 .220 .300 .160 .372 .205 .326 BASELINE (supervised,",
"ref_id": null
},
{
"start": 1098,
"end": 1101,
"text": "1k)",
"ref_id": null
},
{
"start": 1104,
"end": 1107,
"text": "1k)",
"ref_id": null
},
{
"start": 1110,
"end": 1183,
"text": "1k) .438 .387 .388 .382 .191 .390 .306 .412 .195 .282 .117 .355 .109 .305",
"ref_id": null
},
{
"start": 1184,
"end": 1200,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 1203,
"end": 1206,
"text": "5k)",
"ref_id": null
},
{
"start": 1209,
"end": 1272,
"text": "422 .457 .377 .395 .328 .419 .353 .452 .340 .298 .250 .351 .197",
"ref_id": null
},
{
"start": 1314,
"end": 1399,
"text": "474 .425 .411 .345 .424 .381 .455 .354 .315 .257 .367 .227 .346 BASELINE (supervised,",
"ref_id": null
},
{
"start": 1400,
"end": 1403,
"text": "1k)",
"ref_id": null
},
{
"start": 1406,
"end": 1409,
"text": "1k)",
"ref_id": null
},
{
"start": 1412,
"end": 1475,
"text": "411 .444 .374 .371 .300 .412 .336 .443 .339 .268 .228 .315 .140",
"ref_id": null
},
{
"start": 1512,
"end": 1580,
"text": "433 .466 .419 .389 .313 .417 .366 .445 .352 .279 .236 .332 .136 .336",
"ref_id": null
},
{
"start": 1581,
"end": 1597,
"text": "(supervised, 5k)",
"ref_id": null
},
{
"start": 1600,
"end": 1603,
"text": "5k)",
"ref_id": null
},
{
"start": 1606,
"end": 1701,
"text": "5k) .378 .405 .291 .328 .252 .338 .361 .413 .395 .261 .226 .298 .369 .210 BASELINE (supervised,",
"ref_id": null
},
{
"start": 1702,
"end": 1705,
"text": "1k)",
"ref_id": null
},
{
"start": 1708,
"end": 1771,
"text": "150 .133 .052 .112 .062 .093 .076 .167 .131 .053 .050 .099 .073",
"ref_id": null
},
{
"start": 1786,
"end": 1805,
"text": "(self-learning, 1k)",
"ref_id": null
},
{
"start": 1808,
"end": 1881,
"text": "1k) .361 .394 .259 .289 .217 .326 .336 .411 .390 .245 .200 .234 .368 .142",
"ref_id": null
}
],
"ref_spans": [
{
"start": 698,
"end": 766,
"text": "-BG -CA -EO -ET -EU -FI -HE -HU -ID -KA -LT -NO -TH -TR",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "Recent empirical studies show that, under fair evaluation, (weakly) supervised methods always outperform their unsupervised counterparts. We thus base all our experiments in \u00a74 on the weakly supervised setup; nonetheless, we observe substantial relative gains for the fully unsupervised setup as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With second-order similarity, the similarity of two words is captured in terms of how similar they are to other words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the ERC Consolidator Grant LEXICAL (no 648909) awarded to Anna Korhonen. Goran Glava\u0161 is supported by the Eliteprogramm of the Baden-W\u00fcrttemberg Stiftung (AGREE grant). We thank the reviewers for their insightful suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised hyperalignment for multilingual word embeddings",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Alaux",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Cuturi",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Alaux, Edouard Grave, Marco Cuturi, and Ar- mand Joulin. 2019. Unsupervised hyperalignment for multilingual word embeddings. In Proceedings of ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2289--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of EMNLP, pages 2289-2294.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of ACL, pages 451-462.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. In Proceedings of ACL, pages 789-798.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In Proceedings of ICLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "282--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, I\u00f1igo Lopez-Gazpio, and Eneko Agirre. 2018c. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In Pro- ceedings of CoNLL, pages 282-291.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the ACL",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135-146.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Proceed- ings of ICLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning",
"authors": [
{
"first": "Yerai",
"middle": [],
"last": "Doval",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yerai Doval, Jose Camacho-Collados, Luis Espinosa- Anke, and Steven Schockaert. 2019. On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning. CoRR, abs/1908.07742.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lost in embedding space: Explaining cross-lingual task performance with eigenvalue divergence. CoRR, abs",
"authors": [
{
"first": "Haim",
"middle": [],
"last": "Dubossarsky",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haim Dubossarsky, Ivan Vuli\u0107, Roi Reichart, and Anna Korhonen. 2020. Lost in embedding space: Explain- ing cross-lingual task performance with eigenvalue divergence. CoRR, abs/2001.11136.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "710--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Robert Litschko, Sebastian Ruder, and Ivan Vuli\u0107. 2019. How to (properly) evaluate cross- lingual word embeddings: On strong baselines, com- parative analyses, and some misconceptions. In Pro- ceedings of ACL, pages 710-721.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Non-linear instance-based cross-lingual mapping for nonisomorphic embedding spaces",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161 and Ivan Vuli\u0107. 2020. Non-linear instance-based cross-lingual mapping for non- isomorphic embedding spaces. In Proceedings of ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BilBOWA: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "748--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed repre- sentations without word alignments. In Proceedings of ICML, pages 748-756.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "3483--3487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of LREC, pages 3483-3487.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bilingual lexicon induction by learning to combine word-level and character-level representations",
"authors": [
{
"first": "Geert",
"middle": [],
"last": "Heyman",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "1085--1095",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geert Heyman, Ivan Vuli\u0107, and Marie-Francine Moens. 2017. Bilingual lexicon induction by learning to combine word-level and character-level representa- tions. In Proceedings of EACL, pages 1085-1095.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00237"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Non-adversarial unsupervised word translation",
"authors": [
{
"first": "Yedid",
"middle": [],
"last": "Hoshen",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "469--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of EMNLP, pages 469-478.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2979--2984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of EMNLP, pages 2979-2984.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Panlex: Building a resource for panlingual lexical translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kamholz",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Pool",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"M"
],
"last": "Colowick",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "3145--3150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kamholz, Jonathan Pool, and Susan M. Colow- ick. 2014. Panlex: Building a resource for panlin- gual lexical translation. In Proceedings of LREC, pages 3145-3150.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classification-based self-learning for weakly supervised bilingual lexicon induction",
"authors": [
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mladen Karan, Ivan Vuli\u0107, Anna Korhonen, and Goran Glava\u0161. 2020. Classification-based self-learning for weakly supervised bilingual lexicon induction. In Proceedings of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Inducing crosslingual distributed representations of words",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Binod",
"middle": [],
"last": "Bhattarai",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1459--1474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representa- tions of words. In Proceedings of COLING, pages 1459-1474.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Phrase-based & neural unsupervised machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of EMNLP, pages 5039- 5049.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unsupervised crosslingual information retrieval using monolingual data only",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "1253--1256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Litschko, Goran Glava\u0161, Simone Paolo Ponzetto, and Ivan Vuli\u0107. 2018. Unsupervised cross- lingual information retrieval using monolingual data only. In Proceedings of SIGIR, pages 1253-1256.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation",
"authors": [
{
"first": "Qianchu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "33--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qianchu Liu, Diana McCarthy, Ivan Vuli\u0107, and Anna Korhonen. 2019. Investigating cross-lingual align- ment methods for contextualized embeddings with token-level evaluation. In Proceedings of CoNLL, pages 33-43.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S Corrado, and Jeffrey Dean. 2013b. Distributed Rep- resentations of Words and Phrases and their Com- positionality. In Proceedings of NIPS, pages 3111- 3119.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Lnmap: Departures from isomorphic assumption in bilingual lexicon induction through non-linear mapping in latent space",
"authors": [
{
"first": "Tasnim",
"middle": [],
"last": "Mohiuddin",
"suffix": ""
},
{
"first": "M",
"middle": [
"Saiful"
],
"last": "Bari",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
}
],
"year": 2020,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bari Saiful M. Mohiuddin, Tasnim and Shafiq Joty. 2020. Lnmap: Departures from isomorphic as- sumption in bilingual lexicon induction through non-linear mapping in latent space. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "All-but-the-top: Simple and effective postprocessing for word representations",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In Proceedings of ICLR.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Effective dimensionality reduction for word embeddings",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Raunak",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "235--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Raunak, Vivek Gupta, and Florian Metze. 2019. Effective dimensionality reduction for word embed- dings. In Proceedings of the 4th Workshop on Rep- resentation Learning for NLP, pages 235-243.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A discriminative latent-variable model for bilingual lexicon induction",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "458--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ryan Cotterell, Yova Kementched- jhieva, and Anders S\u00f8gaard. 2018. A discriminative latent-variable model for bilingual lexicon induction. In Proceedings of EMNLP, pages 458-468.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised cross-lingual representation learning",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "31--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Anders S\u00f8gaard, and Ivan Vuli\u0107. 2019a. Unsupervised cross-lingual representation learning. In Proceedings of ACL: Tutorial Abstracts, pages 31-38.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research",
"volume": "65",
"issue": "",
"pages": "569--631",
"other_ids": {
"DOI": [
"10.1613/jair.1.11640"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019b. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65:569-631.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Ori",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1599--1613",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. In Proceedings of NAACL-HLT, pages 1599-1613.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-123.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"H",
"P"
],
"last": "Turban",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Hamblin",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "On the limitations of unsupervised bilingual dictionary induction",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "778--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of ACL, pages 778-788.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "An empirical study on post-processing methods for word embeddings",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Mahta",
"middle": [],
"last": "Mousavi",
"suffix": ""
},
{
"first": "Virginia",
"middle": [
"R"
],
"last": "De Sa",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Tang, Mahta Mousavi, and Virginia R. de Sa. 2019. An empirical study on post-processing methods for word embeddings. CoRR, abs/1905.10971.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Do we really need fully unsupervised cross-lingual embeddings?",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "4406--4417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Goran Glava\u0161, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Proceedings of EMNLP, pages 4406-4417.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "On the role of seed lexicons in learning bilingual word embeddings",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "247--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of ACL, pages 247-257.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Post-processing of word representations via variance normalization and dynamic embedding",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fenxiao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "C.-C. Jay",
"middle": [],
"last": "Kuo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Wang, Fenxiao Chen, Angela Wang, and C.-C. Jay Kuo. 2018. Post-processing of word representations via variance normalization and dynamic embedding. CoRR, abs/1808.06305.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Are girls neko or sh\u014djo? Cross-lingual alignment of non-isomorphic embeddings with iterative normalization",
"authors": [
{
"first": "Mozhi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Keyulu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ken-Ichi",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Jegelka",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "3180--3189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber. 2019. Are girls neko or sh\u014djo? Cross-lingual alignment of non-isomorphic embeddings with iterative normalization. In Proceedings of ACL, pages 3180-3189.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"html": null,
"num": null,
"text": "). 8 All vocabularies are trimmed to the 200K most frequent words.",
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td/><td colspan=\"2\">RCSLS</td><td colspan=\"2\">VecMap</td></tr><tr><td/><td colspan=\"4\">BASELINE POSTPROC BASELINE POSTPROC</td></tr><tr><td>Pair</td><td>(SUP)</td><td>(SUP)</td><td colspan=\"2\">(SUP+SL) (SUP+SL)</td></tr><tr><td>DE-HR</td><td>17.2</td><td>21.2</td><td>40.9</td><td>42.5</td></tr><tr><td>DE-TR</td><td>21.4</td><td>23.6</td><td>38.5</td><td>39.1</td></tr><tr><td>FI-FR</td><td>37.8</td><td>40.3</td><td>47.5</td><td>48.9</td></tr><tr><td>FI-HR</td><td>18.9</td><td>23.5</td><td>38.1</td><td>39.9</td></tr><tr><td>HR-IT</td><td>30.2</td><td>31.4</td><td>47.8</td><td>49.1</td></tr><tr><td>TR-FI</td><td>23.6</td><td>26.1</td><td>37.5</td><td>39.0</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>Catalan: CA-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Bulgarian (BG) as the source language.",
"type_str": "table"
},
"TABREF7": {
"content": "<table/>",
"html": null,
"num": null,
"text": "152 .221 .136 .083 .080 .044 .145 .099 .078 .024 .120 .083 .017 .087 BASELINE (self-learning, 1k) .385 .521 .314 .315 .328 .241 .411 .298 .255 .111 .358 .376 .056 .259 POSTPROC (self-learning, 1k) .404 .535 .318 .317 .316 .235 .404 .316 .271 .092 .368 .389 .061 .251",
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td>Estonian: ET-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Esperanto (EO) as the source language.",
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td>Basque: EU-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Estonian (ET) as the source language.",
"type_str": "table"
},
"TABREF10": {
"content": "<table/>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Basque (EU) as the source language.",
"type_str": "table"
},
"TABREF11": {
"content": "<table><tr><td>Hebrew: HE-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Finnish (FI) as the source language.",
"type_str": "table"
},
"TABREF12": {
"content": "<table><tr><td>Indonesian: ID-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Hungarian (HU) as the source language.",
"type_str": "table"
},
"TABREF13": {
"content": "<table><tr><td>Georgian: KA-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Indonesian (ID) as the source language.",
"type_str": "table"
},
"TABREF14": {
"content": "<table/>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Georgian (KA) as the source language.",
"type_str": "table"
},
"TABREF15": {
"content": "<table><tr><td>Lithuanian: LT-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Korean (KO) as the source language.",
"type_str": "table"
},
"TABREF16": {
"content": "<table><tr><td>Norwegian: NO-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Lithuanian (LT) as the source language.",
"type_str": "table"
},
"TABREF17": {
"content": "<table><tr><td>Thai: TH-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Norwegian (NO) as the source language. (supervised, 5k) .210 .134 .087 .186 .094 .173 .173 .178 .141 .116 .112 .214 .162 .177 BASELINE (self-learning, 5k) .174 .123 .073 .164 .093 .167 .203 .160 .170 .126 .097 .215 .147 .160 POSTPROC (self-learning, 5k) .176 .145 .068 .168 .098 .178 .176 .188 .203 .136 .118 .218 .143 .170 BASELINE (supervised, 1k) .049 .027 .021 .070 .029 .032 .057 .044 .044 .034 .040 .084 .029 .052 BASELINE (self-learning, 1k) .108 .084 .036 .128 .057 .094 .152 .111 .168 .073 .065 .145 .098 .121 POSTPROC (self-learning, 1k) .112 .104 .049 .120 .049 .104 .150 .127 .192 .079 .078 .151 .107 .125",
"type_str": "table"
},
"TABREF18": {
"content": "<table><tr><td>Turkish: TR-</td></tr></table>",
"html": null,
"num": null,
"text": "All BLI scores (MRR) with Thai (TH) as the source language.",
"type_str": "table"
}
}
}
}