{
"paper_id": "D18-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:46:44.448651Z"
},
"title": "Non-Adversarial Unsupervised Word Translation",
"authors": [
{
"first": "Yedid",
"middle": [],
"last": "Hoshen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tel Aviv University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Unsupervised word translation from nonparallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Extensive experiments on word translation of European and Non-European languages show that our method achieves better performance than recent state-of-the-art deep adversarial approaches and is competitive with the supervised baseline. It is also efficient, easy to parallelize on CPU and interpretable.",
"pdf_parse": {
"paper_id": "D18-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "Unsupervised word translation from nonparallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Extensive experiments on word translation of European and Non-European languages show that our method achieves better performance than recent state-of-the-art deep adversarial approaches and is competitive with the supervised baseline. It is also efficient, easy to parallelize on CPU and interpretable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Inferring word translations between languages is a long-standing research task. Earliest efforts concentrated on finding parallel corpora in a pair of languages and inferring a dictionary by force alignment of words between the two languages. An early example of this approach is the translation achieved using the Rosetta stone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, if most languages share the same expressive power and are used to describe similar human experiences across cultures, they should share similar statistical properties. Exploiting statistical properties of letters has been successfully employed by substitution crypto-analysis since at least the 9th century. It seems likely that one can learn to map between languages statistically, by considering the word distributions. As one specific example, it is likely that the set of elements described by the most common words in one language would greatly overlap with those described in a second language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another support for the plausibility of unsupervised word translation came with the realization that when words are represented as vectors that encode co-occurrences, the mapping between two languages is well captured by an affine transformation (Mikolov et al., 2013b) . In other words, not only that one can expect the most frequent words to be shared, one can also expect the representations of these words to be similar up to a linear transformation.",
"cite_spans": [
{
"start": 246,
"end": 269,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A major recent trend in unsupervised learning is the use of Generative Adversarial Networks (GANs) presented by Goodfellow et al. (2014) , in which two networks provide mutual training signals to each other: the generator and the discriminator. The discriminator plays an adversarial role to a generative model and is trained to distinguish between two distributions. Typically, these distributions are labeled as \"real\" and \"fake\", where \"fake\" denotes the generated samples.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "Goodfellow et al. (2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the context of unsupervised translation (Conneau et al., 2017; Zhang et al., 2017a,b) , when learning from a source language to a target language, the \"real\" distribution is the distribution of the target language and the \"fake\" one is the mapping of the source distribution using the learned mapping. Such approaches have been shown recently to be very effective when employed on top of modern vector representations of words.",
"cite_spans": [
{
"start": 43,
"end": 65,
"text": "(Conneau et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 66,
"end": 88,
"text": "Zhang et al., 2017a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we ask whether GANs are necessary for achieving the level of success recently demonstrated for unsupervised word translation. Given that the learned mapping is simple and that the concepts described by the two languages are similar, we suggest to directly map every word in one language to the closest word in the other. While one cannot expect that all words would match correctly for a random initialization, some would match and may help refine the affine transformation. Once an improved affine transformation is recovered, the matching process can repeat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Naturally, such an iterative approach relies on a good initialization. For this purpose we employ two methods. First, an initial mapping is obtained by matching the means and covariances of the two distributions. Second, multiple solutions, which are obtained stochastically, are employed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using multiple stochastic solutions is crucial for languages that are more distant, e.g., more stochastic solutions are required for learning to translate between English and Arabic, in comparison to English and French. Evaluating multiple solutions relies on the ability to automatically identify the true matching without supervision and we present an unsupervised reconstruction-based criterion for determining the best stochastic solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our presented approach is simple, has very few hyper-parameters, and is trivial to parallelize. It is also easily interpretable, since every step of the method has a clear goal and a clear success metric, which can also be evaluated without the ground truth bilingual lexicon. An extensive set of experiments shows that our much simpler and more efficient method is more effective than the state-ofthe-art GAN based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The earlier contributions in the field of word translation without parallel corpora were limited to finding matches between a small set of carefully selected words and translations, and relied on cooccurrence statistics (Rapp, 1995) or on similarity in the variability of the context before and after the word (Fung, 1995) . Finding translations of larger sets of words was made possible in followup work by incorporating a seed set of matching words that is either given explicitly or inferred based on words that appear in both languages or are similar in edit distance due to a shared etymology (Fung and Yee, 1998; Rapp, 1999; Schafer and Yarowsky, 2002; Koehn and Knight, 2002; Haghighi et al., 2008; Irvine and Callison-Burch, 2013; Xia et al., 2016; Artetxe et al., 2017) .",
"cite_spans": [
{
"start": 220,
"end": 232,
"text": "(Rapp, 1995)",
"ref_id": "BIBREF18"
},
{
"start": 310,
"end": 322,
"text": "(Fung, 1995)",
"ref_id": "BIBREF6"
},
{
"start": 598,
"end": 618,
"text": "(Fung and Yee, 1998;",
"ref_id": "BIBREF7"
},
{
"start": 619,
"end": 630,
"text": "Rapp, 1999;",
"ref_id": "BIBREF19"
},
{
"start": 631,
"end": 658,
"text": "Schafer and Yarowsky, 2002;",
"ref_id": "BIBREF20"
},
{
"start": 659,
"end": 682,
"text": "Koehn and Knight, 2002;",
"ref_id": "BIBREF14"
},
{
"start": 683,
"end": 705,
"text": "Haghighi et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 706,
"end": 738,
"text": "Irvine and Callison-Burch, 2013;",
"ref_id": "BIBREF11"
},
{
"start": 739,
"end": 756,
"text": "Xia et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 757,
"end": 778,
"text": "Artetxe et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For example, Koehn and Knight (2002) matched English with German. Multiple heuristics were suggested based on hand crafted rules, including similarity in spelling and word frequency. A weighted linear combination is employed to combine the heuristics and the matching words are identified in a greedy manner. Haghighi et al. (2008) modeled the problem of matching words across independent corpora as a generative model, in which cross-lingual links are represented by latent variables, and employed an iterative EM method.",
"cite_spans": [
{
"start": 13,
"end": 36,
"text": "Koehn and Knight (2002)",
"ref_id": "BIBREF14"
},
{
"start": 309,
"end": 331,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another example that employs iterations was presented by Artetxe et al. (2017) . Similarly to our method, this method relies on word vector embeddings, in their case the word2vec method (Mikolov et al., 2013a) . Unlike our method, their method is initialized using seed matches.",
"cite_spans": [
{
"start": 57,
"end": 78,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF0"
},
{
"start": 186,
"end": 209,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our core method incorporates a circularity term, which is also used in (Xia et al., 2016) for the task of NMT and later on in multiple contributions in the field of image synthesis (Kim et al., 2017; Zhu et al., 2017) . This term is employed when learning bidirectional transformations to encourages samples from either domain to be mapped back to exactly the same sample when translated to the other domain and back. Since our transformations are linear, this is highly related to employing orthogonality as done in (Xing et al., 2015; Smith et al., 2017; Conneau et al., 2017) for the task of weakly or unsupervised word vector space alignment. Conneau et al. (2017) also employ a circularity term, but unlike our use of it as part of the optimization's energy term, there it is used for validating the solution and selecting hyperparameters.",
"cite_spans": [
{
"start": 71,
"end": 89,
"text": "(Xia et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 181,
"end": 199,
"text": "(Kim et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 200,
"end": 217,
"text": "Zhu et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 517,
"end": 536,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 537,
"end": 556,
"text": "Smith et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 557,
"end": 578,
"text": "Conneau et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 647,
"end": 668,
"text": "Conneau et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Very recently, Zhang et al. (2017a,b) ; Conneau et al. (2017) have proposed completely unsupervised solutions. All three solutions are based on GANs. The methods differ in the details of the adversarial training, in the way that model selection is employed to select the best configuration and in the way in which matching is done after the distributions are aligned by the learned transformation.",
"cite_spans": [
{
"start": 15,
"end": 37,
"text": "Zhang et al. (2017a,b)",
"ref_id": null
},
{
"start": 40,
"end": 61,
"text": "Conneau et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Due to the min-max property of GANs, methods which rely on GANs are harder to interpret, since, for example, the discriminator D could focus on a combination of local differences between the distributions. The reliance on a discriminator also means that complex weight dependent metrics are implicitly used, and that these metrics evolve dynamically during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our method does not employ GANs. Alternatives to GANs are also emerging in other do-mains. For example, generative methods were trained by iteratively fitting random (\"noise\") vectors by Bojanowski et al. (2017) ; In the recent image translation work of Chen and Koltun (2017) , distinguishability between distribution of images was measured using activations of pretrained networks, a practice that is referred to as the \"perceptual loss\" (Johnson et al., 2016) .",
"cite_spans": [
{
"start": 187,
"end": 211,
"text": "Bojanowski et al. (2017)",
"ref_id": "BIBREF2"
},
{
"start": 254,
"end": 276,
"text": "Chen and Koltun (2017)",
"ref_id": "BIBREF3"
},
{
"start": 440,
"end": 462,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We present an approach for unsupervised word translation consisting of multiple parts: (i) Transforming the word vectors into a space in which the two languages are more closely aligned, (ii) Mini-Batch Cycle iterative alignment. There is an optional final stage of batch-based finetuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Adversarial Word Translation",
"sec_num": "3"
},
{
"text": "Let us define two languages X and Y, each containing a set of N X and N Y words represented by the feature vectors x 1 ..x N X and y 1 ..y N Y respectively. Our objective is to find the correspondence function f (n) such that for every x n , f (n) yields the index of the Y word that corresponds to the word x n . If a set of possible correspondences is available for a given word, our objective is to predict one member of this set. In this unsupervised setting, no training examples of f (n) are given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Adversarial Word Translation",
"sec_num": "3"
},
{
"text": "Each language consists of a set of words each parameterized by a word vector. A popular example of a word embedding method is FastText (Bojanowski et al., 2016) , which uses the internal word co-occurrence statistics for each language. These word vectors are typically not expected to be aligned between languages and since the alignment method we employ is iterative, a good initialization is key.",
"cite_spans": [
{
"start": 135,
"end": 160,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Alignment with PCA",
"sec_num": "3.1"
},
{
"text": "Let us motivate our approach by a method commonly used in 3D point cloud matching. Let A be a set of 3D points and T A be the same set of points with a rotated coordinate system. Assuming nonisotropic distributions of points, transforming each set of points to its principle axes of variations (using PCA) will align the two point clouds. As noted by Daras et al. (2012) , PCA-based alignment is common in the literature of point cloud matching.",
"cite_spans": [
{
"start": 351,
"end": 370,
"text": "Daras et al. (2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Alignment with PCA",
"sec_num": "3.1"
},
{
"text": "Word distributions are quite different from 3D point clouds: They are much higher dimensional, and it is not obvious a priori that different languages present different \"views\" of the same \"object\" and share exactly the same axes of variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Alignment with PCA",
"sec_num": "3.1"
},
{
"text": "The success of previous results, e.g. (Conneau et al., 2017) , to align word vectors between languages using an orthonormal transformation does give credence to this approach. Our method relies on the assumption that many language pairs share some principle axes of variation. The empirical success of PCA initialization in this work supports this assumption.",
"cite_spans": [
{
"start": 38,
"end": 60,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Alignment with PCA",
"sec_num": "3.1"
},
{
"text": "For each language [X , Y], we first select the N most frequent word vectors. In our implementation, we use N = 5000 and employ FastText vectors of dimension D = 300. We project the word vectors, after centering, to the top p principle components (we use p = 50).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Alignment with PCA",
"sec_num": "3.1"
},
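{
"text": "To make the initialization step concrete, the following minimal NumPy sketch (ours, not the authors' code) centers the word vectors of one language and projects them onto their top p principal axes; the names pca_project, vectors and p are our own, and plain SVD stands in here for the randomized PCA used in our implementation.

import numpy as np

def pca_project(vectors, p=50):
    # Center the word vectors and project them onto their top-p principal axes.
    centered = vectors - vectors.mean(axis=0)
    # The rows of vt are the principal directions of the centered matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:p].T  # shape (N, p)

# Usage sketch:
# x_proj = pca_project(x_vectors[:5000])
# y_proj = pca_project(y_vectors[:5000])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Alignment with PCA",
"sec_num": "3.1"
},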
{
"text": "Although projecting to the top principle axes of variation would align a rotated non-isotropic point cloud, it does not do so in the general case. This is due to languages having different word distributions and components of variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "We therefore attempt to find a transformation T that will align every word x i from language X to a word y f (i) in language Y. The objective is therefore to minimize:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmin T i min f (i) |y i \u2212 T x f (i) |",
"eq_num": "(1)"
}
],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "Eq. 1 is difficult to optimize directly and various techniques have been proposed for its optimization. One popular method used in 3D point cloud alignment is Iterative Closest Point (ICP). ICP solves Eq. 1 iteratively in two steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "1. For each y j , find the nearest T x i . We denote its index by f y (j) = i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "2. Optimize for T in j y j \u2212 T x fy(j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "In this work, we use a modified version of ICP which we call Mini-Batch Cycle ICP (MBC-ICP). MBC-ICP learns transformations T xy for X \u2192 Y and T yx for Y \u2192 X . We include cycle-constraints ensuring that a word x transformed to the Y domain and back is unchanged (and similarly for every Y \u2192 X \u2192 Y transformation). The strength of the cycle constraints is parameterized by \u03bb (we have \u03bb = 0.1). We compute the nearest neighbor matches at the beginning of each epoch, and then optimize transformations T yx and T xy using mini-batch SGD with mini-batch size 128. Minibatch rather than full-batch optimization greatly increases the success of the method. Experimental comparisons can be seen in the results section. Note we only compute the nearest neighbors at the beginning of each epoch, rather than for each mini-batch due to the computational expense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "Every iteration of the final MBC-ICP algorithm therefore becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "1. For each y j , find the nearest T xy x i . We denote its index by f y (j) 2. For each x i , find the nearest T yx y j . We denote its index by f x (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "3. Optimize T xy and T yx using mini-batch SGD for a single epoch of {x i } and {y j } on:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "j y j \u2212 T xy x fy(j) + i x i \u2212 T yx y fx(i) + \u03bb i x i \u2212 T yx T xy x i + \u03bb j y j \u2212 T xy T yx y j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
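{
"text": "The following PyTorch sketch (ours, not the authors' implementation) illustrates a single MBC-ICP epoch as described above; it assumes equally sized vocabularies, uses a squared-error distance as a stand-in for the distance in the objective, and the names mbc_icp_epoch, x, y, T_xy, T_yx, opt, lam and bs are our own.

import torch

def mbc_icp_epoch(x, y, T_xy, T_yx, opt, lam=0.1, bs=128):
    # x, y: (N, d) tensors of projected word vectors; T_xy, T_yx: (d, d) learnable matrices.
    with torch.no_grad():
        # Steps 1-2: nearest-neighbor matching, recomputed once per epoch.
        f_y = torch.cdist(y, x @ T_xy.t()).argmin(dim=1)  # nearest T_xy x_i for each y_j
        f_x = torch.cdist(x, y @ T_yx.t()).argmin(dim=1)  # nearest T_yx y_j for each x_i
    # Step 3: mini-batch SGD on the matching terms plus the cycle constraints.
    for b in torch.randperm(len(x)).split(bs):
        loss = ((y[b] - x[f_y[b]] @ T_xy.t()) ** 2).sum() \
             + ((x[b] - y[f_x[b]] @ T_yx.t()) ** 2).sum() \
             + lam * ((x[b] - x[b] @ T_xy.t() @ T_yx.t()) ** 2).sum() \
             + lam * ((y[b] - y[b] @ T_yx.t() @ T_xy.t()) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Usage sketch (transformations initialized at identity, as in the text):
# T_xy = torch.eye(50, requires_grad=True); T_yx = torch.eye(50, requires_grad=True)
# opt = torch.optim.SGD([T_xy, T_yx], lr=0.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},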
{
"text": "A good initialization is important for ICP-type methods. We therefore begin with the projected data in which the transformation is assumed to be relatively small and initialize transformations T xy and T yx with the identity matrix. We denote this step PCA-MBC-ICP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "Once PCA-MBC-ICP has generated the correspondences functions f x (i) and f y (j), we run a MBC-ICP on the original 300D word vectors (no PCA). We denote this step: RAW-MBC-ICP. We initialize the optimization using f x (i) and f y (j) learned before, and proceed with MBC-ICP. At the end of this stage, we recover transformations T xy andT yx that transform the 300D word vectors from X \u2192 Y and Y \u2192 X respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "Reciprocal pairs: After several iterations of MBC-ICP, the estimated transformations become quite reliable. We can therefore use this transformation to identify the pairs that are likely to be correct matches. We use the reciprocity heuristic: For every word y \u2208 Y we find the nearest transformed word from the set {T xy x|x \u2208 X}. We also find the nearest neighbors for the Y \u2192 X transformation. If a pair of words is matched in both X \u2192 Y and Y \u2192 X directions, the pair is denoted reciprocal. During RAW-MBC-ICP, we use only reciprocal pairs, after the 50th epoch (this parameter is not sensitive).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
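{
"text": "A small sketch (ours) of the reciprocity heuristic, assuming NumPy arrays and the current transformation matrices; the name reciprocal_pairs is hypothetical.

from scipy.spatial.distance import cdist

def reciprocal_pairs(x, y, T_xy, T_yx):
    # Keep a pair (i, j) only if x_i maps to y_j and y_j maps back to x_i,
    # i.e. they are nearest neighbors in both directions under the current transformations.
    nn_xy = cdist(x @ T_xy.T, y).argmin(axis=1)  # best target index for each source word
    nn_yx = cdist(y @ T_yx.T, x).argmin(axis=1)  # best source index for each target word
    return [(i, j) for i, j in enumerate(nn_xy) if nn_yx[j] == i]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},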
{
"text": "In summary: we run PCA-MBC-ICP on the 5k most frequent words after transformation to principle components. The resulting correspondences f x (i) and f y (j) are used to initialize a RAW-MBC-ICP on the original 300D data (rather than PCA), using reciprocal pairs. The output of the method are transformation matrices T xy and T yx .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-Batch Cycle Iterative Closest Point",
"sec_num": "3.2"
},
{
"text": "MBC-ICP is able to achieve very competitive performance without any further finetuning or use of large corpora. GAN-based methods on the other hand require iterative finetuning (Conneau et al., 2017; Hoshen and Wolf, 2018) to achieve competitive performance. To facilitate comparison with such methods, we also add a variant of our method with identical finetuning to (Conneau et al., 2017 ). As we show in the results section, fine-tuning European languages typically results in small improvements in accuracy (1-2%) for our method, in comparison to 10-15% for the previous work. Following (Xing et al., 2015; Conneau et al., 2017) , fine-tuning is performed by running the Procrustes method iteratively on the full vocabulary of 200k words, initialized with the final transformation matrix from MBC-ICP. The Procrustes method uses SVD to find the optimal orthonormal matrix between X and Y given approximate matches. The new transformation is used to finetune the approximate matches. We run 5 iterations of successive transformation and matching estimation steps.",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "(Conneau et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 200,
"end": 222,
"text": "Hoshen and Wolf, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 368,
"end": 389,
"text": "(Conneau et al., 2017",
"ref_id": "BIBREF4"
},
{
"start": 591,
"end": 610,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 611,
"end": 632,
"text": "Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "3.3"
},
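{
"text": "The orthogonal Procrustes step has a closed-form SVD solution; the sketch below is ours and uses the hypothetical names procrustes, X_matched and Y_matched, whose rows are assumed to hold the currently matched source and target word vectors.

import numpy as np

def procrustes(X_matched, Y_matched):
    # Closed-form orthonormal map T minimizing ||Y_matched - X_matched T^T||_F.
    u, _, vt = np.linalg.svd(Y_matched.T @ X_matched)
    return u @ vt

# Fine-tuning sketch: alternate a few times between estimating T with procrustes()
# and re-matching words (e.g. with CSLS) over the full vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "3.3"
},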
{
"text": "Although we optimize the nearest neighbor metric, we found that in accordance with (Conneau et al., 2017) , neighborhood retrieval methods such as Inverted Soft-Max (ISF) (Smith et al., 2017) and Cross-domain Similarity Local Scaling (CSLS) improve final retrieval performance. We therefore evaluate using CSLS. The similarity between a word x \u2208 X and a word y \u2208 Y is computed as 2 cos(T xy x, y) \u2212 r(T xy x) \u2212 r(y), where r(.) is the average cosine similarity between the word and its 10-NN in the other domain.",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Metrics",
"sec_num": "3.4"
},
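{
"text": "A minimal NumPy sketch (ours) of the CSLS score just described, assuming x_mapped holds the rows T_{xy} x and y holds the target word vectors; csls_scores and k are our names.

import numpy as np

def csls_scores(x_mapped, y, k=10):
    # CSLS(x, y) = 2 cos(T_xy x, y) - r(T_xy x) - r(y), where r(.) is the mean cosine
    # similarity of a word to its k nearest neighbors in the other domain.
    x_mapped = x_mapped / np.linalg.norm(x_mapped, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    cos = x_mapped @ y.T                             # pairwise cosine similarities
    r_x = np.sort(cos, axis=1)[:, -k:].mean(axis=1)  # r(T_xy x): k-NN of each mapped source word in Y
    r_y = np.sort(cos, axis=0)[-k:, :].mean(axis=0)  # r(y): k-NN of each target word among the mapped sources
    return 2 * cos - r_x[:, None] - r_y[None, :]

# The translation of source word i is then y[np.argmax(scores[i])] for scores = csls_scores(...).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Metrics",
"sec_num": "3.4"
},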
{
"text": "Our approach utilizes multiple stochastic solutions, to provide a good initialization for the MBC-ICP algorithm. There are two sources of stochasticity in our system: (i) The randomized nature of the PCA algorithm (it uses random matrices (Liberty et al., 2007) ) (ii) The order of the training samples (the mini-batches) when training the transformations. The main issue faced by unsupervised learning in the case of multiple solutions, is either (i) choosing the best solution in case of a fixed parallel run budget, or (ii) finding a good stopping criterion if attempting to minimize the number of runs serially.",
"cite_spans": [
{
"start": 239,
"end": 261,
"text": "(Liberty et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Stochastic Solutions",
"sec_num": "4"
},
{
"text": "We use the reconstruction cost as an unsupervised metrics for measuring convergence of MBC-ICP. Specifically, we measure how closely every x \u2208 X and y \u2208 Y is reconstructed by a transformed word from the other domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Stochastic Solutions",
"sec_num": "4"
},
{
"text": "j y j \u2212 T xy x fy(j) + i x i \u2212 T yx y fx(i) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Stochastic Solutions",
"sec_num": "4"
},
{
"text": "Although for isotropic distributions this has many degenerate solutions, empirically we find that values that are significantly lower than the median almost always correspond to a good solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Stochastic Solutions",
"sec_num": "4"
},
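{
"text": "A sketch (ours) of how the criterion of Eq. 2 can be used to select among stochastic runs; reconstruction_cost and the variable names are our own, and plain Euclidean distance is assumed.

from scipy.spatial.distance import cdist

def reconstruction_cost(x, y, T_xy, T_yx):
    # Eq. 2: how well every word is reconstructed by its nearest transformed
    # neighbor from the other language, summed over both directions.
    cost_y = cdist(y, x @ T_xy.T).min(axis=1).sum()  # sum_j |y_j - T_xy x_{f_y(j)}|
    cost_x = cdist(x, y @ T_yx.T).min(axis=1).sum()  # sum_i |x_i - T_yx y_{f_x(i)}|
    return cost_y + cost_x

# Selection sketch: run many stochastic solutions in parallel and keep the run whose
# cost falls far below the median cost of all runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Stochastic Solutions",
"sec_num": "4"
},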
{
"text": "The optimization profile of MBC-ICP is predictable and easily lends itself for choosing effective termination criteria. The optimization profile of a successfully converged and non-converging runs are presented in Fig. 1(a) . The reconstruction loss clearly distinguish between the converged and non-converging runs. Fig. 1(b,c) presents the distribution of final reconstruction costs for 500 different runs for En-F r and En-Ar.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 223,
"text": "Fig. 1(a)",
"ref_id": "FIGREF0"
},
{
"start": 317,
"end": 328,
"text": "Fig. 1(b,c)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Multiple Stochastic Solutions",
"sec_num": "4"
},
{
"text": "We evaluated our method extensively to ensure that it is indeed able to effectively and efficiently perform unsupervised word translation. As a strong baseline, we used the code and datasets from the MUSE repository by Conneau et al. (2017) 1 . Care was taken in order to make sure that we report these results as fairly as possible: (1) the results from the previous work were copied as is, 1 https://github.com/facebookresearch/MUSE except for En-It, where our runs indicated better baseline results. (2) For languages not reported, we ran the code with multiple options and report the best results obtained. One crucial option for GAN was whether to center the data or not. From communication with the authors we learned that, in nearly all non-European languages, centering the data is crucial. For European languages, not centering gave better results. For Arabic, centering helps in one direction but is very detrimental in the other. In all such cases, we report the best baseline result per direction. (3) For the supervised baseline, we report both the results from the original paper (in Tab. 1) and the results post Procrustes finetuning, which are better (Tab. 2). (4) Esperanto is not available in the MUSE repository at this time. We asked the authors for the data and will update the paper once available. Currently we are able to say (without the supervision data) that our method converges on En-Eo and Eo-En.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The evaluation concentrated on two aspects of the translation: (i) Word Translation Accuracy measured by the fraction of words translated to a correct meaning in the target language, and (ii) Runtime of the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We evaluated our method against the best methods from (Conneau et al., 2017) . The supervised baseline method learns an alignment from 5k supervised matches using the Procrustes method. The mapping is then refined using the Procrustes method and CSLS matching on 200k unsupervised word vectors in the source and target languages. The unsupervised method proposed by (Conneau et al., 2017) , uses generative adversarial domain mapping between the word vectors of the 50k most common words in each language. The mapping is then refined using the same procedure that is used in the supervised baseline.",
"cite_spans": [
{
"start": 54,
"end": 76,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 366,
"end": 388,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "A comparison of the word translation accura-cies before finetuning can be seen in Tab. 1. Our method significantly outperforms the method of (Conneau et al., 2017) on all of the evaluated European language pairs. Additionally, for these languages, our method performs comparably to the supervised baseline on all pairs except En-Ru for which supervision seems particularly useful. The same trends are apparent for simple nearest neighbors and CSLS although CSLS always performs better. For non-European languages, none of the unsupervised methods succeeds on all languages. We found that the GAN baseline fails on Farsi, Hindu, Bengali, Vietnamese and one direction of Japanese and Indonesian while our method does not succeed on Chinese, Japanese and Vietnamese. We conclude that the methods have complementary strengths, our method doing better on more languages. On languages where both methods succeed, MBC-ICP tends to do much better.",
"cite_spans": [
{
"start": 141,
"end": 163,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We present a comparison between the methods after finetuning and using the CSLS metric in Tab. 2. All methods underwent the same finetuning procedure. We can see that our method still outperforms the GAN method and is comparable to the supervised baseline on European languages. Another observation is that on most European language pairs, finetuning only makes a small difference for our method (1-2%). An unaligned vocabulary of 7.5k is sufficient to achieve most of the accuracy. This is in contrast with the GAN, that benefits greatly from finetuning on 200k words. Non-European language and English pairs are typically more challenging, finetuning helps much more for all unsupervised methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "It is interesting to examine the languages on which each method could not converge. They typically fall into geographical and semantic clusters. The GAN method failed on Arabic and Hebrew, Hindu, Farsi and Bengali. Whereas our method failed on Japanese and Chinese. We suspect that different statistical properties favor each method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We also compare the different methods in terms of training time required by the method. We emphasize that our method is trivially parallelizable, simply by splitting the random initializations between workers. The run time of each solution of MBC-ICP is 47 seconds on CPU. The run time of all solutions can therefore be as low as a single run, at linear increase in compute resources. As it runs on CPU, parallelization is not expensive. The average number of runs required for conver- Fig. 2) . We note that our method learns translations for both languages at the same time.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 493,
"text": "Fig. 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The current state-of-the-art baseline by (Conneau et al., 2017) requires around 3000 seconds on the GPU. It is not obvious how to parallelize such a method efficiently. It requires about 30 times longer to train than our method (with parallelization) and is not practical on a multi-CPU platform. The optional refinement step requires about 10 minutes. The performance increase of refinement for our method are typically quite modest and can be be skipped at the cost of 1-2% in accuracy, the GAN however requires finetuning to obtain competitive results. Another obvious advan- tage is that our method does not require a GPU.",
"cite_spans": [
{
"start": 41,
"end": 63,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Implementation: We used 100 iterations for the PCA-MBC-ICP stage on 50 PCA word vectors. This was run in parallel over 500 stochastic solutions. We selected the solution with the smallest unsupervised reconstitution criterion. This solution was used to initialize RAW-MBC-PCA, which we run for 100 iterations on the raw word vectors. The latter 50 iterations of RAW-MBC-ICP were carried out with only reciprocal pairs contributing to the optimization. Results were typically not sensitive to hyper-parameter choice, although longer optimization generally resulted in better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Ablation Analyses There are three important steps for the convergence of the ICP step: (i) PCA, (ii) Dimensionality reduction, (iii) Multiple stochastic solutions. In Tab. 3 we present the ablation results on the En-Es pair with PCA and no dimensionality reduction, with only the top 50 PCs and without PCA at all (best run out of 500 chosen using the unsupervised reconstruction loss). We can observe that the convergence rate without PCA and with PCA but without dimensionality reduction is much lower than with PCA, the best run without PCA has not succeeded in obtaining a good translation. This provides evidence that both PCA and dimensionality reduction are essential for the success of the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We experimented with the different factors of randomness between runs, to understand the causes of diversity allowing for convergence in the more challenging language pairs (such as En-Ar). We performed the following four experiments: i. Fixing PCA and Batch Ordering. ii. Fixing all data batches to have the same ordering in all runs, iii. Fix the PCA bases of all runs, iv. Let both PCA and batch ordering vary between runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Tab. 4 compares the results on En-Es and En-Ar for the experiments described above. It can be seen that using both sources of stochasticity is usually better. Although there is some probability the PCA will result in aligned principle components between the two languages, this usually does not happen and therefore using stochastic PCA is highly beneficial. Fig. 2 we present the statistics for all language pairs with Procrustes-ICP (P-ICP) vs MBC-ICP. In P-ICP, we first calculate the matches for the vocabulary, and then perform a batch estimate of the transformation using the P-ICP method (starting from PCA word",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 365,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Fr De Ru It Ar Figure 2 : Histograms of the Reconstruction metric across 500 ICP runs for MBC-ICP (Red) and P-ICP (Blue). The comparison is shown for En-Es, En-Fr, En-De, En-Ru, En-It, En-Ar. On average MBC-ICP converges to much better minima. We can observe that MBC-ICP has many more converging runs than P-ICP. In fact for En-It and En-Ar, P-ICP does not converge even once in 500 runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Es",
"sec_num": null
},
{
"text": "vectors and T xy initialized at identity). The only source of stochasticity in P-ICP is the PCA where in MBC-ICP the order of mini-batches provides further stochasticity. Adding random noise to the mapping initialization was not found to help. Each plot shows the histogram in log space for the number of runs that achieved unsupervised reconstruction loss within the range of the bin. The converged runs with lower reconstruction values typically form a peak which is quite distinct from the non-converged runs allowing for easy detection of converged runs. The rate of convergence generally correlates with our intuition for distance between languages (En-Ar much lower than En-Fr), although there are exceptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Es",
"sec_num": null
},
{
"text": "MBC-ICP converges much better than P-ICP: For the language pairs with a wide convergence range (En-Es, En-Fr, En-Ru) we can see that MBC-ICP converged on many more runs than P-ICP. For the languages with a narrow convergence range (En-Ar, En-It), P-ICP was not able to converge at all. We therefore conclude that the minibatch update and batch-ordering stochasticity increase the convergence range and is important for effective unsupervised matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Es",
"sec_num": null
},
{
"text": "We have presented an effective technique for unsupervised word-to-word translation. Our method is simple and non-adversarial. We showed empirically that our method outperforms current stateof-the-art GAN methods in terms of pre and post finetuning word translation accuracy. Our method runs on CPU and is much faster than current methods when using parallelization. This will enable researchers from labs that do not possess graphical computing resources to participate in this exciting field. The proposed method is interpretable, i.e. every stage has an intuitive loss function with an easy to understand objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "It is interesting to consider the relative performance between language pairs. Typically more related languages yielded better performance than more distant languages (but note that Indonesian performed better than Russian when translated to English). Even more interesting is contrasting the better performance of our method on West and South Asian languages, and GAN's better performance on Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Overall, our work highlights the potential benefits of considering alternatives to adversarial methods in unsupervised learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers). volume 1, pages 451-462.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Optimizing the latent space of generative networks",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.05776"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam. 2017. Optimizing the la- tent space of generative networks. arXiv preprint arXiv:1707.05776 .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Photographic image synthesis with cascaded refinement networks",
"authors": [
{
"first": "Qifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vladlen",
"middle": [],
"last": "Koltun",
"suffix": ""
}
],
"year": 2017,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qifeng Chen and Vladlen Koltun. 2017. Photographic image synthesis with cascaded refinement networks. In ICCV.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Investigating the effects of multiple factors towards more accurate 3-d object retrieval",
"authors": [
{
"first": "Petros",
"middle": [],
"last": "Daras",
"suffix": ""
},
{
"first": "Apostolos",
"middle": [],
"last": "Axenopoulos",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Litos",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Transactions on Multimedia",
"volume": "14",
"issue": "2",
"pages": "374--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petros Daras, Apostolos Axenopoulos, and Georgios Litos. 2012. Investigating the effects of multiple factors towards more accurate 3-d object retrieval. IEEE Transactions on Multimedia 14(2):374-388.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Compiling bilingual lexicon entries from a non-parallel english-chinese corpus",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Third Workshop on Very Large Corpora. Massachusetts Institute of Technology Cambridge",
"volume": "",
"issue": "",
"pages": "173--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung. 1995. Compiling bilingual lexicon en- tries from a non-parallel english-chinese corpus. In Proceedings of the Third Workshop on Very Large Corpora. Massachusetts Institute of Technol- ogy Cambridge, pages 173-183.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An IR approach for translating new words from nonparallel, comparable texts",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Yee",
"middle": [],
"last": "Lo Yuen",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "414--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from nonparallel, compa- rable texts. In Proceedings of the 17th international conference on Computational linguistics-Volume 1. Association for Computational Linguistics, pages 414-420.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Advances in neural information processing systems. pages 2672-2680.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning bilingual lexicons from monolingual corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL- HLT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying analogies across domains",
"authors": [
{
"first": "Yedid",
"middle": [],
"last": "Hoshen",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yedid Hoshen and Lior Wolf. 2018. Identifying analo- gies across domains. In International Conference on Learning Representations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Supervised bilingual lexicon induction with multiple monolingual signals",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Irvine",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "518--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Irvine and Chris Callison-Burch. 2013. Su- pervised bilingual lexicon induction with multiple monolingual signals. In HLT-NAACL. pages 518- 523.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Perceptual losses for real-time style transfer and super-resolution",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Alahi",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "694--711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Johnson, Alexandre Alahi, and Li Fei-Fei. 2016. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Com- puter Vision. Springer, pages 694-711.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to discover cross-domain relations with generative adversarial networks",
"authors": [
{
"first": "Taeksoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Moonsu",
"middle": [],
"last": "Cha",
"suffix": ""
},
{
"first": "Hyunsoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jungkwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jiwon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.05192"
]
},
"num": null,
"urls": [],
"raw_text": "Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. 2017. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192 .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning a translation lexicon from monolingual corpora",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition",
"volume": "9",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 workshop on Unsuper- vised lexical acquisition-Volume 9. Association for Computational Linguistics, pages 9-16.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Randomized algorithms for the low-rank approximation of matrices",
"authors": [
{
"first": "Edo",
"middle": [],
"last": "Liberty",
"suffix": ""
},
{
"first": "Franco",
"middle": [],
"last": "Woolfe",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edo Liberty, Franco Woolfe, Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert. 2007. Ran- domized algorithms for the low-rank approximation of matrices. PNAS .",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168 .",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Identifying word translations in non-parallel texts",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "320--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd an- nual meeting on Association for Computational Lin- guistics. Association for Computational Linguistics, pages 320-322.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic identification of word translations from unrelated english and german corpora",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "519--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics. Association for Compu- tational Linguistics, pages 519-526.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Inducing translation lexicons via diverse similarity measures and bridge languages",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Schafer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2002,
"venue": "proceedings of the 6th conference on Natural language learning",
"volume": "20",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In proceedings of the 6th con- ference on Natural language learning-Volume 20. Association for Computational Linguistics, pages 1- 7.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"H",
"P"
],
"last": "Turban",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Hamblin",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03859"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859 .",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.00179"
]
},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. arXiv preprint arXiv:1611.00179 .",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Normalized word embedding and orthogonal transform for bilingual word translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiye",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "1006--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In HLT-NAACL. pages 1006-1011.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adversarial training for unsupervised bilingual lexicon induction",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1959--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers). vol- ume 1, pages 1959-1970.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Earth mover's distance minimization for unsupervised bilingual lexicon induction",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1934--1945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover's distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 1934-1945.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unpaired image-to-image translation using cycle-consistent adversarial networks",
"authors": [
{
"first": "Jun-Yan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Taesung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Phillip",
"middle": [],
"last": "Isola",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"A"
],
"last": "Efros",
"suffix": ""
}
],
"year": 2017,
"venue": "The IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial net- works. In The IEEE International Conference on Computer Vision (ICCV).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "(a) Evolution of reconstruction loss as a function of epoch number for successful (Blue) and unsuccessful runs (Red). (b) The final reconstruction loss distribution for En-Fr. (c) A similar histogram for En-Ar.",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Pair</td><td colspan=\"2\">Supervised</td><td/><td colspan=\"2\">Unsupervised</td><td/></tr><tr><td/><td colspan=\"2\">Baseline</td><td colspan=\"2\">GAN</td><td colspan=\"2\">Ours</td></tr><tr><td/><td>nn</td><td>csls</td><td>nn</td><td>csls</td><td>nn</td><td>csls</td></tr><tr><td/><td/><td colspan=\"3\">European Languages</td><td/><td/></tr><tr><td colspan=\"7\">En-Es 77.4 81.4 69.8 75.7 75.9 81.1</td></tr><tr><td colspan=\"7\">Es-En 77.3 82.9 71.3 79.7 76.0 82.1</td></tr><tr><td>En-Fr</td><td colspan=\"6\">74.9 81.1 70.4 77.8 74.8 81.5</td></tr><tr><td>Fr-En</td><td colspan=\"6\">76.1 82.4 61.9 71.2 75.0 81.3</td></tr><tr><td colspan=\"7\">En-De 68.4 73.5 63.1 70.1 66.9 73.7</td></tr><tr><td colspan=\"7\">De-En 67.7 72.4 59.6 66.4 67.1 72.7</td></tr><tr><td colspan=\"7\">En-Ru 47.0 51.7 29.1 37.2 36.8 44.4</td></tr><tr><td colspan=\"7\">Ru-En 58.2 63.7 41.5 48.1 48.4 55.6</td></tr><tr><td>En-It</td><td colspan=\"6\">75.7 77.3 54.3 65.1 71.1 77.0</td></tr><tr><td>It-En</td><td colspan=\"6\">73.9 76.9 55.0 64.0 70.4 76.6</td></tr><tr><td/><td colspan=\"4\">Non-European Languages</td><td/><td/></tr><tr><td>En-Fa</td><td colspan=\"2\">25.7 33.1</td><td>*</td><td>*</td><td colspan=\"2\">19.6 29.0</td></tr><tr><td>Fa-En</td><td colspan=\"2\">33.5 38.6</td><td>*</td><td>*</td><td colspan=\"2\">28.3 28.3</td></tr><tr><td colspan=\"3\">En-Hi 23.8 33.3</td><td>*</td><td>*</td><td colspan=\"2\">19.4 30.3</td></tr><tr><td colspan=\"3\">Hi-En 34.6 42.8</td><td>*</td><td>*</td><td colspan=\"2\">30.5 38.9</td></tr><tr><td colspan=\"3\">En-Bn 10.3 15.8</td><td>*</td><td>*</td><td>9.7</td><td>13.5</td></tr><tr><td colspan=\"3\">Bn-En 21.5 24.6</td><td>*</td><td>*</td><td>7.1</td><td>14.5</td></tr><tr><td colspan=\"7\">En-Ar 31.3 36.5 18.9 23.5 26.9 33.3</td></tr><tr><td colspan=\"4\">Ar-En 45.0 49.5 28.6</td><td>31</td><td colspan=\"2\">39.8 45.5</td></tr><tr><td colspan=\"7\">En-He 10.3 15.8 17.9 22.7 31.3 38.9</td></tr><tr><td colspan=\"7\">He-En 21.5 24.6 37.3 39.1 43.4 50.8</td></tr><tr><td colspan=\"5\">En-Zh 40.6 42.7 12.7 16.0</td><td>*</td><td>*</td></tr><tr><td colspan=\"5\">Zh-En 30.2 36.7 18.7 25.1</td><td>*</td><td>*</td></tr><tr><td>En-Ja</td><td>2.4</td><td>1.7</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>Ja-En</td><td>0.0</td><td>0.0</td><td>3.1</td><td>3.6</td><td>*</td><td>*</td></tr><tr><td>En-Vi</td><td colspan=\"2\">25.0 41.3</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>Vi-En</td><td colspan=\"2\">40.6 55.3</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>En-Id</td><td colspan=\"6\">55.3 65.6 18.9 23.5 39.4 57.1</td></tr><tr><td>Id-En</td><td colspan=\"2\">58.3 65.0</td><td>*</td><td>*</td><td colspan=\"2\">37.1 58.1</td></tr><tr><td colspan=\"3\">*Failed to converge</td><td/><td/><td/><td/></tr><tr><td colspan=\"7\">gence depends on the language pair (see below,</td></tr></table>",
"text": "Comparison of word translation accuracy (%) -without finetuning. Bold: best unsupervised method."
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Pair</td><td colspan=\"3\">Supervised Unsupervised</td></tr><tr><td/><td>Baseline</td><td colspan=\"2\">GAN Ours</td></tr><tr><td/><td colspan=\"2\">European Languages</td><td/></tr><tr><td>En-Es</td><td>82.4</td><td>81.7</td><td>82.1</td></tr><tr><td>Es-En</td><td>83.9</td><td>83.3</td><td>84.1</td></tr><tr><td>En-Fr</td><td>82.3</td><td>82.3</td><td>82.3</td></tr><tr><td>Fr-En</td><td>83.2</td><td>82.1</td><td>82.9</td></tr><tr><td>En-De</td><td>75.3</td><td>74.0</td><td>74.7</td></tr><tr><td>De-En</td><td>72.7</td><td>72.2</td><td>73.0</td></tr><tr><td>En-Ru</td><td>50.7</td><td>44.0</td><td>47.5</td></tr><tr><td>Ru-En</td><td>63.5</td><td>59.1</td><td>61.8</td></tr><tr><td>En-It</td><td>78.1</td><td>76.9</td><td>77.9</td></tr><tr><td>It-En</td><td>78.1</td><td>76.7</td><td>77.5</td></tr><tr><td colspan=\"3\">Non-European Languages</td><td/></tr><tr><td>En-Fa</td><td>32.6</td><td>*</td><td>34.6</td></tr><tr><td>Fa-En</td><td>40.2</td><td>*</td><td>41.5</td></tr><tr><td>En-Hi</td><td>34.5</td><td>*</td><td>34.6</td></tr><tr><td>Hi-En</td><td>44.8</td><td>*</td><td>44.5</td></tr><tr><td>En-Bn</td><td>16.6</td><td>*</td><td>14.7</td></tr><tr><td>Bn-En</td><td>24.1</td><td>*</td><td>21.9</td></tr><tr><td>En-Ar</td><td>34.5</td><td>35.3</td><td>35.1</td></tr><tr><td>Ar-En</td><td>49.7</td><td>49.7</td><td>50.6</td></tr><tr><td>En-He</td><td>41.1</td><td>41.6</td><td>40.5</td></tr><tr><td>He-En</td><td>54.9</td><td>52.6</td><td>52.9</td></tr><tr><td>En-Zh</td><td>42.7</td><td>32.5</td><td>*</td></tr><tr><td>Zh-En</td><td>36.7</td><td>31.4</td><td>*</td></tr><tr><td>En-Ja</td><td>1.7</td><td>*</td><td>*</td></tr><tr><td>Ja-En</td><td>0.0</td><td>4.2</td><td>*</td></tr><tr><td>En-Vi</td><td>44.6</td><td>*</td><td>*</td></tr><tr><td>Vi-En</td><td>56.9</td><td>*</td><td>*</td></tr><tr><td>En-Id</td><td>68.0</td><td>67.8</td><td>68.0</td></tr><tr><td>Id-En</td><td>68.0</td><td>66.6</td><td>68.0</td></tr><tr><td>*Failed to converge</td><td/><td/><td/></tr></table>",
"text": "Word translation accuracy (%) -after finetuning and using CSLS. Bold: best unsupervised methods."
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td>En-Es</td><td>Es-En</td></tr><tr><td>No PCA</td><td>0.0%</td><td>0.0%</td></tr><tr><td>With 300 PCs</td><td>0.0%</td><td>0.0%</td></tr><tr><td>With 50 PCs</td><td colspan=\"2\">82.2% 83.8%</td></tr></table>",
"text": "En-Es accuracy with and without PCA"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td colspan=\"2\">En-Es En-Ar</td></tr><tr><td>No randomization</td><td>0.0%</td><td>0.0%</td></tr><tr><td>Randomized Ordering</td><td>0.0%</td><td>0.0%</td></tr><tr><td>Randomized PCA</td><td>9.8%</td><td>0.0%</td></tr><tr><td colspan=\"3\">Randomized Ordering + PCA 16.8% 1.2%</td></tr></table>",
"text": "Fraction of converging runs per stochasticity."
}
}
}
}