|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:40:32.189819Z" |
|
}, |
|
"title": "Joint Training for Learning Cross-lingual Embeddings with Sub-word Information without Parallel Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [ |
|
"Hakimi" |
|
], |
|
"last": "Parizi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of New", |
|
"location": { |
|
"settlement": "Brunswick" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Cook", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of New", |
|
"location": { |
|
"settlement": "Brunswick" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we propose a novel method for learning cross-lingual word embeddings, that incorporates sub-word information during training, and is able to learn high-quality embeddings from modest amounts of monolingual data and a bilingual lexicon. This method could be particularly well-suited to learning cross-lingual embeddings for lowerresource, morphologically-rich languages, enabling knowledge to be transferred from richto lower-resource languages. We evaluate our proposed approach simulating lower-resource languages for bilingual lexicon induction, monolingual word similarity, and document classification. Our results indicate that incorporating sub-word information indeed leads to improvements, and in the case of document classification, performance better than, or on par with, strong benchmark approaches.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we propose a novel method for learning cross-lingual word embeddings, that incorporates sub-word information during training, and is able to learn high-quality embeddings from modest amounts of monolingual data and a bilingual lexicon. This method could be particularly well-suited to learning cross-lingual embeddings for lowerresource, morphologically-rich languages, enabling knowledge to be transferred from richto lower-resource languages. We evaluate our proposed approach simulating lower-resource languages for bilingual lexicon induction, monolingual word similarity, and document classification. Our results indicate that incorporating sub-word information indeed leads to improvements, and in the case of document classification, performance better than, or on par with, strong benchmark approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "State-of-the-art approaches in natural language processing (NLP) typically require a substantial amount of human-annotated data (i.e, for supervised approaches to tasks such as part-of-speech tagging or dependency parsing) or they need a very large amount of unannotated text for training (e.g., methods for learning word embeddings). This poses a particular problem for building NLP systems for low-resource languages. There are thousands of human languages, and creating annotated datasets for all of them would be very expensive. Furthermore, many languages have a relatively small number of speakers, and in many cases large amounts of text are not readily-available for building corpora for these languages. A further related challenge is posed by morphologically-rich This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http: //creativecommons.org/licenses/by/4.0/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "languages, because many word-forms would not be expected to be observed in a training corpus. One way to address these problems is to transfer knowledge from a rich-resource language to a lower-resource language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Word Embeddings are a key feature in approaches for a wide range of NLP tasks, such as part-of-speech tagging (Al-Rfou' et al., 2013) , dependency parsing (Chen and Manning, 2014) , and named entity recognition (Pennington et al., 2014) . If we are able to transfer the knowledge captured in word embeddings for a rich-resource language to another low-resource language, then developing NLP tools could become more feasible for the low-resource language. There has therefore been a wealth of research on cross-lingual word embeddings (e.g., Mikolov et al., 2013b; Vuli\u0107 and Moens, 2016; Lample et al., 2018) , in which embeddings for multiple languages are learned in a shared space, and which can be used to transfer knowledge between languages, such as from a richresource language to a low-resource one (Ruder et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 133, |
|
"text": "(Al-Rfou' et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 179, |
|
"text": "(Chen and Manning, 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 236, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 563, |
|
"text": "Mikolov et al., 2013b;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 586, |
|
"text": "Vuli\u0107 and Moens, 2016;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 607, |
|
"text": "Lample et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 826, |
|
"text": "(Ruder et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite the wide range of research on learning cross-lingual embeddings, there are some limitations of these methods that have not been addressed. In the case of a low-resource language, due to the relatively small size of available corpora, a relatively small number of embeddings would be learned. Moreover, in the case of a morphologically-rich language, many wordforms would not be observed in the corpus on which embeddings are trained. As a result, given a subsequent text to process, many words would be expected to be out-of-vocabulary (OOV) with respect to the embedding model. This is a very important issue, because in the case of OOVs, we do not have an embedding for these words, and models for downstream NLP tasks that use embeddings would therefore lack information for these words. Where the number of OOVs is relatively high, such as for low-resource and morphologically-rich languages, this could lead to particularly poor performance in down-stream tasks. This problem has been addressed in monolingual settings by learning embeddings for sub-word units, and then composing representations for OOVs based on their sub-words (Bojanowski et al., 2017; Zhu et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1144, |
|
"end": 1169, |
|
"text": "(Bojanowski et al., 2017;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1187, |
|
"text": "Zhu et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, with advances in language modelling (Artetxe and Schwenk, 2019) and contextualized language models (Devlin et al., 2019; Conneau and Lample, 2019) , transfer learning has become feasible between languages by using a byte pair encoding (BPE, Sennrich et al., 2016) shared vocabulary, and fine-tuning the models for specific tasks (Wu and Dredze, 2019) . Nevertheless, these models require a substantial amount of training data (Conneau and Lample, 2019) , and in some cases parallel corpora (Artetxe and Schwenk, 2019; Conneau and Lample, 2019) , and are very computationally expensive to train. There is therefore a need for methods that can be trained from a morelimited amount of data and require less computational resources for training, but that nevertheless show comparable performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 73, |
|
"text": "(Artetxe and Schwenk, 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 130, |
|
"text": "(Devlin et al., 2019;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 156, |
|
"text": "Conneau and Lample, 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 273, |
|
"text": "(BPE, Sennrich et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 360, |
|
"text": "(Wu and Dredze, 2019)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 462, |
|
"text": "(Conneau and Lample, 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 527, |
|
"text": "(Artetxe and Schwenk, 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 553, |
|
"text": "Conneau and Lample, 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a model that can learn cross-lingual word embeddings from a modest amount of monolingual data and a bilingual dictionary. We rely on bilingual dictionaries because they are relatively-widely available. For example, Panlex (Baldwin et al., 2010 ) is a translation resource that combines many bilingual dictionaries and provides translations for 5700 languages. Our proposed model is an extension of the method proposed by Duong et al. (2016) . In their work, they only considered word embeddings, and so their method is unable to form representations for OOVs, and therefore is not expected to perform well for low-resource or morphologically-rich target languages. We extend the method of Duong et al. (2016) by incorporating sub-word information in the process of training cross-lingual word embeddings. In this way, we form a shared embedding space that not only contains embeddings for both source and target language words, but that has also been enriched with sub-word embeddings enabling representations to be formed for OOVs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 269, |
|
"text": "(Baldwin et al., 2010", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 466, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 734, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To evaluate our proposed model, we use modest amounts of data for relatively well-resourced languages. We first consider two intrinsic evaluations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) the widely-considered task of bilingual lexi-con induction (BLI), and (2) a monolingual word similarity task to show the effectiveness of our proposed approach when the embeddings are used in a monolingual setting. Our results on these tasks demonstrate that incorporating sub-word information leads to improvements for both cross-lingual and monolingual representations. For extrinsic evaluation, to show the impact of having sub-word knowledge in a down-stream NLP task, we consider cross-lingual document classification. Again our results indicate that incorporating sub-word information leads to improvements, and furthermore we find our proposed model to perform on par with, or better than, strong benchmark approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A variety of methods have been proposed for learning cross-lingual word embeddings. These methods vary with respect to the level of supervision, and the cross-lingual signals used, such as parallel corpora and bilingual dictionaries. Klementiev et al. (2012) propose a method to learn cross-lingual representations by training a language model on the source and target language and optimizing their objective function jointly. This method, however, requires a parallel corpus, which is not available for many languages, especially low-resource ones. More recently, Artetxe and Schwenk (2019) propose a bi-directional LSTM language model that is trained on a very large parallel corpus, containing 223 million parallel sentences, and jointly learns representations for 93 languages. Aside from requiring a parallel corpus, it is also computationally expensive to train. Mikolov et al. (2013b) argue that the geometric arrangement of word embeddings in two different languages is the same. They therefore propose a method to learn a linear transformation to align the vector spaces of two languages by using a seed lexicon of known translation pairs. Xing et al. (2015) show that normalizing all word vectors to be unit length, and applying an orthogonality constraint on the transformation matrix, improves the approach of Mikolov et al. Artetxe et al. (2017) introduce an alignment-based method which relaxes the requirement of having a bilingual seed lexicon. Their approach begins with a very small seed lexicon -as few as 25 pairs -and in a process of self-learning and through several rounds of bootstrapping, increases the size of the bilingual dictionary. Artetxe et al. (2018b) further relax the need for a bilingual dictionary, and propose a fully unsupervised approach. Their method solves the same mapping problem as Artetxe et al. (2017) , but creates the initial seed lexicon in an unsupervised manner by exploiting the similarity distribution of words in the source and target language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 258, |
|
"text": "Klementiev et al. (2012)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 591, |
|
"text": "Artetxe and Schwenk (2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 869, |
|
"end": 891, |
|
"text": "Mikolov et al. (2013b)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1149, |
|
"end": 1167, |
|
"text": "Xing et al. (2015)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1337, |
|
"end": 1358, |
|
"text": "Artetxe et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1827, |
|
"end": 1848, |
|
"text": "Artetxe et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "All of these mapping-based methods require pretrained monolingual word embeddings, the quality of which the final cross-lingual word embeddings are greatly dependent upon. This is problematic in the case that we do not have access to enough training data to learn high quality monolingual embeddings, as would be the case for many low-resource languages. Moreover, it has been shown that fully unsupervised methods do not perform well across all languages, especially in the case of morphologically rich languages, and when the monolingual embeddings do not come from the same domain (S\u00f8gaard et al., 2018; . Furthermore, Ormazabal et al. (2019) show that the isomorphism assumption -i.e., that embeddings for different languages have a similar geometric arrangement, which is key to the success of mappingbased models -does not always hold. They show that methods which jointly learn the embedding space for the source and target language from a parallel corpus are superior to mapping-based methods. However, parallel corpora are a very expensive cross-lingual signal.", |
|
"cite_spans": [ |
|
{ |
|
"start": 584, |
|
"end": 606, |
|
"text": "(S\u00f8gaard et al., 2018;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 645, |
|
"text": "Ormazabal et al. (2019)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In an alternative approach to learning crosslingual word embeddings, a pesudo-bilingual corpus is first constructed using a bilingual dictionary, and embeddings for the source and target language are then learned from this corpus. Gouws and S\u00f8gaard (2015) concatenate and shuffle the source and target language corpora, and then randomly replace words in this corpus using a bilingual dictionary. They then run CBOW on the constructed corpus to learn word embeddings for both the source and target language. Similarly, Duong et al. (2016) also propose a method that replaces words in a pseudo-bilingual corpus with their translation during training. However, they further propose a way to handle polysemy by choosing the best translation for a word by considering its context using the expectation-maximization algorithm. Compared to mapping-based methods, this approach does not require as large of a corpus for training, because for each word, the context in not only the source language, but also the target language, is used. However, these pseudo-bilingual corpus methods are more expensive to train than mapping-based methods, because the embeddings are learned from scratch, in contrast to mappingbased methods which use pre-trained embeddings and only need to learn the mapping function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 255, |
|
"text": "Gouws and S\u00f8gaard (2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 538, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recent approaches to learning cross-lingual embeddings have been trained on concatenated monolingual corpora. Multilingual BERT (mBERT) is a BERT model (Devlin et al., 2019) trained on concatenated Wikipedia corpora for 105 languages. Wu and Dredze (2019) show that since mBERT uses a shared vocabulary for all languages, it can represent embeddings for all languages in a shared space, rather than representing each language in a separate space. This model is therefore able to learn deep contextualized cross-lingual word embeddings without any cross-lingual signal, but is computationally expensive to train. Chaudhary et al. (2018) present a method that uses sub-word information, such as lemmas, morpheme tags, and phoneme n-grams, to transfer knowledge from richresource languages to low-resource ones. They train skip-gram on concatenated monolingual corpora of two related languages and learn representations in a shared space by relying on similar subwords to map related words close to each other in the shared space. They also consider an approach which first trains a model on the rich-resource language and then uses the trained sub-word embeddings to initialize the model for the low-resource language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 173, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 255, |
|
"text": "Wu and Dredze (2019)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 635, |
|
"text": "Chaudhary et al. (2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The approach for learning cross-lingual embeddings proposed in this paper incorporates sub-word information -similar to Chaudhary et al. (2018) and mBert -but in contrast to Chaudhary et al. does not require language-specific tools such as morphological analyzers which might not be available for low-resource languages, and in contrast to mBert is less computationally-expensive to train. The proposed approach is an extension of Duong et al. (2016) that incorporates sub-word information, and requires only modest size monolingual corpora and a bilingual lexicon for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 143, |
|
"text": "Chaudhary et al. (2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 450, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section we first describe the approach of Duong et al. (2016) to learning cross-lingual word embeddings, and then present our proposed model, which is an extension of this approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 69, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Embeddings with Pseudo-bilingual Corpora Duong et al. (2016) introduce an approach to learning cross-lingual word representations that can jointly learn representations for words in two languages -referred to as the source and target language -without requiring a parallel corpus. This method is an extension of CBOW (Mikolov et al., 2013a ) that uses two monolingual corpora and a bilingual dictionary. A prefix is added to each word in each monolingual corpus indicating its language. Then, the monolingual corpora are concatenated and the sentences are shuffled. The CBOW objective function, shown below, is only capable of capturing monolingual similarities:", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 60, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 339, |
|
"text": "(Mikolov et al., 2013a", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "O = i\u2208D (log \u03c3(u T w i h i )+ p j=1 E w j \u223cPn(w) log \u03c3(\u2212u T w j h i ))", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Equation 2 is therefore proposed to adapt it to crosslingual settings:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "O = i\u2208Ds\u222aDt (\u03b1 log \u03c3(u T w i h i )+ (1 \u2212 \u03b1) log \u03c3(u T w i h i )+ p j=1 E w j \u223cPn(w) log \u03c3(\u2212u T w j h i ))", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where h i encodes the context vector,w i is the translation of w i , \u03b1 is a weight parameter, and D s and D t are the source and target language vocabularies, respectively. Duong et al. (2016) also propose an approach to find the best translation for polysemous words using the expectation maximization algorithm and cosine similarity between the context vector -the average of the embeddings for the words in the context -and possible translations. Thus the translation for a word is selected based on its context. Duong et al. (2016) further argue that each of the matrices V and U in word2vec encode different information: V is better for capturing monolingual characteristics, whereas U preserves cross-lingual information. In each update, the context words are pushed closer together in V space, while the target word and its translation become closer in U space and further away from the negative samples. Duong et al. achieve their best results in both monolingual and cross-lingual evaluations by combining V and U during the training phase using a regularization term, \u03b4, in the objective function as shown in Equation 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 192, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 535, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "O = O + \u03b4 w\u2208Vs\u222aVt u w \u2212 v w 2 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the remainder of the paper we refer to this approach as DUONG2016.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Cross-lingual Word", |
|
"sec_num": "3.1" |
|
}, |
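
{

"text": "As a concrete illustration of the pseudo-bilingual corpus construction used by DUONG2016 and by our extension, the following Python sketch tags each token with its language, concatenates the two monolingual corpora, and shuffles the sentences. It is a minimal sketch with illustrative assumptions (the function name, the ':' tag format, and the toy data are hypothetical), not the implementation released by Duong et al. (2016).\n\nimport random\n\ndef build_pseudo_bilingual(src_sents, tgt_sents, src_lang=\"en\", tgt_lang=\"it\", seed=0):\n    # Prefix every token with its language so identical strings from the two\n    # languages remain distinct, then concatenate and shuffle the sentences.\n    tagged = [[src_lang + \":\" + w for w in s] for s in src_sents]\n    tagged += [[tgt_lang + \":\" + w for w in s] for s in tgt_sents]\n    random.Random(seed).shuffle(tagged)\n    return tagged\n\n# Toy usage: CBOW with the cross-lingual objective above would then be run on this corpus.\ncorpus = build_pseudo_bilingual([[\"the\", \"cat\"]], [[\"il\", \"gatto\"]])\nprint(corpus)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Cross-lingual Word Embeddings with Pseudo-bilingual Corpora",

"sec_num": "3.1"

},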
|
{ |
|
"text": "Incorporating sub-word information in training word embeddings enhances the quality of the learned embeddings (Bojanowski et al., 2017) . Moreover, because sub-word embeddings can be used to construct representations for OOVs, approaches that incorporate sub-word embeddings are better-suited for low-resource and morphologicallyrich languages which are expected to have relatively high rates of OOVs. In this paper, we extend DUONG2016 by incorporating sub-word information during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 135, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training Incorporating Sub-word Information", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To incorporate sub-word information, we follow a similar approach to Bojanowski et al. (2017) . Each word in the training corpus is augmented with special beginning and end of word markers. Each word is then represented as a bag of character sequences (i.e., sub-words); in our experiments we consider sequences of length 3-6 characters. We additionally include the entire word itself (with beginning and end of word markers) among the sub-words. The embedding for a word is formed by averaging its sub-word embeddings. This gives the following objective function:", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 93, |
|
"text": "Bojanowski et al. (2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training Incorporating Sub-word Information", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "O = i\u2208Ds\u222aDt (\u03b1 log S(w i , c) + (1 \u2212 \u03b1) log S(w i , c) + p j=1 E w j \u223cPn(w) log \u2212S(w j , c)) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training Incorporating Sub-word Information", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where c is the context. S, shown in Equation 5, measures the similarity between a word and context, taking into account sub-words: where G w is the set of sub-words appearing in w, and z g is the sub-word embedding for g. To calculate v c , we average representations for each word appearing in c, where each word is represented by the average of its sub-word embeddings. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training Incorporating Sub-word Information", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "S(w, c) = 1 |G w| g\u2208Gw z T g v c (5) Language Family # Tokens # Types # Embeddings # Dict. entries Chinese Sino-Tibetan 30M (64%) 0.2M (20%) 86K (43%) 1983K Dutch Germanic 84M (64%) 1.3M (8%) 303K (28%) 406K English Germanic 121M 1.1M 240K - French Romance 135M (80%) 1.1M (9%) 288K (30%) 1068K German Germanic 92M (68%) 1.8M (8%) 411K (25%) 964K Italian Romance 119M (68%) 1.2M (7%) 304K (22%) 560K Japanese Japanese 22M (76%) 0.3M (21%) 107K (47%) 736K Russian Slavic 84M (56%) 1.7M (7%) 445K (68%) 1594K Spanish Romance 130M (75%) 1.1M (7%) 279K (22%) 712K", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training Incorporating Sub-word Information", |
|
"sec_num": "3.2" |
|
}, |
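
{

"text": "To make the sub-word construction concrete, the sketch below (an illustration with hypothetical names, not the authors' released code) extracts character 3-6-grams from a word marked with boundary symbols, adds the marked word itself, and computes the similarity S(w, c) of Equation 5 by averaging the dot products between the word's sub-word vectors and the context vector.\n\nimport numpy as np\n\ndef subwords(word, n_min=3, n_max=6):\n    # Character n-grams of the boundary-marked word, plus the marked word itself.\n    marked = \"<\" + word + \">\"\n    grams = {marked[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(marked) - n + 1)}\n    grams.add(marked)\n    return grams\n\ndef similarity(word, context_vec, z):\n    # S(w, c): mean dot product between the word's known sub-word embeddings (z) and the context vector.\n    grams = [g for g in subwords(word) if g in z]\n    return np.mean([z[g] @ context_vec for g in grams]) if grams else 0.0\n\n# Toy usage with random 50-dimensional sub-word vectors.\nrng = np.random.default_rng(0)\nz = {g: rng.normal(size=50) for g in subwords(\"where\")}\nprint(similarity(\"where\", rng.normal(size=50), z))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Joint Training Incorporating Sub-word Information",

"sec_num": "3.2"

},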
|
{ |
|
"text": "For evaluation, we simulate lower-resource languages using 9 well-resourced languages: Chinese, Dutch, English, French, German, Italian, Japanese, Russian, and Spanish. These languages include those considered by Duong et al. (2016) , as well as those in the MLDoc dataset (Schwenk and Li, 2018, which we use for evaluation in Section 5.3). Following previous work (e.g., Duong et al., 2016; Lample et al., 2018) , we only consider pairs of languages with English as either the source or target language, and one of the remaining 8 languages as the other language. To train word embeddings for each language, we use pre-processed Wikipedia dumps (Al-Rfou' et al., 2013) , which are already tokenized and cleaned. To simulate the case of lower-resource languages, following Duong et al. (2016), we randomly select 5 million sentences for each language from their Wikipedia dump. Table 1 shows the number of tokens and types in each corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 232, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 391, |
|
"text": "Duong et al., 2016;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 412, |
|
"text": "Lample et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 669, |
|
"text": "(Al-Rfou' et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 878, |
|
"end": 885, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Resources", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use a bilingual dictionary as the cross-lingual signal in our proposed approach. Our study builds on the work of Duong et al. (2016) , and so for languages that they consider -Dutch, German, Italian, Japanese, and Spanish -we use the same dictionaries that they did, which were extracted from Panlex. 2 For Chinese, French, and Russian we extract dictionaries from Panlex using a similar approach to Duong et al. Table 1 also shows the size of each dictionary, with English as the source language, and the other language as the target language. 3 The coverage of the dictionary with respect to the number of tokens, types, and embeddings learned is also shown. For example, 68% coverage for Italian tokens means that 68% of tokens in the Italian corpus occur in the bilingual dictionary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 135, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 549, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 423, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Resources", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We present experimental results for two intrinsic evaluations, bilingual lexicon induction and monolingual word similarity, and an extrinsic evaluation on cross-lingual document classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Bilingual lexicon induction (BLI) is a standard task to evaluate the quality of cross-lingual word embeddings (Vuli\u0107 and Moens, 2013; Artetxe et al., 2017; Ruder et al., 2019) . In this task, we try to find the translation of a source language word in the target language by looking at its nearest neighbours. Ideally, a word and its translation would be located close to each other in the shared cross-lingual word embedding space. Here we focus on comparing our proposed method with DUONG2016 and so consider the same four languages as Duong et al. (2016) : English, Dutch, Italian, and Spanish. In all cases, English is the target language and the other languages are treated as the source language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 133, |
|
"text": "(Vuli\u0107 and Moens, 2013;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 155, |
|
"text": "Artetxe et al., 2017;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 175, |
|
"text": "Ruder et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 557, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
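
{

"text": "For clarity, precision@N for BLI can be computed as in the short sketch below. This is a generic illustration of the protocol (the variable names and data layout are illustrative assumptions, not the exact evaluation script): each source word is scored against all target words by cosine similarity, and a prediction counts as correct if a gold translation appears in the top N.\n\nimport numpy as np\n\ndef precision_at_n(src_vecs, tgt_matrix, tgt_words, gold, n=5):\n    # src_vecs: {source word: vector}; tgt_matrix: rows aligned with tgt_words;\n    # gold: {source word: set of acceptable translations}.\n    T = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)  # unit-normalise target vectors\n    hits = 0\n    for w, v in src_vecs.items():\n        sims = T @ (v / np.linalg.norm(v))  # cosine similarity to every target word\n        top = [tgt_words[i] for i in np.argsort(-sims)[:n]]\n        hits += any(t in gold[w] for t in top)\n    return hits / len(src_vecs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Bilingual Lexicon Induction",

"sec_num": "5.1"

},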
|
{ |
|
"text": "Model es-en it-en nl-en @1 @5 @10 @1 @5 @10 @1 @5 @10 DUONG2016 ( Following previous work (e.g. Lample et al., 2018; Jawanpuria et al., 2019) , we consider MUSE test sets for evaluation. Word pairs occurring in both the MUSE test sets and our training dictionaries are removed from the training data before training the embeddings. We report precision@N -for N = 1, 5, and 10 -where the system is scored as correct if the gold-standard target word is amongst the top-N most similar target language words (Ruder et al., 2019) . We use cosine as the similarity measure.", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 96, |
|
"end": 116, |
|
"text": "Lample et al., 2018;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 141, |
|
"text": "Jawanpuria et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 504, |
|
"end": 524, |
|
"text": "(Ruder et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Results are shown in Table 2 . We begin by considering DUONG2016 and our model using the best parameter settings from Duong et al. (2016) , i.e., a learning rate of 0.025, 25 negative samples, a window size (c) of 48, an embedding size (d) of 200, sub-sampling of 1e \u22124 , \u03b1 of 0.5, and \u03b4 set to 0.01. 4 In terms of precision@1, our model outperforms DUONG2016 for each language, but for precision@5 and precision@10, DUONG2016 performs better.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 137, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 302, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "A window size of 48 takes into account a relatively large amount of context for the target word; however, when incorporating sub-words, as for our proposed model, this wide context could also add noise because of the large number of sub-words in the context, and the wide range of contexts in which sub-words occur. We therefore consider a window size of 5, the fastText default, and 20, which balances having a larger window size against introducing too much noise. Results are shown for this setup for both DUONG2016 and our model For both models, a window size of 5 performs relatively poorly. For DUONG2016, the original window size of 48 performs best in terms of preci-sion@1 for Spanish and Italian, but not Dutch. For our model, the intermediate window size of 20 performs best, except for precision@1 for Spanish and Italian. These results suggest that a model including sub-word information might not be able to use information from a very wide context as effectively as a word-only model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Next we consider increasing the embedding size to 300, which is commonly used for fastText (Bojanowski et al., 2017) . We consider this for the best window size for each model, i.e., 48 for DUONG2016 and 20 for our model. 5 Our model with a window size of 20 and embedding size of 300 outperforms DUONG2016 for all parameter settings considered, for all languages and evaluation measures. The difference between our model in this configuration, and DUONG2016 using its original parameter settings, is significant (p < 4.31e \u22126 ) using a one-sided McNemar's test with continuity correction. This demonstrates that incorporating sub-word knowledge during training of crosslingual word embeddings enhances the quality of the resulting word representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 116, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "These are not state-of-the-art results, where prior work has obtained higher precision. As a point of comparison, we also present results for VecMap (Artetxe et al., 2018a) a supervised mappingbased approach. These results for VecMap are achieved using fastText embeddings trained on full Wikipedia corpora for each language. Our model, on the other hand, is trained on substantially smaller corpora because we focus on approaches that could be applied to lower-resource languages. mBERT and Chaudhary et al. (2018) on language-specific tools, respectively, of these methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 172, |
|
"text": "(Artetxe et al., 2018a)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 515, |
|
"text": "Chaudhary et al. (2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the rest of the paper, \"our model\" refers to the model with an embedding size of 300 and window size of 20. Since changing the window and embedding sizes does not consistently lead to improvements for DUONG2016, and has a negative impact on precision@1, we continue to use the best parameter settings from Duong et al. (2016) for this method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 329, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual Lexicon Induction", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Here we evaluate the quality of cross-lingual word representations in a monolingual setting. We compare cross-lingual embeddings from our proposed model and DUONG2016. We further consider monolingual embeddings from fastText, a wellknown method to learn embeddings that uses subword information, as a baseline. We consider several parameter settings for fastText. In particular, we consider the best parameter settings for DUONG2016 (CBOW, c = 48, d = 200) , the best parameter settings for our model (CBOW, c = 20, d = 300), and commonly-used fastText settings (skipgram, c = 5, d = 300, and 5 negative samples). In addition, we consider three corpus sizes to train fastText: 5 million sentences (i.e., the same amount of monolingual text that DUONG2016 and our proposed method are trained on), 10 million sentences (the total amount of text in both languages that DUONG2016 and our proposed method are trained on), and full Wikipedia corpora. For the full Wikipedia corpora we only consider the commonly-used parameter settings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 433, |
|
"end": 439, |
|
"text": "(CBOW,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 447, |
|
"text": "c = 48,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 456, |
|
"text": "d = 200)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monolingual Word Similarity", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Following Duong et al. (2016) , we consider English and German for these experiments. We use three datasets for evaluation: English WordSim353 (WS-en, Finkelstein et al., 2002) , German Word-Sim353 (WS-de, Luong et al., 2015) , and Stanford Rare Words (RW-en Luong et al., 2013) . We use cosine as the similarity score. The number of OOVs in WS-en and WS-de is very low (none for WS-en, and two for WS-de). For these datasets, we therefore report results only for in-vocabulary items. For RW-en, however, roughly 25% of the test pairs include an OOV. For this dataset we therefore also report results considering both in-vocabulary words and OOVs (referred to as \"RW-en+OOV\"). Because DUONG2016 is not capable of forming representations for OOVs, in such cases we assign these test pairs the average cosine similarity score over test pairs that are in-vocabulary. Table 3 shows the results. For each dataset, our proposed model outperforms DUONG2016, and also fastText, in all configurations considered. These results indicate that a cross-lingual signal can be used to not only form a cross-lingual shared space, but also to enhance the quality of monolingual embeddings. Note that DUONG2016 improves over fastText on WS-en and WS-de, but not on RWen (or RW-en+OOV) . This indicates that sub-word information is particularly important for forming representations for low-frequency words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "Duong et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 176, |
|
"text": "Finkelstein et al., 2002)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 225, |
|
"text": "Luong et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 278, |
|
"text": "Luong et al., 2013)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1247, |
|
"end": 1266, |
|
"text": "RWen (or RW-en+OOV)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 864, |
|
"end": 871, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Monolingual Word Similarity", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Here we consider an extrinsic evaluation which uses cross-lingual word embeddings in a downstream task, specifically cross-lingual document classification. This task is motivated by the situation where sufficient labelled training data is not available for a low-resource language. We consider zero-shot classification, i.e., we train a classifier and tune parameters on a rich-resource source language, and then apply the classifier directly to documents in a low-resource target language. Following previous work (e.g., Artetxe and Schwenk, 2019; Wu and Dredze, 2019) , we use the MLDoc dataset (Schwenk and Li, 2018) , which is a subset of the RCV1/RCV2 datasets (Lewis et al., 2004) with balanced classes for training, development, and test sets for the following languages: Chinese, English, French, German, Italian, Japanese, Spanish, and Russian. It has 1000 documents in each of the training and development sets, and 4000 documents in the test set, for each language. Following Artetxe and Schwenk (2019), we use English as the source language, and the other languages as target languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 548, |
|
"text": "Artetxe and Schwenk, 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 569, |
|
"text": "Wu and Dredze, 2019)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 619, |
|
"text": "(Schwenk and Li, 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 686, |
|
"text": "(Lewis et al., 2004)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Classification", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To build corpora to train embeddings, again following previous work (Duong et al., 2016; Klementiev et al., 2012) , we first randomly sample 400k sentences for each of the source and target language from RCV1/RCV2, 6 and then combine these in-domain corpora with larger Wikipedia corpora. We use the Wikipedia corpora described in Section 4.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 88, |
|
"text": "(Duong et al., 2016;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 113, |
|
"text": "Klementiev et al., 2012)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 216, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Classification", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We represent documents as the average of their words' embeddings, where the embeddings are learned by our proposed approach from the corpora described above. We then use a feed-forward classifier (LASER, Artetxe and Schwenk, 2019) , which has been previously applied to cross-lingual document classification, with one hidden layer of 10 hidden units, a learning rate of 0.001, dropout 6 We sampled 80k documents for both the source and target languages, and then sampled 400k sentences. For Spanish, Italian, Russian and Chinese we use all of their RCV2 documents because the total number of documents available for these languages is less than 80k. set to 0.2, and a batch-size of 12, as suggested by Artetxe and Schwenk. We compare our approach against several benchmarks. First we consider the same approach described above, but using embeddings from DUONG2016 instead of our proposed approach. In this case, embeddings for OOVs are not available, and so OOVs are simply ignored in forming document representations. We further consider two strong benchmark approaches -LASER (Artetxe and Schwenk, 2019) and mBERT (Devlin et al., 2019) -that are widely used for comparison (e.g., Wu and Dredze, 2019; Patidar et al., 2019; Keung et al., 2019) . Artetxe and Schwenk recently improved their model, and reported updated results. 7 We use these improved results for comparison. We use mBERT results reported by Wu and Dredze (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 230, |
|
"text": "Artetxe and Schwenk, 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 386, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 722, |
|
"text": "Artetxe and Schwenk.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1078, |
|
"end": 1105, |
|
"text": "(Artetxe and Schwenk, 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1116, |
|
"end": 1137, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1182, |
|
"end": 1202, |
|
"text": "Wu and Dredze, 2019;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1203, |
|
"end": 1224, |
|
"text": "Patidar et al., 2019;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1225, |
|
"end": 1244, |
|
"text": "Keung et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1328, |
|
"end": 1329, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1409, |
|
"end": 1429, |
|
"text": "Wu and Dredze (2019)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Classification", |
|
"sec_num": "5.3" |
|
}, |
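
{

"text": "The zero-shot pipeline described above can be sketched as follows. This is a schematic with illustrative assumptions: documents are represented by averaged word vectors (our model would compose vectors for OOVs from their sub-words, while for DUONG2016 OOVs are skipped), and scikit-learn's MLPClassifier stands in for the feed-forward classifier (it exposes the hidden-layer size, learning rate, and batch size given above, but not dropout).\n\nimport numpy as np\nfrom sklearn.neural_network import MLPClassifier\n\ndef doc_vector(tokens, emb, dim=300):\n    # Average the cross-lingual embeddings of a document's words; unknown tokens are skipped here.\n    vecs = [emb[t] for t in tokens if t in emb]\n    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)\n\ndef zero_shot_classify(train_docs, train_labels, test_docs, emb):\n    # Train on English documents and apply the classifier directly to target-language\n    # documents, relying only on the shared cross-lingual embedding space emb.\n    X_train = np.stack([doc_vector(d, emb) for d in train_docs])\n    X_test = np.stack([doc_vector(d, emb) for d in test_docs])\n    clf = MLPClassifier(hidden_layer_sizes=(10,), learning_rate_init=0.001, batch_size=12, max_iter=200)\n    clf.fit(X_train, train_labels)\n    return clf.predict(X_test)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Document Classification",

"sec_num": "5.3"

},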
|
{ |
|
"text": "Results are shown in Table 4 . None of the approaches considered performs best for all languages. However, in terms of the average accuracy over all target languages, our proposed model performs better than DUONG2016, LASER and mBERT. It is worth noting that our model is trained on only 5.4 million sentences in each language, and does not require a parallel corpus. Artetxe and Schwenk (2019) , the next best method in terms of average accuracy, on the other hand, is trained on 225 million parallel sentences. Furthermore, our model outperforms mBERT -a very large language model-based approach -on average, and for all target languages except Chinese and Russian. The current state-of-the-art for MLDOC is XLM f t UDA (Lai et al., 2019) . This model is pre-trained for 15 languages, but not Italian and Japanese, and so results are not available for these languages. XLM f t UDA does however substantially out-perform our proposed model on the other languages, but also requires a large parallel corpus for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 394, |
|
"text": "Artetxe and Schwenk (2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 722, |
|
"end": 740, |
|
"text": "(Lai et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Document Classification", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In this paper we proposed an approach to learning cross-lingual word embeddings that incorporates sub-word information during training, and relies on only monolingual corpora and a bilingual dictionary. This approach could be particularly wellsuited to lower-resource, morphologically-rich languages, for which large parallel corpora are not available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We evaluated our proposed approach, on a variety of simulated lower-resource languages, for the tasks of BLI, monolingual word similarity, and document classification. Our results on BLI and monolingual word similarity indicated that incorporating sub-word information during training enhances the quality of the resulting cross-lingual, as well as monolingual, representations. For zero-shot crosslingual document classification, incorporating subword information again led to improvements, and our proposed model outperformed benchmark approaches that have substantially higher resource requirements for training. Code and data to reproduce these results has been made available. 8 In future work, we plan to evaluate our proposed approach on truly lower-resource languages to determine the impact of smaller training corpora and bilingual dictionaries on the performance of crosslingual word embeddings. It would also be interesting to consider the morphological richness of languages in this analysis. We further intend to investigate using alternative approaches to forming sub-word representations, such as byte-pair encoding, as well as incorporating positional embeddings into our model (e.g., , to determine their impact on the quality of the resulting cross-lingual embeddings. Finally we plan to evaluate our proposed approach on further extrinsic tasks, such as POS tagging and named entity recognition, focusing on lower-resource languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 682, |
|
"end": 683, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This differs from fastText which sums the sub-word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/longdt219/ XlingualEmb 3 The dictionary size for English is therefore not shown.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The differences between the results for DUONG2016 here and the numbers reported inDuong et al. (2016) are due to differences in the test set. We use the MUSE test set, which was not available in 2016, but is more widely used now.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also considered a window size of 20 and embedding size of 300 for DUONG2016, but this did not give improvements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/facebookresearch/ LASER/tree/master/tasks/mldoc", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work is financially supported by the Natural Sciences and Engineering Research Council of Canada, the New Brunswick Innovation Foundation, and the University of New Brunswick. This research was enabled in part by support provided by 8 https://github.com/Cons13411/XLing_ Subword ACENET (https://www.ace-net.ca/) and Compute Canada (www.computecanada.ca).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Polyglot: Distributed word representations for multilingual NLP", |
|
"authors": [ |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Al-Rfou", |
|
"suffix": "" |
|
}, |
|
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Perozzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Skiena", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rami Al-Rfou', Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seven- teenth Conference on Computational Natural Lan- guage Learning, pages 183-192, Sofia, Bulgaria. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning bilingual word embeddings with (almost) no bilingual data", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "451--462", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1042" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5012--5019", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intel- ligence, pages 5012-5019.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "789--798", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1073" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 789-798, Melbourne, Australia. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "597--610", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00288" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "PanLex and LEXTRACT: Translating all words of all languages of the world", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Pool", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Colowick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Coling 2010: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "37--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin, Jonathan Pool, and Susan Colow- ick. 2010. PanLex and LEXTRACT: Translating all words of all languages of the world. In Coling 2010: Demonstrations, pages 37-40, Beijing, China. Col- ing 2010 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00051" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Adapting word embeddings to new languages with morphological and phonological subword representations", |
|
"authors": [ |
|
{ |
|
"first": "Aditi", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunting", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lori", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mortensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3285--3295", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1366" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R. Mortensen, and Jaime Carbonell. 2018. Adapting word embeddings to new languages with morphological and phonological subword rep- resentations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3285-3295, Brussels, Belgium. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A fast and accurate dependency parser using neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--750", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Crosslingual language model pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7057--7067", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057-7067.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Learning crosslingual word embeddings without bilingual corpora", |
|
"authors": [ |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Duong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Kanayama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tengfei", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1285--1295", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1136" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1285- 1295, Austin, Texas. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Placing search in context: The concept revisited", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Finkelstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Matias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Rivlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zach", |
|
"middle": [], |
|
"last": "Solan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gadi", |
|
"middle": [], |
|
"last": "Wolfman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eytan", |
|
"middle": [], |
|
"last": "Ruppin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "20", |
|
"issue": "1", |
|
"pages": "116--131", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/503104.503110" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Simple task-specific bilingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1386--1390", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1157" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Gouws and Anders S\u00f8gaard. 2015. Simple task-specific bilingual word embeddings. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1386-1390, Denver, Colorado. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning word vectors for 157 languages", |
|
"authors": [ |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prakhar", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Learning multilingual word embeddings in latent metric space: A geometric approach", |
|
"authors": [ |
|
{ |
|
"first": "Pratik", |
|
"middle": [], |
|
"last": "Jawanpuria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Balgovind", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Kunchukuttan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bamdev", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "107--120", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00257" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learn- ing multilingual word embeddings in latent metric space: A geometric approach. Transactions of the Association for Computational Linguistics, 7:107-120.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2979--2984", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1330" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 2979-2984, Brussels, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER", |
|
"authors": [ |
|
{ |
|
"first": "Phillip", |
|
"middle": [], |
|
"last": "Keung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichao", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vikas", |
|
"middle": [], |
|
"last": "Bhardwaj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1355--1360", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1138" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2019. Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1355- 1360, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Inducing crosslingual distributed representations of words", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Binod", |
|
"middle": [], |
|
"last": "Bhattarai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The COL-ING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1459--1474", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhat- tarai. 2012. Inducing crosslingual distributed rep- resentations of words. In Proceedings of COLING 2012, pages 1459-1474, Mumbai, India. The COL- ING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Bridging the domain gap in cross-lingual document classification", |
|
"authors": [ |
|
{ |
|
"first": "Guokun", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barlas", |
|
"middle": [], |
|
"last": "Oguz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.07009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guokun Lai, Barlas Oguz, and Veselin Stoyanov. 2019. Bridging the domain gap in cross-lingual document classification. arXiv preprint arXiv:1909.07009.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "6th International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In 6th Inter- national Conference on Learning Representations, ICLR 2018.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Rcv1: A new benchmark collection for text categorization research", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tony", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of machine learning research", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "361--397", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361-397.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Bilingual word representations with monolingual quality in mind", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--159", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W15-1521" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159, Denver, Col- orado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Better word representations with recursive neural networks for morphology", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Richard Socher, and Christopher Man- ning. 2013. Better word representations with re- cursive neural networks for morphology. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 104-113, Sofia, Bulgaria. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "1st International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In 1st International Con- ference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Exploiting similarities among languages for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Analyzing the limitations of cross-lingual word embedding mappings", |
|
"authors": [ |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Ormazabal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4990--4995", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1492" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the lim- itations of cross-lingual word embedding mappings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4990-4995, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "From monolingual to multilingual FAQ assistant using multilingual cotraining", |
|
"authors": [ |
|
{ |
|
"first": "Mayur", |
|
"middle": [], |
|
"last": "Patidar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Surabhi", |
|
"middle": [], |
|
"last": "Kumari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manasi", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shirish", |
|
"middle": [], |
|
"last": "Karande", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Puneet", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--123", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-6113" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig, and Gautam Shroff. 2019. From monolingual to multilingual FAQ assistant using multilingual co- training. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 115-123, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A survey of cross-lingual word embedding models", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuliundefined", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "65", |
|
"issue": "1", |
|
"pages": "569--630", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1613/jair.1.11640" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder, Ivan Vuliundefined, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word em- bedding models. Journal of Artificial Intelligence Research, 65(1):569-630.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A corpus for multilingual document classification in eight languages", |
|
"authors": [ |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Holger Schwenk and Xian Li. 2018. A corpus for mul- tilingual document classification in eight languages. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "On the limitations of unsupervised bilingual dictionary induction", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "778--788", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1072" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778- 788, Melbourne, Australia. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Do we really need fully unsupervised cross-lingual embeddings?", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4407--4418", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1449" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Vuli\u0107, Goran Glava\u0161, Roi Reichart, and Anna Ko- rhonen. 2019. Do we really need fully unsuper- vised cross-lingual embeddings? In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4407-4418, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Crosslingual semantic similarity of words as the similarity of their semantic word responses", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Francine", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "106--116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2013. Cross- lingual semantic similarity of words as the similarity of their semantic word responses. In Proceedings of the 2013 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 106-116, At- lanta, Georgia. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Bilingual distributed word representations from documentaligned comparable data", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Francine", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "55", |
|
"issue": "", |
|
"pages": "953--994", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2016. Bilingual distributed word representations from document- aligned comparable data. Journal of Artificial Intel- ligence Research, 55:953-994.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "833--844", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1077" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Normalized word embedding and orthogonal transform for bilingual word translation", |
|
"authors": [ |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiye", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1006--1011", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1104" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011, Denver, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "A systematic study of leveraging subword information for learning word representations", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "912--932", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1097" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Zhu, Ivan Vuli\u0107, and Anna Korhonen. 2019. A sys- tematic study of leveraging subword information for learning word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 912-932, Minneapolis, Min- nesota. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "The size of the corpus for each language, in terms of the number of tokens and types. The language family, number of embeddings learned from each corpus, and number of entries in the bilingual dictionary, is also shown for each language. The parenthetical numbers indicate coverage in the dictionary.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": "c = 48, d = 200) 54.59 83.12 86.87 45.98 77.11 81.79 40.73 71.72 77.06 DUONG2016 (c = 5, d = 200) 28.20 70.26 76.36 21.08 60.78 67.47 24.36 55.07 62.65 DUONG2016 (c = 20, d = 200) 50.50 82.92 87.07 41.83 77.11 81.53 41.41 72.19 77.88 DUONG2016 (c = 48, d = 300) 50.90 83.86 87.54 44.24 77.44 82.33 38.16 71.31 77.67 Our Model (c = 48, d = 200) 60.15 79.84 84.26 54.62 73.83 78.92 42.25 67.39 72.80 Our Model (c = 5, d = 200) 41.39 78.63 85.06 36.21 72.42 79.45 36.54 69.15 76.25 Our Model (c = 20, d = 200) 59.14 83.12 87.27 54.02 77.64 82.00 47.56 73.00 78.69 Our Model (c = 20, d = 300) 60.21 84.53 89.28 55.15 80.12 84.94 46.21 74.83 80.11 VecMap 81.27 91.07 93.27 76.13 86.87 89.47 71.53 83.93 86.53", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "Precision@N for bilingual lexicon induction. The best performance, for each dataset and evaluation measure, is shown in boldface.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "Spearman's correlation for monolingual similarity on each dataset, for each method considered. The best performance on each dataset is shown in boldface.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"text": "Accuracy on the MLDoc zero-shot cross-lingual document classification task, for each model and target language, with English as the source language. The average accuracy over all target languages is also shown.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |