{
"paper_id": "W19-0103",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:17:07.851291Z"
},
"title": "Unsupervised Learning of Cross-Lingual Symbol Embeddings Without Parallel Data",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Granroth-Wilding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Helsinki",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Hannu",
"middle": [],
"last": "Toivonen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Helsinki",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new method for unsupervised learning of multilingual symbol (e.g. character) embeddings, without any parallel data or prior knowledge about correspondences between languages. It is able to exploit similarities across languages between the distributions over symbols' contexts of use within their language, even in the absence of any symbols in common to the two languages. In experiments with an artificially corrupted text corpus, we show that the method can retrieve character correspondences obscured by noise. We then present encouraging results of applying the method to real linguistic data, including for low-resourced languages. The learned representations open the possibility of fully unsupervised comparative studies of text or speech corpora in low-resourced languages with no prior knowledge regarding their symbol sets.",
"pdf_parse": {
"paper_id": "W19-0103",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new method for unsupervised learning of multilingual symbol (e.g. character) embeddings, without any parallel data or prior knowledge about correspondences between languages. It is able to exploit similarities across languages between the distributions over symbols' contexts of use within their language, even in the absence of any symbols in common to the two languages. In experiments with an artificially corrupted text corpus, we show that the method can retrieve character correspondences obscured by noise. We then present encouraging results of applying the method to real linguistic data, including for low-resourced languages. The learned representations open the possibility of fully unsupervised comparative studies of text or speech corpora in low-resourced languages with no prior knowledge regarding their symbol sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Linguistic typology aims to map connections and similarities between different languages or dialects along multiple dimensions of comparison. A large proportion of languages spoken today have few speakers and little data annotated with linguistic analyses such as syntactic parses or partof-speech tags. This makes mapping their typology difficult, but doing so could help in developing just such resources, for example by language transfer. There may exist digital text in these languages (e.g. forum posts or newspapers), or field recordings of speech. We attempt to learn about a language's typology purely from its surface form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on languages known to be fairly closely related (e.g. in the same language family), but where knowing more about the precise nature of the typology (e.g. regular sound correspondences in cognate words or differences in morphology) could help with resource development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One example is the Uralic family, which contains many low-resourced languages and dialects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To compare languages' surface forms, we must first address how to compare their basic units, characters in the case of text (List, 2014) . Even closely related languages may use different writing systems, conventions, or transcription practices, as well as having systematic linguistic differences. These considerations mean that, without prior knowledge of a correspondence between two languages, it may not make sense to assume that, say, the letter a in one is directly comparable to a in the other. For example, Swedish \u00e5 typically corresponds Finnish o, and loanwords from Swedish to Finnish replace the former with the latter. Whilst such direct and well known correspondences can easily be written down by someone familiar with the language pair, capturing less clear-cut or systematic correspondences, and doing so for a large number of low-resourced language pairs, is labour intensive.",
"cite_spans": [
{
"start": 124,
"end": 136,
"text": "(List, 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In an extreme case, two corpora may use completely distinct symbol sets, e.g. different scripts. There may be systematic linguistic differences that create a close correspondence between different symbols across languages (List, 2014) , such as the phonological correspondence between Frisian f and Danish v (Fenna et al., 2014) . It may also be desirable to find correspondences between sequences of symbols, e.g. Spanish \u00f1 and Portuguese nh.",
"cite_spans": [
{
"start": 222,
"end": 234,
"text": "(List, 2014)",
"ref_id": "BIBREF19"
},
{
"start": 308,
"end": 328,
"text": "(Fenna et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We tackle this problem using unsupervised learning of vector representations (embeddings) of symbols, learning purely from unannotated, unaligned linguistic corpora. Here, we apply our method to text, learning representations of characters, but it is equally applicable to other sequences, such as phonetic sequences from speech. To be applicable to extreme cases of very little overlap between symbol vocabularies (e.g. different scripts, or types of phonological transcription), it does not assume a correspondence even between common symbols. E.g., if both use a, it treats a in the two languages as distinct symbols (1:a and 2:a). This means that, where such correspondences are found, we know that they are motivated by statistical regularities in their usages, rather than any initial bias. It may learn that 1:a corresponds to 2:a, or to 2:\u00e4, or that it has a weak correspondence to multiple characters. This makes for a challenging learning task, since it becomes impossible to exploit the idea behind typical distributional methods -that similar symbols can be recognized by similarities between their contexts of occurrences -since the contexts across languages consist of symbols from distinct sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a method that is able to discover similarities between inter-lingual symbol pairs by exploiting similarities between their respective intra-lingual distributions over contexts of occurrence. It must recognize that 1:a plays a role in relation to other symbols in language 1 that is similar to, say, 2:\u00e4's role in relation to other symbols in language 2. It does not rely on parallel or comparable corpora, so is robust to use on whatever corpora are available for the languages of interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our learning method, XSYM ( \u00a73). Then we present two sets of experiments. In the first ( \u00a74), we use artificially corrupted linguistic data, allowing us to observe how well the technique recovers known mappings between character pairs obscured by the corruption. In the second ( \u00a75), we demonstrate encouraging initial results of applying the method to real linguistic data, including several low-resourced pairs, which show that it is able to build a coherent space of characters, for example placing the majority of identical characters in two related languages close to each other. This demonstrates its potential to recover correspondences between symbol pairs on the basis of distributional statistics without any other connection between the observed corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code for data preprocessing and model training, as well as trained embeddings, are available online 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Like us, Tsvetkov et al. (2016) employ a language modeling objective with neural networks to learn multilingual embeddings for symbols (phones). They supply typological information to improve the representations. We believe that the present method is better suited to direct cross-lingual comparison of symbols and, since we aim to discover typological information, do not incorporate this in the input. \u00d6stling and Tiedemann (2016) use a character-level, multilingual language model to learn vectors to represent languages. Whilst their model shares information between languages, we focus on modeling commonalities at the level of symbol embeddings. We expect the cross-lingual information our method captures to be complementary to that in the language vectors.",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "Tsvetkov et al. (2016)",
"ref_id": null
},
{
"start": 404,
"end": 432,
"text": "\u00d6stling and Tiedemann (2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "A particular area where symbol alignment is required is cognate discovery -finding words with a common linguistic origin. List (2014) describes uses of string alignment methods, the predominant approach in the literature. He distinguishes paradigmatic aspects (correspondences between basic units, like phones) and syntagmatic aspects (comparisons in terms of sequence structure). Approaches to paradigmatic modeling include: assuming a simple set of correspondences between symbols, e.g. aligning identical symbols (Brew et al., 1996; Kondrak, 2000; Proki\u0107 et al., 2009) ; abstracting or normalizing symbols to comparable classes (Kondrak and Hirst, 2002; Diana Inkpen, 2005; List, 2012) ; and learning scoring functions or mappings to align symbols, often initializing using one of the previous assumptions (Pirkola et al., 2003; Mulloni and Pekar, 2006; Mulloni, 2007; Kondrak, 2009; Delmestri and Cristianini, 2010; Gomes and Lopes, 2011; Ciobanu and Dinu, 2014) . Our approach in these terms is to learn paradigmatic correspondences from purely syntagmatic information. Some methods handle sound (e.g. phone) sequences, others text: ours, like Tsvetkov et al. (2016) , can be applied to either. In contrast to alignment approaches, Hall and Klein (2010) use a Bayesian model of language change to account for differences in phonetic surface forms. McCoy and Frank (2018) use contextbased character embeddings for cognate discovery and propose a method to discover cognates in a low-resourced language via a better-resourced pivot language. Our embeddings could be used with the same cognate alignment technique and evaluation scheme in future work. Our method provides an alternative, potentially more flexible, way to align with a low-resourced language.",
"cite_spans": [
{
"start": 516,
"end": 535,
"text": "(Brew et al., 1996;",
"ref_id": "BIBREF1"
},
{
"start": 536,
"end": 550,
"text": "Kondrak, 2000;",
"ref_id": "BIBREF14"
},
{
"start": 551,
"end": 571,
"text": "Proki\u0107 et al., 2009)",
"ref_id": "BIBREF29"
},
{
"start": 631,
"end": 656,
"text": "(Kondrak and Hirst, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 657,
"end": 676,
"text": "Diana Inkpen, 2005;",
"ref_id": "BIBREF6"
},
{
"start": 677,
"end": 688,
"text": "List, 2012)",
"ref_id": "BIBREF18"
},
{
"start": 809,
"end": 831,
"text": "(Pirkola et al., 2003;",
"ref_id": "BIBREF27"
},
{
"start": 832,
"end": 856,
"text": "Mulloni and Pekar, 2006;",
"ref_id": "BIBREF23"
},
{
"start": 857,
"end": 871,
"text": "Mulloni, 2007;",
"ref_id": "BIBREF22"
},
{
"start": 872,
"end": 886,
"text": "Kondrak, 2009;",
"ref_id": "BIBREF15"
},
{
"start": 887,
"end": 919,
"text": "Delmestri and Cristianini, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 920,
"end": 942,
"text": "Gomes and Lopes, 2011;",
"ref_id": "BIBREF9"
},
{
"start": 943,
"end": 966,
"text": "Ciobanu and Dinu, 2014)",
"ref_id": "BIBREF3"
},
{
"start": 1149,
"end": 1171,
"text": "Tsvetkov et al. (2016)",
"ref_id": null
},
{
"start": 1237,
"end": 1258,
"text": "Hall and Klein (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Most methods depend to some degree on linguistic resources. Many require a list of known cognate pairs (Mulloni and Pekar, 2006; Mulloni, 2007; Delmestri and Cristianini, 2010; Gomes and Lopes, 2011; Ciobanu and Dinu, 2014) , or a manually aligned corpus (Navlea and Todirascu, 2011; List, 2012) , others language-specific knowledge about symbols (Kondrak, 2000) or NLP tools, such as part-of-speech taggers (Brew et al., 1996; Navlea and Todirascu, 2011) . Hall and Klein (2010) require a phylogeny of the input languages. We avoid reliance on any language-specific resources.",
"cite_spans": [
{
"start": 103,
"end": 128,
"text": "(Mulloni and Pekar, 2006;",
"ref_id": "BIBREF23"
},
{
"start": 129,
"end": 143,
"text": "Mulloni, 2007;",
"ref_id": "BIBREF22"
},
{
"start": 144,
"end": 176,
"text": "Delmestri and Cristianini, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 177,
"end": 199,
"text": "Gomes and Lopes, 2011;",
"ref_id": "BIBREF9"
},
{
"start": 200,
"end": 223,
"text": "Ciobanu and Dinu, 2014)",
"ref_id": "BIBREF3"
},
{
"start": 255,
"end": 283,
"text": "(Navlea and Todirascu, 2011;",
"ref_id": "BIBREF24"
},
{
"start": 284,
"end": 295,
"text": "List, 2012)",
"ref_id": "BIBREF18"
},
{
"start": 347,
"end": 362,
"text": "(Kondrak, 2000)",
"ref_id": "BIBREF14"
},
{
"start": 408,
"end": 427,
"text": "(Brew et al., 1996;",
"ref_id": "BIBREF1"
},
{
"start": 428,
"end": 455,
"text": "Navlea and Todirascu, 2011)",
"ref_id": "BIBREF24"
},
{
"start": 458,
"end": 479,
"text": "Hall and Klein (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The issue of cross-lingual symbol alignment also arises in other tasks and similar approaches are used. For example, methods for computing language similarity from the surface form fall into the same categories described above for cognate identification (Batagelj et al., 1992; Kita, 1999; Petroni and Serva, 2008; Gamallo et al., 2017) .",
"cite_spans": [
{
"start": 254,
"end": 277,
"text": "(Batagelj et al., 1992;",
"ref_id": "BIBREF0"
},
{
"start": 278,
"end": 289,
"text": "Kita, 1999;",
"ref_id": "BIBREF13"
},
{
"start": 290,
"end": 314,
"text": "Petroni and Serva, 2008;",
"ref_id": "BIBREF26"
},
{
"start": 315,
"end": 336,
"text": "Gamallo et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Unsupervised or semi-supervised learning of multilingual representations has been addressed at other levels of analysis (e.g. Kuhn, 2004; Snyder et al., 2009; Christodoulopoulos et al., 2012) . Many could be applied to unsupervised typology, since linguistic typology concerns all levels of analysis, so are complementary to that we present. Conneau et al. (2017) present unsupervised learning of multilingual word embeddings. This could be applied to low-resourced languages and combined with our method to identify words that are related in both etymology and meaning (the Specific Homologue Detection Problem, List, 2014).",
"cite_spans": [
{
"start": 126,
"end": 137,
"text": "Kuhn, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 138,
"end": 158,
"text": "Snyder et al., 2009;",
"ref_id": "BIBREF31"
},
{
"start": 159,
"end": 191,
"text": "Christodoulopoulos et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 342,
"end": 363,
"text": "Conneau et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Conneau et al.'s learning problem is similar to ours, applied to word meaning rather than symbol correspondence. Whilst a similar technique could perhaps be applied to the present task, our method focuses specifically on similarities in local contexts of symbol use, rather than similarities in the structure of embedding spaces, which are less informative in the case of small vocabularies of characters or phonemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We describe a model that assigns language modeltype scores to short sequences of symbols. We train the model and use the learned embeddings and n-gram composition function. We are not ultimately interested in the predictive model, only the derived representations. The learning technique follows other representation learning algorithms (such as Mikolov et al., 2013) in using negative sampling. However, these methods cannot be applied directly, since the fact that the vocabularies of observed contexts are distinct for the two languages means they are unable to discover similarities between characters across languages.",
"cite_spans": [
{
"start": 346,
"end": 367,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "p o h j a n L-ngram R-ngram L-vec R-vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In each sample seen at training time (pohjan in Fig. 1 ), all characters are from the same language, so the vectors for a Finnish character are affected only by other Finnish characters surrounding it. It is therefore possible that the resulting embeddings are grouped by language, effectively learning an independent predictor for each language. Training a high-capacity model (like an RNN) on multilingual data tends to result in this outcome. However, limiting the capacity of the network can force the model to share information between the languages at the level of embeddings. It then benefits the model to learn embeddings that exploit similarities across languages in the relationships between adjacent character sequences within a language. For example, if a is often followed by b in both languages, and there are also similarities between usage of 1:b and 2:b, the model can exploit this by learning similar vectors for 1:a and 2:a, and simultaneously 1:b and 2:b.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 54,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Our unsupervised representation learning method, XSYM, consists of a feedforward neural network (Fig. 1 ) that takes as input a short sequence of characters and predicts whether or not it is a real sample from one of the languages in the training data. The character vocabularies are distinguished in the input: e.g. fi:a is distinct from et:a. The required limitation of capacity mentioned above is achieved by limiting the size of the layers and using only a small number of layers for the predictor.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "(Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "The length of the input sequence is variable. Each side (L-ngram and R-ngram) may be a single symbol, represented by the symbol's embedding (which becomes L-vec/R-vec), or a bi-or tri-gram, whose embeddings are concatenated and projected by a linear transformation to get a vector for the ngram, L-vec or R-vec. Separate transformations are learned for bi-grams and tri-grams. The same embeddings are used in each input position and the same composition function on both sides. L-vec and R-vec have the same size as the embeddings learned for individual characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "The outputs of the two compositions are passed to a predictor function: a tanh layer and a sigmoid activation for the final output node, P . Varying the size of the two ngrams (L-and R-ngram) independently, so that a bigram is sometimes observed beside a unigram, sometimes a bigram, etc, causes the composed representations to reside in the same vector space, since they are inputs to the same predictor function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "In the experiments, we use an embedding (and composed n-gram representation) size of 30. The hidden layer in the predictor also has 30 nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
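{
"text": "To make the architecture concrete, the following is a minimal PyTorch sketch of the predictor described above; the class and method names, batching details, and use of bias terms are illustrative assumptions rather than details from the paper, and the training particulars (dropout, norm constraint) of Section 3.2 are omitted.

import torch
import torch.nn as nn

class XSym(nn.Module):
    # Language-tagged symbol embeddings, n-gram composition by linear
    # projection, and a small tanh predictor with sigmoid output P.
    def __init__(self, vocab_size, dim=30):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)   # one row per language-tagged symbol
        self.compose2 = nn.Linear(2 * dim, dim)    # bigram composition
        self.compose3 = nn.Linear(3 * dim, dim)    # trigram composition
        self.hidden = nn.Linear(2 * dim, dim)      # input: [L-vec; R-vec]
        self.out = nn.Linear(dim, 1)

    def ngram_vec(self, ids):
        # ids: (batch, n) with n in {1, 2, 3}; the same embeddings are used
        # in every position, and the composed vector has the embedding size.
        e = self.emb(ids)
        if ids.shape[1] == 1:
            return e.squeeze(1)
        flat = e.reshape(e.shape[0], -1)
        return self.compose2(flat) if ids.shape[1] == 2 else self.compose3(flat)

    def forward(self, left_ids, right_ids):
        lr = torch.cat([self.ngram_vec(left_ids), self.ngram_vec(right_ids)], dim=1)
        return torch.sigmoid(self.out(torch.tanh(self.hidden(lr))))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},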
{
"text": "Positive samples are taken by passing a sliding window over the text, alternating corpora. Each positive sample is accompanied by a randomly generated negative. The positive and negative output values are used with a Bayesian Personalized Rank (BPR) objective function for training. BPR has been successfully used for similar representation learning tasks, where negative data is not directly available: it encourages negative samples to be ranked lower than corresponding positives (Riedel et al., 2013) .",
"cite_spans": [
{
"start": 483,
"end": 504,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
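{
"text": "A minimal sketch of the BPR objective for paired positive and negative samples; applying it directly to the network's output P is an assumption, since the paper does not specify whether scores are taken before or after the sigmoid.

import torch.nn.functional as F

def bpr_loss(pos, neg):
    # Bayesian Personalized Rank: push each positive sample's score above
    # its paired negative's, via -log sigmoid(pos - neg), averaged.
    return -F.logsigmoid(pos - neg).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},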
{
"text": "The sizes of the L-and R-ngrams are drawn independently at random. Each negative sample replaces either the L-or R-ngram of its corresponding positive (randomly, either poh or jan in Fig. 1 ) with characters drawn independently from the unigram distribution of the language of the sample.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 189,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
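{
"text": "A sketch of the negative sampling step under the description above; representing samples as character tuples and passing the unigram distribution as parallel lists are assumptions.

import random

def make_negative(left, right, chars, weights):
    # Replace either the L- or the R-ngram (chosen at random) with
    # characters drawn independently from the unigram distribution of
    # the sample's language, given as parallel lists chars/weights.
    corrupt_left = random.random() < 0.5
    ngram = left if corrupt_left else right
    fake = tuple(random.choices(chars, weights=weights, k=len(ngram)))
    return (fake, right) if corrupt_left else (left, fake)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},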
{
"text": "All parameters, including embeddings, are initialized randomly. Dropout is applied to the embeddings and composed n-grams and a unit norm constraint is placed on the embeddings. We train using stochastic gradient descent with Adam learning rate adaptation, batch size 1000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "The learned embeddings are affected by random initialization. As is typical in unsupervised learning, there is no simple way to select the best model, since we cannot evaluate the learned representations on a validation set. Conneau et al. (2017) define an unsupervised validation criterion to handle this problem in unsupervised alignment of word embeddings, which they use for model selection, as a proxy for word translation accuracy.",
"cite_spans": [
{
"start": 225,
"end": 246,
"text": "Conneau et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Validation criterion",
"sec_num": "3.3"
},
{
"text": "Eqn. 1 defines a validation criterion for a trained set of embeddings, nn-sim. To the extent that it correlates with the accuracy of correspondences found in the embeddings, it is suitable for model selection. To the extent that this holds throughout training, it can also be used for early stopping. We test these correlations in the next section. Given embeddings for languages A and B, we compute for each character in A the cosine similarity to its nearest neighbour from B, and take the mean over A's characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation criterion",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "nn-sim = 1 |A| X a2A min b2B cos(a, b)",
"eq_num": "(1)"
}
],
"section": "Validation criterion",
"sec_num": "3.3"
},
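{
"text": "A direct NumPy implementation of Eqn. 1, assuming the embeddings of each language are given as a matrix with one row per character:

import numpy as np

def nn_sim(A, B):
    # Mean, over the characters of A, of the cosine similarity to the
    # nearest neighbour among the characters of B.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return (A @ B.T).max(axis=1).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation criterion",
"sec_num": "3.3"
},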
{
"text": "Eqn. 2 defines an evaluation metric pair-rank that can be computed where the desired pair correspondences (a, b) 2 C are known. It measures how well the correspondences are retrieved by the embeddings . For each (a, b) , we compute the rank, by cosine distance from a, of b among all characters in B, normalized by the size of B. We compute the same in the opposite direction and take the average of all values. A lower value reflects a better retrieval of correspondences.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 218,
"text": ". For each (a, b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Validation criterion",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "pair-rank = 1 2|C| X (a,b)2C rank b (cos(a, B)) |B| + rank a (cos(b, A)) |A|",
"eq_num": "(2)"
}
],
"section": "Validation criterion",
"sec_num": "3.3"
},
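{
"text": "Correspondingly, a sketch of Eqn. 2, where pairs holds the row indices of the known correspondences (a, b):

import numpy as np

def pair_rank(A, B, pairs):
    # Normalized rank, by cosine distance, of each character's known
    # counterpart, averaged over both directions; lower is better.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sims = A @ B.T
    total = 0.0
    for i, j in pairs:
        rank_b = (sims[i] > sims[i, j]).sum() + 1     # rank of b among B's characters
        rank_a = (sims[:, j] > sims[i, j]).sum() + 1  # rank of a among A's characters
        total += rank_b / B.shape[0] + rank_a / A.shape[0]
    return total / (2 * len(pairs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation criterion",
"sec_num": "3.3"
},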
{
"text": "4 Experiments with artificial data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation criterion",
"sec_num": "3.3"
},
{
"text": "For any pair of related languages, we expect to find a spectrum of correspondences between their characters, ranging from some very close pairs, through weaker correspondences, to no correspondence at all. There exists no gold-standard list of correspondences that a good model should find, making it difficult to evaluate representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "4.1"
},
{
"text": "(a) Kaiken t\u00e4m\u00e4n lis\u00e4ksi saan hellyytt\u00e4 ja l\u00e4mp\u00f6\u00e4 sek\u00e4 saan antaa sit\u00e4 . (b) K\u00eaikei \u00eb\u00e4m\u00e4n o?t\u00e4nsi n\u00ea\u00ean hssG\u00f6\u00f6\u00eb\u00eb\u00e4 \u00ea\u00ea H\u00e4m\u00de?\u00e4 sAEi\u00e4 s\u00ea\u00ebn on\u00eb?\u00ea si\u00eb\u00e4 a Figure 2 : Example sentence from the YLILAUTA corpus in its original form (a) and with the highest level of all three types of corruption (b). The model is trained on an uncorrupted portion of the corpus as one language and a distinct subset to which this corruption has been applied.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "4.1"
},
{
"text": "We begin by testing XSYM on artificial datasets. We apply several types of corruption to real linguistic data, replacing some characters at random and combining or splitting others, then treat the corrupted data as a new language, with a distinct character set. The result is in some respects superficially similar to the relationship between related languages and presents similar challenges to the learning method. Crucially, having corrupted the data by known processes, we know which correspondences a successful method should recover.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "4.1"
},
{
"text": "First, we use corrupted data to measure how well the validation criterion nn-sim correlates with retrieval of known correspondences, measured by pair-rank. Then we analyze how robust the method is to the different types of corruption to get some insight into how it behaves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "4.1"
},
{
"text": "We apply three different types of corruption. The input data has a character vocabulary V i , the corrupted data V o which may be different, since some corruptions add or remove characters. Corruptions are applied in the order presented. An example of the resulting text is given in Fig. 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 289,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},
{
"text": "Random noise: Randomly sample a given proportion p noise of character tokens and, for each, sample a character at random to replace it with from the unigram distribution over V i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},
{
"text": "Systematic mapping: Systematically substitute a character a (randomly chosen from V i ) with b (randomly chosen from V o ). The resulting bs are indistinguishable from those that were bs in the input. a is now not in V o , since it never occurs in the corrupted data. Characters are chosen for mapping until the expected proportion of tokens affected is >p map . Since characters are sampled greedily to preserve randomness, the actual proportion,p map , may be greater than p map . as for mapping, until the expected proportion affected is >p split . The actual proportion isp split .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},
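{
"text": "A sketch of the corruption pipeline, applied in the order presented; for brevity, the mapping and splitting substitutions are assumed to have been chosen already (as dicts), rather than sampled greedily up to the target proportions.

import random
from collections import Counter

def corrupt(text, p_noise, mapping, splits):
    # mapping: {a: b} systematic substitutions; splits: {a: b} where each
    # occurrence of a stays a or becomes the new character b, at random.
    counts = Counter(text)
    chars, weights = list(counts), list(counts.values())
    out = []
    for ch in text:
        if random.random() < p_noise:               # random noise
            ch = random.choices(chars, weights=weights)[0]
        ch = mapping.get(ch, ch)                    # systematic mapping
        if ch in splits and random.random() < 0.5:  # character splitting
            ch = splits[ch]
        out.append(ch)
    return ''.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},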
{
"text": "We train embeddings using XSYM with two corpora, as if they represented different languages. The first is a randomly chosen subset of 95k documents from the YLILAUTA corpus of Finnish forum posts 2 . The second is a distinct subset of the same size, to which the corruptions have been applied. We run the training under different levels of each type of corruption, applying all 27 combinations of p = 0, 0.15, 0.3 for p noise , p map and p split .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},
{
"text": "In the first experiment, we measure the correlation between nn-sim and pair-rank. We train each model once for exactly 10 corpus iterations, outputting both metrics every 500k samples, resulting in 70 measures per model. In the second, we train all models again, using nn-sim as an unsupervised criterion for early stopping and model selection over 5 random intializations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},
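{
"text": "A sketch of the model-selection protocol; train_fn is an assumed training entry point returning the two languages' embedding matrices, and nn_sim is as sketched in Section 3.3.

def select_model(train_fn, n_inits=5):
    # Train from several random initializations and keep the embeddings
    # scoring highest on the unsupervised nn-sim criterion.
    best, best_score = None, float('-inf')
    for seed in range(n_inits):
        emb_a, emb_b = train_fn(seed)
        score = nn_sim(emb_a, emb_b)
        if score > best_score:
            best, best_score = (emb_a, emb_b), score
    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruptions",
"sec_num": "4.2"
},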
{
"text": "Testing validation criterion nn-sim. We find a Pearson correlation coefficient (PCC) of r = 0.79 between nn-sim and pair-rank from the 1,890 measurements taken during training. The high correlation suggests that nn-sim is a good criterion to use for early stopping. Furthermore, measuring only at the end of training, we get r = 0.83, supporting the use of nn-sim to choose between embeddings from alternative initializations. We can expect that embeddings that maximize nn-sim would also have maximized (or close) pair-rank, had we been able to measure it using known correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Testing effect of corruptions. Training all models with early stopping and model selection, we measured the correlation between the level of each corruption (and the sum of the three) and the pairrank of the final embeddings (Table 1) . We also report the slope of the regression between the corruption levels and pair-rank. Values of pair-rank range from 6%, for a low level of corruption, to 37% for a high level, with a mean of 16% over all 27 tests.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 234,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "There is a high correlation for character mapping: the more characters are conflated with others in the vocabulary, the harder it is to identify the correspondences. This is unsurprising: to maintain the same level of accuracy after a mapping a ) b, the method must recognize the similarity in the contexts of 2:b in the corrupted data to those of both 1:a and 1:b in the uncorrupted data. The contextual distribution of 2:b's usage is in effect the average of those of 1:a and 1:b, so becomes hard to identify with either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "There is a relatively low correlation for random noise. The method is robust to this corruption, which obscures the regularities in the data, but has no systematic effect on the contextual distributions of any of the symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "There is no correlation for character splitting. When 1:a is split at random so that it appears as either 2:a or the newly added 2:b, both 2:a and 2:b can be expected to have similar contextual distributions to 1:a. The splitting reduces the amount of data from which to infer the distributions, but does not prevent the model from discovering the similarity, even under high levels of other corruptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "These results suggest promisingly that XSYM is effective at recovering correspondences between symbols in two datasets where there are similarities in the symbols' contexts of use. It is impossible to know how these different types and levels of corruption correspond to the difficulties the method faces dealing with real data. However, this experiment confirms that the model is discovering and exploiting the sort of distributional similarities that we would hope, even where the contextual distributions are not directly comparable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We now apply XSYM to real linguistic data. To ensure that the method is not exploiting similarities between two corpora due to a shared domain (e.g., prevalence of particular cognate words peculiar to that domain), we apply it to corpora from unrelated domains, as well as in-domain pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with linguistic corpora",
"sec_num": "5"
},
{
"text": "We first compare Finnish and Estonian. Whilst not low-resourced languages, it is easier to interpret results from these well-studied, closely related languages, and they are a good starting point for studying low-resourced Uralic languages. For Finnish, we use the YLILAUTA corpus again. For Estonian, we use the newspaper portion of the Estonian Reference Corpus, balanced subcorpus (Kaalep et al., 2010, henceforth EST-REF-NEWS) . We use only the first 190k documents in Ylilauta, to match the size of EST-REF-NEWS (\u21e05.8M tokens). We lower-case the text to simplify analysis and treat very rare characters (< 500 occurrences) as a single out-of-vocabulary token. We also run on a single-domain corpus pair, to see how the outcome is affected by comparable versus noncomparable corpora. We train on YLILAUTA together with the forum portion of the Estonian Reference Corpus (\u21e06.4M tokens, henceforth EST-REF-FORUM). Training parameters are identical to the previous section and nn-sim is used for early stopping and model selection.",
"cite_spans": [
{
"start": 384,
"end": 430,
"text": "(Kaalep et al., 2010, henceforth EST-REF-NEWS)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with linguistic corpora",
"sec_num": "5"
},
{
"text": "We also apply the method to several combinations of low-resourced Uralic (North Finnic) languages: two dialects of Karelian (Olonets and North Karelian) and the severely endangered Ingrian language (\u21e0130 speakers). All corpora are Bible translations from the University of Helsinki Corpus Server 3 , with \u21e0150k, 200k and 30k tokens respectively. We report metrics for some pairs within low-resourced languages and also for Ingrian-Finnish, since many applications will involve comparing a low-resourced language to a better-resourced one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with linguistic corpora",
"sec_num": "5"
},
{
"text": "Since this is an unsupervised learning task and there is no gold-standard set of correspondences, we cannot directly evaluate the embeddings quantitatively. Ultimately, their value will be tested by their usefulness in a downstream task, such as cognate discovery, but we leave this to future work. Fig. 3 shows reductions to 2D using multidimensional scaling (MDS) of the embeddings trained on Finnish and Estonian with mixed domains. We show a plot of the embeddings for all individual characters and another including the most frequent character bigrams and trigrams in each language. Fig. 4 shows single-character embeddings for single-domain corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 305,
"text": "Fig. 3",
"ref_id": "FIGREF1"
},
{
"start": 588,
"end": 594,
"text": "Fig. 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
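{
"text": "A sketch of the 2D reduction used for the plots; the choice of cosine dissimilarity as the precomputed distance is an assumption, since the paper does not state the metric used for MDS.

import numpy as np
from sklearn.manifold import MDS

def project_2d(embeddings):
    # Reduce character/n-gram embeddings to 2D via multidimensional
    # scaling on a precomputed cosine-dissimilarity matrix.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = 1.0 - X @ X.T
    np.fill_diagonal(dist, 0.0)
    return MDS(n_components=2, dissimilarity='precomputed').fit_transform(dist)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},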
{
"text": "The plots give a broad notion of the layout of the space, but poorly reflect proximity between individual pairs. We also present statistics about the proximity of common characters with frequency 0.5% in both corpora (e.g. fi:t-et:t) in Table 2 . We measure where et:t appears in a ranking of all Estonian characters by proximity to fi:t, and average over all pairs, in both directions. We also report the percentage of cases where the identical character is the nearest (R@1) and within the nearest 3 characters (R@3) in the other language.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
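{
"text": "A sketch of the sanity-check statistics in Table 2, computed in one direction (averaging both directions, as described above, is left to the caller); shared holds the index pairs of the identical character in the two vocabularies.

import numpy as np

def identical_char_stats(A, B, shared):
    # Returns mean pair rank (MPR), R@1 and R@3 for characters common to
    # both corpora, ranking language B's characters by cosine similarity.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sims = A @ B.T
    ranks = np.array([(sims[i] > sims[i, j]).sum() + 1 for i, j in shared])
    return ranks.mean(), (ranks == 1).mean(), (ranks <= 3).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},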
{
"text": "Importantly, this is not an evaluation metric, but rather a sanity check: a lower value does not necessarily reflect better embeddings, since there may be good reasons to map non-identical characters close to each other. (Indeed, this is one of the motivations for our approach.) However, the fact that the ranking is typically low is an encouraging sign that the method is succeeding in discovering meaningful correspondences between the languages. Moreover, we see no clear difference in this respect between cross-domain and in-domain learning. The results for Uralic languages demonstrate the applicability of the method to small datasets for low-resourced languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "To give further insight into what is being captured, in Table 3 we present, for two language pairs, nearest neighbours across languages for all cases where the nearest was not the identical character. Of particular interest here are the discovered close correspondences \u0161-s and y-\u00fc between North Karelian and Olonets. Table 4 shows, for one pair in one direction, other near neighbours where the nearest is the same character.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 318,
"end": 325,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "We plan to perform extrinsic evaluation of learned embeddings, like Tsvetkov et al. (2016) , testing the embeddings on downstream tasks. One example is cognate discovery, where the learned similarities may bring advantages over the assumed or initial correspondences used in related work, for example where distinct symbol sets are used. Learned similarities can be incorporated into many existing cognate discovery methods (e.g. Kondrak, 2009) Table 2 : Correspondence between common characters for cross-domain and in-domain models, as a sanity check. Chars 1 and 2 are the number of characters in each language's vocabulary after the frequency filter. Mean pair rank (MPR): mean rank of a character by cosine similarity to its identical character in the other language. R@1 is the proportion that are nearest neighbours, R@3 the proportion that are within the three closest.",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "Tsvetkov et al. (2016)",
"ref_id": null
},
{
"start": 430,
"end": 444,
"text": "Kondrak, 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Future work",
"sec_num": "6"
},
{
"text": "n , s i \u00e4 . A potential benefit of this method is its ability to capture correspondences between different lengths of n-grams, not just individual symbols. In our analysis (Fig. 3) we have used this by including a language's most common n-grams in projections, but other ways to select pertinent correspondences are possible, for example taking into account similarities or the structure of the vector space as well as frequency.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "(Fig. 3)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ing Fi",
"sec_num": null
},
{
"text": "XSYM is similar to Polyglot language models (Tsvetkov et al., 2016) . We have suggested, but not demonstrated here, that it is better suited to direct comparison of symbols. Investigation of the properties of representations learned by the two methods is required and we will test XSYM on the tasks reported by Tsvetkov et al. (2016) .",
"cite_spans": [
{
"start": 44,
"end": 67,
"text": "(Tsvetkov et al., 2016)",
"ref_id": null
},
{
"start": 311,
"end": 333,
"text": "Tsvetkov et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ing Fi",
"sec_num": null
},
{
"text": "We plan to apply XSYM to other symbol sequences, in particular, to sequences of phonetic symbols from speech (like List, 2014) . It may be possible to use automatic transcriptions that do not require language-specific transcribers, since the symbols need not correspond to linguistically motivated systems, such as IPA. Although designed for learning about linguistic sequences, XSYM could potentially also be applied also to nonlinguistic data to discover links between sequences that use distinct vocabularies. We will investigate what characteristics of sequences are essential in finding useful abstractions (e.g. vocabulary size).",
"cite_spans": [
{
"start": 115,
"end": 126,
"text": "List, 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ing Fi",
"sec_num": null
},
{
"text": "We have presented an unsupervised method that uses a neural network to learn vector representa-tions of symbols and short n-grams on the basis of their contexts observed in sequences. It is able to learn comparable representations of symbols from multiple languages that use distinct symbol sets, learning to exploit similarities in the context distributions of the symbols across languages, even though the symbols in the contexts are also drawn from distinct vocabularies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We have demonstrated the method's ability to recover mappings between vocabularies, even when they are obscured by ambiguity in the mappings and noise, provided that the noise does not obscure the distributions over the symbols' contexts too much. We then showed some results of applying the method to real linguistic data, focusing here on characters in text and several Uralic language pairs. We found that it was able to recognize many characters that are common to the corpus pairs as being closely related by their contexts of use. An even closer correspondence was found between closely related, low-resourced dialects, despite a much smaller training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The learned similarities between symbols provide a way to bootstrap discovery of other linguistic similarities, such as morphology or cognate words. We leave testing on these applications to future work and have presented here some analysis of the learned representations, which appear highly promising. We suggest that the results have great potential as a first step in fully unsupervised linguistic typology. Discovered correspondences may also be able to tell us about typology in themselves. For example, some measures of orthographic difference and sound correspondences correlate with geographic factors in language development (Heeringa et al., 2013; Proki\u0107 and Cysouw, 2013) . Discovered strong symbol correspondences (especially if the method is applied to phonetic sequences) could also be of typological interest in themselves.",
"cite_spans": [
{
"start": 635,
"end": 658,
"text": "(Heeringa et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 659,
"end": 683,
"text": "Proki\u0107 and Cysouw, 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The method presented is a generic representation learning technique for symbol sequences. As well as text, it could also be applied to other linguistic sequences, such a phonetic transcriptions, and potentially even to non-linguistic sequences. On the basis of the encouraging initial results presented here, we suggest that it warrants further investigation, including linguistic applications, such as unsupervised cognate discovery, and other aspects of linguistic typology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://mark.granroth-wilding.co.uk/ papers/unsup_symbol/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://urn.fi/urn:nbn:fi: lb-2015031802",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://urn.fi/urn:nbn:fi:lb-201403269",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the Academy of Finland Digital Language Typology project (no. 12933481).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic clustering of languages",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Batagelj",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Pisanski",
"suffix": ""
},
{
"first": "Damijana",
"middle": [],
"last": "Ker\u017ei\u010d",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "3",
"pages": "339--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Batagelj, Toma\u017e Pisanski, and Damijana Ker\u017ei\u010d. 1992. Automatic clustering of languages. Computational Linguistics, 18(3):339-352.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word-pair extraction for lexicography",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Brew",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mckelvie",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 2nd International Conference on New Methods in Language Processing",
"volume": "",
"issue": "",
"pages": "45--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Brew, David McKelvie, et al. 1996. Word-pair extraction for lexicography. In Proceedings of the 2nd International Conference on New Methods in Language Processing, pages 45-55.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure",
"volume": "",
"issue": "",
"pages": "96--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2012. Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 96-99.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic detection of cognates using orthographic alignment",
"authors": [
{
"first": "Alina",
"middle": [],
"last": "Maria Ciobanu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liviu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dinu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the ACL",
"volume": "2",
"issue": "",
"pages": "99--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alina Maria Ciobanu and Liviu P Dinu. 2014. Au- tomatic detection of cognates using orthographic alignment. In Proceedings of the 52nd Annual Meet- ing of the ACL, volume 2, pages 99-105.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. CoRR, abs/1710.04087.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "String similarity measures and PAM-like matrices for cognate identification",
"authors": [
{
"first": "Antonella",
"middle": [],
"last": "Delmestri",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonella Delmestri and Nello Cristianini. 2010. String similarity measures and PAM-like matrices for cognate identification. Bucharest Working Pa- pers in Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic identification of cognates and false friends in French and English",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Frunza",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "251--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak Diana Inkpen, Oana Frunza. 2005. Automatic identification of cognates and false friends in French and English. In Proceedings of the International Conference Recent Advances in Natu- ral Language Processing, pages 251-257.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Does instruction about phonological correspondences contribute to the intelligibility of a related language?",
"authors": [
{
"first": "Fenna",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Femke",
"middle": [],
"last": "Swarte",
"suffix": ""
},
{
"first": "Charlotte",
"middle": [],
"last": "Gooskens",
"suffix": ""
}
],
"year": 2014,
"venue": "Dutch Journal of Applied Linguistics",
"volume": "3",
"issue": "1",
"pages": "45--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bergsma Fenna, Swarte Femke, and Gooskens Char- lotte. 2014. Does instruction about phonological correspondences contribute to the intelligibility of a related language? Dutch Journal of Applied Lin- guistics, 3(1):45-61.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From language identification to language distance",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Gamallo",
"suffix": ""
},
{
"first": "I\u00f1aki",
"middle": [],
"last": "Jos\u00e9 Ramom Pichel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alegria",
"suffix": ""
}
],
"year": 2017,
"venue": "Physica A: Statistical Mechanics and its Applications",
"volume": "484",
"issue": "",
"pages": "152--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Gamallo, Jos\u00e9 Ramom Pichel, and I\u00f1aki Alegria. 2017. From language identification to language dis- tance. Physica A: Statistical Mechanics and its Ap- plications, 484:152-162.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Measuring spelling similarity for cognate identification",
"authors": [
{
"first": "Lu\u00eds",
"middle": [],
"last": "Gomes",
"suffix": ""
},
{
"first": "Jos\u00e9 Gabriel Pereira",
"middle": [],
"last": "Lopes",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Portuguese Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "624--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu\u00eds Gomes and Jos\u00e9 Gabriel Pereira Lopes. 2011. Measuring spelling similarity for cognate identifica- tion. In Proceedings of the Portuguese Conference on Artificial Intelligence, pages 624-633.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding cognate groups using phylogenies",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "1030--1039",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall and Dan Klein. 2010. Finding cognate groups using phylogenies. In Proceedings of the 48th Annual Meeting of the ACL, pages 1030-1039.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lexical and orthographic distances between Germanic, Romance and Slavic languages and their relationship to geographic distance. Phonetics in Europe: Perception and Production",
"authors": [
{
"first": "Wilbert",
"middle": [],
"last": "Heeringa",
"suffix": ""
},
{
"first": "Jelena",
"middle": [],
"last": "Golubovic",
"suffix": ""
},
{
"first": "Charlotte",
"middle": [],
"last": "Gooskens",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Sch\u00fcppert",
"suffix": ""
},
{
"first": "Femke",
"middle": [],
"last": "Swarte",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Voigt",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "99--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilbert Heeringa, Jelena Golubovic, Charlotte Gooskens, Anja Sch\u00fcppert, Femke Swarte, and Stefanie Voigt. 2013. Lexical and orthographic distances between Germanic, Romance and Slavic languages and their relationship to geographic distance. Phonetics in Europe: Perception and Production, pages 99-137.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Estonian Reference Corpus: Its composition and morphology-aware user interface",
"authors": [
{
"first": "Kadri",
"middle": [],
"last": "Heiki-Jaan Kaalep",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Muischnek",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 4th International Conference Baltic HLT",
"volume": "",
"issue": "",
"pages": "143--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heiki-Jaan Kaalep, Kadri Muischnek, Kristel Uiboaed, and Kaarel Veskis. 2010. The Estonian Reference Corpus: Its composition and morphology-aware user interface. In Proceedings of the 4th Interna- tional Conference Baltic HLT, pages 143-146.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic clustering of languages based on probabilistic models",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Kita",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Quantitative Linguistics",
"volume": "6",
"issue": "2",
"pages": "167--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Kita. 1999. Automatic clustering of languages based on probabilistic models. Journal of Quantita- tive Linguistics, 6(2):167-171.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A new algorithm for the alignment of phonetic sequences",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st NAACL",
"volume": "",
"issue": "",
"pages": "288--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of the 1st NAACL, pages 288-295.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Identification of cognates and recurrent sound correspondences in word lists",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2009,
"venue": "TAL",
"volume": "50",
"issue": "",
"pages": "201--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2009. Identification of cognates and recurrent sound correspondences in word lists. TAL, 50:201-235.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Algorithms for language reconstruction",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak and Graeme Hirst. 2002. Algo- rithms for language reconstruction. Ph.D. thesis, University of Toronto.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Experiments in parallel-text based grammar induction",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Kuhn. 2004. Experiments in parallel-text based grammar induction. In Proceedings of the 42nd An- nual Meeting of the ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lexstat: Automatic detection of cognates in multilingual wordlists",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH",
"volume": "",
"issue": "",
"pages": "117--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List. 2012. Lexstat: Automatic de- tection of cognates in multilingual wordlists. In Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH, pages 117-125.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sequence comparison in historical linguistics",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2014,
"venue": "Dissertations in Language and Cognition",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List. 2014. Sequence comparison in historical linguistics. Ph.D. thesis, Heinrich-Heine- Universit\u00e4t D\u00fcsseldorf. Dissertations in Language and Cognition, 1.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Phonologically informed edit distance algorithms for word alignment with low-resource languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Society for Computation in Linguistics (SCiL",
"volume": "",
"issue": "",
"pages": "102--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard T. McCoy and Robert Frank. 2018. Phono- logically informed edit distance algorithms for word alignment with low-resource languages. In Proceed- ings of the Society for Computation in Linguistics (SCiL) 2018, pages 102-112.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic prediction of cognate orthography using support vector machines",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Mulloni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the ACL: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Mulloni. 2007. Automatic prediction of cog- nate orthography using support vector machines. In Proceedings of the 45th Annual Meeting of the ACL: Student Research Workshop, pages 25-30.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic detection of orthographic cues for cognate recognition",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Mulloni",
"suffix": ""
},
{
"first": "Viktor",
"middle": [],
"last": "Pekar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC'06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Mulloni and Viktor Pekar. 2006. Automatic de- tection of orthographic cues for cognate recognition. Proceedings of LREC'06.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Using cognates in a French-Romanian lexical alignment system: A comparative study",
"authors": [
{
"first": "Mirabela",
"middle": [],
"last": "Navlea",
"suffix": ""
},
{
"first": "Amalia",
"middle": [],
"last": "Todirascu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "247--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirabela Navlea and Amalia Todirascu. 2011. Using cognates in a French-Romanian lexical alignment system: A comparative study. In Proceedings of the International Conference Recent Advances in Natu- ral Language Processing, pages 247-253.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Continuous multilinguality with language vectors",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "\u00d6stling",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert \u00d6stling and J\u00f6rg Tiedemann. 2016. Continu- ous multilinguality with language vectors. CoRR, abs/1612.07486.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Language distance and tree reconstruction",
"authors": [
{
"first": "Filippo",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Serva",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Statistical Mechanics: Theory and Experiment",
"volume": "",
"issue": "08",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filippo Petroni and Maurizio Serva. 2008. Lan- guage distance and tree reconstruction. Journal of Statistical Mechanics: Theory and Experiment, 2008(08):P08012.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Fuzzy translation of cross-lingual spelling variants",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Pirkola",
"suffix": ""
},
{
"first": "Jarmo",
"middle": [],
"last": "Toivonen",
"suffix": ""
},
{
"first": "Heikki",
"middle": [],
"last": "Keskustalo",
"suffix": ""
},
{
"first": "Kari",
"middle": [],
"last": "Visala",
"suffix": ""
},
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th Annual International ACM SIGIR Conference",
"volume": "",
"issue": "",
"pages": "345--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Pirkola, Jarmo Toivonen, Heikki Keskustalo, Kari Visala, and Kalervo J\u00e4rvelin. 2003. Fuzzy transla- tion of cross-lingual spelling variants. In Proceed- ings of the 26th Annual International ACM SIGIR Conference, pages 345-352.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Combining regular sound correspondences and geographic spread",
"authors": [
{
"first": "Jelena",
"middle": [],
"last": "Proki\u0107",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cysouw",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Dynamics and Change",
"volume": "3",
"issue": "2",
"pages": "147--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelena Proki\u0107 and Michael Cysouw. 2013. Combin- ing regular sound correspondences and geographic spread. Language Dynamics and Change, 3(2):147- 168.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multiple sequence alignments in linguistics",
"authors": [
{
"first": "Jelena",
"middle": [],
"last": "Proki\u0107",
"suffix": ""
},
{
"first": "Martijn",
"middle": [],
"last": "Wieling",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Nerbonne",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education, LaTeCH-SHELT&R '09",
"volume": "",
"issue": "",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelena Proki\u0107, Martijn Wieling, and John Nerbonne. 2009. Multiple sequence alignments in linguistics. In Proceedings of the EACL 2009 Workshop on Lan- guage Technology and Resources for Cultural Her- itage, Social Sciences, Humanities, and Education, LaTeCH-SHELT&R '09, pages 18-25.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin M",
"middle": [],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL HLT 2013",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Pro- ceedings of NAACL HLT 2013, pages 74-84.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Unsupervised multilingual grammar induction",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder, Tahira Naseem, and Regina Barzi- lay. 2009. Unsupervised multilingual grammar in- duction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th In- ternational Joint Conference on Natural Language Processing of the AFNLP, pages 73-81.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Polyglot neural language models: A case study in cross-lingual phonetic representation learning",
"authors": [
{
"first": "Alan",
"middle": [
"W"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Lori",
"middle": [
"S"
],
"last": "Black",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mortensen, Alan W. Black, Lori S. Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. CoRR, abs/1605.03832.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Structure of the neural network used to learn cross-lingual embeddings. The embeddings are used in the bottom layer. The output is a value between 0 and 1 that is used in the BPR objective function, with either positive or negative examples provided at the inputs."
},
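The figure caption above describes the model only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of such a network: symbol embeddings feed the bottom layer, the output is squashed to (0, 1), and training uses a BPR-style ranking objective that scores observed (positive) symbol-context examples above sampled negative ones. The layer sizes, the tanh hidden layer, and the (symbol, context) input pairing are our assumptions, not details taken from the paper.

```python
# Hedged sketch of a BPR-trained scoring network over symbol embeddings.
# Architecture details (sizes, activations) are assumptions for illustration.
import torch
import torch.nn as nn

class SymbolScorer(nn.Module):
    def __init__(self, n_symbols, dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_symbols, dim)  # bottom layer: the embeddings being learned
        self.hidden = nn.Linear(2 * dim, dim)
        self.out = nn.Linear(dim, 1)

    def forward(self, sym, ctx):
        # Score a (symbol, context-symbol) pair; sigmoid keeps the output in (0, 1).
        x = torch.cat([self.emb(sym), self.emb(ctx)], dim=-1)
        return torch.sigmoid(self.out(torch.tanh(self.hidden(x)))).squeeze(-1)

def bpr_loss(model, sym, pos_ctx, neg_ctx):
    # BPR objective: rank each observed context above a sampled negative one.
    return -torch.log(
        torch.sigmoid(model(sym, pos_ctx) - model(sym, neg_ctx)) + 1e-8
    ).mean()
```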
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "MDS reductions of mixed-domain embeddings for Finnish (blue) and Estonian (green). Plot of individual characters (top) and most frequent character bigrams and trigrams (bottom). '7 !' represents space."
},
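Plots like the two figures above can be reproduced from learned embeddings with a standard multidimensional scaling (MDS) reduction to two dimensions. A minimal sketch follows, assuming cosine distance as the dissimilarity between embedding vectors (the paper does not specify this choice):

```python
# Minimal sketch: 2-D MDS reduction of symbol embeddings for plotting.
# Using cosine distance as the dissimilarity is an assumption.
import numpy as np
from sklearn.manifold import MDS

def mds_2d(embeddings: np.ndarray) -> np.ndarray:
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = 1.0 - unit @ unit.T  # pairwise cosine distance matrix
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)  # one 2-D point per symbol
```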
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "MDS reduction of single-domain, forum post embeddings for Finnish (b) and Estonian (g)."
},
"TABREF0": {
"text": "Systematic splitting: Randomly choose a character a from V o after the previous step, add new character b and randomly map half of as to b. Choose a number of characters in the same way",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">Metric PCC Slope</td></tr><tr><td>p noise p map p split sum</td><td>0.35 0.67 -0.12 -0.06 0.22 0.36 0.52 0.11</td></tr><tr><td colspan=\"2\">Table 1: Pearson correlation coefficient and regression slope between the level of each type of corruption and the pair-rank evaluation metric.</td></tr></table>"
},
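As a concrete reading of the splitting procedure described in the caption above, here is a hedged Python sketch; how the new symbol is supplied and the choice of random source are our assumptions:

```python
# Hedged sketch of the "systematic splitting" corruption: half of the
# occurrences of one randomly chosen character are rewritten as a new symbol.
import random

def systematic_split(text: str, new_symbol: str, rng: random.Random) -> str:
    a = rng.choice(sorted(set(text)))                 # character a to split
    positions = [i for i, c in enumerate(text) if c == a]
    rng.shuffle(positions)
    chars = list(text)
    for i in positions[: len(positions) // 2]:        # remap half of the occurrences to b
        chars[i] = new_symbol
    return "".join(chars)

# Example (hypothetical inputs): systematic_split("kissa istuu", "\u00df", random.Random(0))
```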
"TABREF3": {
"text": "Nearest neighbours across Finnish (Ylilauta)-Ingrian and North Karelian-Olonets, where the closest is not the same. Bold are not in the other language.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF4": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: Nearest Olonets neighbours to North Kare-lian, where the nearest is identical, down to a cosine similarity of 0.5.</td></tr></table>"
}
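The nearest-neighbour tables above can be computed from the learned embeddings with plain cosine similarity. A minimal sketch follows; the 0.5 threshold comes from the caption, while the function name and array layout are assumptions:

```python
# Minimal sketch: cross-lingual nearest neighbours by cosine similarity,
# keeping only matches above a threshold (0.5, as in the table caption).
import numpy as np

def nearest_neighbours(src: np.ndarray, tgt: np.ndarray, min_sim: float = 0.5):
    s = src / np.linalg.norm(src, axis=1, keepdims=True)
    t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sims = s @ t.T                      # cosine similarities, shape (|src|, |tgt|)
    best = sims.argmax(axis=1)          # index of the nearest target symbol
    return [(i, int(j), float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= min_sim]
```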
}
}
}