ACL-OCL / Base_JSON /prefixC /json /coling /2020.coling-main.105.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:42:46.736201Z"
},
"title": "Probing Multilingual BERT for Genetic and Typological Signals",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Texas",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technische Universit\u00e4t Darmstadt",
"location": {}
},
"email": "[email protected]"
}
],
"year": "2020",
"venue": null,
"identifiers": {},
"abstract": "We probe the layers in multilingual BERT (mBERT) for phylogenetic and geographic language signals across 100 languages and compute language distances based on the mBERT representations. We 1) employ the language distances to infer and evaluate language trees, finding that they are close to the reference family tree in terms of quartet tree distance, 2) perform distance matrix regression analysis, finding that the language distances can be best explained by phylogenetic and worst by structural factors and 3) present a novel measure of diachronic meaning stability (based on cross-lingual representation variability) which correlates significantly with published ranked lists based on linguistic approaches. Our results contribute to the nascent field of typological interpretability of cross-lingual text representations.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We probe the layers in multilingual BERT (mBERT) for phylogenetic and geographic language signals across 100 languages and compute language distances based on the mBERT representations. We 1) employ the language distances to infer and evaluate language trees, finding that they are close to the reference family tree in terms of quartet tree distance, 2) perform distance matrix regression analysis, finding that the language distances can be best explained by phylogenetic and worst by structural factors and 3) present a novel measure of diachronic meaning stability (based on cross-lingual representation variability) which correlates significantly with published ranked lists based on linguistic approaches. Our results contribute to the nascent field of typological interpretability of cross-lingual text representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cross-lingual text representations have become extremely popular in NLP, since they promise universal text processing in multiple human languages with labeled training data only in a single one. They go back at least to the work of Klementiev et al. (2012) , and have seen an exploding number of contributions in recent years. Recent cross-lingual models provide representations for about 100 languages and vary in their training objectives. In offline learning, cross-lingual representations are obtained by projecting independently trained monolingual representations into a shared representational space using bilingual lexical resources (Faruqui and Dyer, 2014; Artetxe et al., 2017) . In joint learning , the cross-lingual representations are learned directly, for example as a byproduct of large-scale machine translation (Artetxe and Schwenk, 2018) .",
"cite_spans": [
{
"start": 232,
"end": 256,
"text": "Klementiev et al. (2012)",
"ref_id": "BIBREF37"
},
{
"start": 641,
"end": 665,
"text": "(Faruqui and Dyer, 2014;",
"ref_id": "BIBREF23"
},
{
"start": 666,
"end": 687,
"text": "Artetxe et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 828,
"end": 855,
"text": "(Artetxe and Schwenk, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As parallel data is scarce for less frequent language pairs, the multilingual BERT model (mBERT) simply trains the BERT architecture (Devlin et al., 2019) on multilingual input from Wikipedia. The cross-lingual signal is thus only learned implicitly because mBERT uses the same representational space independent of the input language. This naive approach yields surprisingly high scores for cross-lingual downstream tasks, but the transfer does not work equally well for all languages. Pires et al. (2019) show that the performance differences between languages are gradual and that the representational similarity between languages seems to correlate with typological features. These relationships between languages remain opaque in cross-lingual representations and pose a challenge for the evaluation of their adequacy. Evaluations in downstream tasks are an unreliable approximation because they can often be solved without accounting for deep linguistic knowledge or for interdependencies between subgroups of languages (Liang et al., 2020) .",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 487,
"end": 506,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF50"
},
{
"start": 1026,
"end": 1046,
"text": "(Liang et al., 2020)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While more language-agnostic representations can be beneficial to improve the average performance in task-oriented settings and to smooth the performance differences between high-and low-resource languages (Libovick\u00fd et al., 2019; Zhao et al., 2020a) , linguists are more interested in the representational differences between languages. The field of computational historical linguistics, for example, examines subtle semantic and syntactic cues to infer phylogenetic relations between languages (Rama and Borin, 2015; J\u00e4ger, 2014) . Important aspects are the diachronic stability of word meaning (Pagel et al., 2007; Holman et al., 2008) and the analysis of structural properties for inferring deep language relationships (Greenhill et al., 2010; Wichmann and Saunders, 2007) .",
"cite_spans": [
{
"start": 206,
"end": 230,
"text": "(Libovick\u00fd et al., 2019;",
"ref_id": "BIBREF43"
},
{
"start": 231,
"end": 250,
"text": "Zhao et al., 2020a)",
"ref_id": null
},
{
"start": 496,
"end": 518,
"text": "(Rama and Borin, 2015;",
"ref_id": "BIBREF54"
},
{
"start": 519,
"end": 531,
"text": "J\u00e4ger, 2014)",
"ref_id": "BIBREF33"
},
{
"start": 597,
"end": 617,
"text": "(Pagel et al., 2007;",
"ref_id": "BIBREF48"
},
{
"start": 618,
"end": 638,
"text": "Holman et al., 2008)",
"ref_id": "BIBREF30"
},
{
"start": 723,
"end": 747,
"text": "(Greenhill et al., 2010;",
"ref_id": "BIBREF28"
},
{
"start": 748,
"end": 776,
"text": "Wichmann and Saunders, 2007)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, these phenomena have been approximated using hand-selected word lists and typological databases. Common ancestors for languages are typically inferred based on cases of shared word meaning and surface form overlap and it can be assumed that these core properties are also captured in large-scale cross-lingual representations to a certain extent. For example, prior work finds that phylogenetic relations between languages can be reconstructed from cross-lingual representations if the training objective optimizes monolingual semantic constraints for each language separately as in the multilingual MUSE model (Conneau et al., 2017) . MUSE is restricted to only 29 frequent languages, however. While mBERT is a powerful cross-lingual model covering an order of magnitude more languages (104), a better understanding of the type of signal captured in its representations is needed to assess its applicability as a testbed for cross-lingual or historical linguistic hypotheses. Our analysis quantifies the representational similarity across languages in mBERT and disentangles it along genetic, geographic, and structural factors.",
"cite_spans": [
{
"start": 626,
"end": 648,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In general, the urge to improve the interpretability of internal neural representations has become a major research field in recent years. Whereas dense representations of images can be projected back to pixels to facilitate visual inspection, interpreting the linguistic information captured in dense representation of languages is more complex (Alishahi et al., 2019; Conneau and Kiela, 2018) . Diagnostic classifiers , representational stability analysis (Abnar et al., 2019) and indirect visualization techniques (Belinkov and Glass, 2019) are only a few examples for newly developed probing techniques. They are used to examine whether the representations capture part-of-speech information (Zhang and Bowman, 2018) , syntactic agreement (Giulianelli et al., 2018) , speech features (Chrupa\u0142a et al., 2017) , and cognitive cues (Wehbe et al., 2014) . However, the majority of these interpretability studies focus solely on English. Krasnowska-Kiera\u015b and Wr\u00f3blewska (2019) perform a contrastive analysis of the syntactic interpretability of English and Polish representations and Eger et al. (2020) probe representations in three lower-resource languages. Cross-lingual interpretability research for multiple languages focuses on the ability to transfer representational knowledge across languages for zero-shot semantics (Pires et al., 2019) and for syntactic phenomena (Dhar and Bisazza, 2018) . In this work, we contribute to the nascent field of typological and comparative linguistic interpretability of language representations at scale (Kudugunta et al., 2019) and analyze representations for more than 100 languages.",
"cite_spans": [
{
"start": 346,
"end": 369,
"text": "(Alishahi et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 370,
"end": 394,
"text": "Conneau and Kiela, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 458,
"end": 478,
"text": "(Abnar et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 517,
"end": 543,
"text": "(Belinkov and Glass, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 696,
"end": 720,
"text": "(Zhang and Bowman, 2018)",
"ref_id": "BIBREF63"
},
{
"start": 743,
"end": 769,
"text": "(Giulianelli et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 788,
"end": 811,
"text": "(Chrupa\u0142a et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 833,
"end": 853,
"text": "(Wehbe et al., 2014)",
"ref_id": "BIBREF60"
},
{
"start": 1326,
"end": 1346,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 1375,
"end": 1399,
"text": "(Dhar and Bisazza, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1547,
"end": 1571,
"text": "(Kudugunta et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions: We probe the representations of one of the current most popular cross-lingual models (mBERT) and find that mBERT lacks information to perform well on cross-lingual semantic retrieval, but can indeed be used to accurately infer a phylogenetic language tree for 100 languages. Our results indicate that the quality of the induced tree depends on the inference algorithm and might also be the effect of several conflated signals. In order to better disentangle phylogenetic, geographic, and structural factors, we go beyond simple tree comparison and probe language distances inferred from cross-lingual representations by means of multiple regression. We find phylogenetic similarity to be the strongest and structural similarity to be the weakest signal in our experiments. The phylogenetic signal is present across all layers of mBERT. Our analysis not only contributes to a better interpretation and understanding of mBERT, but may also help explain its cross-lingual behavior in downstream tasks (Pires et al., 2019) . 1",
"cite_spans": [
{
"start": 1017,
"end": 1037,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Representational distance between two languages refers to the (averaged) differences between model representations for selected concepts in the two languages. Interpretability analyses attempt to disentangle the typological factors that influence the representational distance. In this work, we distinguish between phylogenetic, geographic and structural factors. Two languages are considered to be phylogenetically close if they descend from a common ancestor language. Geographically close languages are languages which are primarily spoken in regions with a small physical distance on Earth. Structural similarity between languages refers to shared syntactic and morphological features. For many languages, the three categories overlap, but they are not necessarily linked. For example, Spanish and Basque are geographically close, but structurally and phylogenetically quite distant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Previous approaches differ in the type of cross-lingual representations, the number of languages, and the methodology for determining representational distance and for interpreting the typological signal. Kudugunta et al. (2019) obtain representations for 102 language pairs (English \u2194 language X) using neural machine translation and then visualize the representations using dimensionality reduction. They explore the visualization qualitatively and find clusters which resemble language families. However, when zooming into the clusters, it becomes evident that a mixture of genetic and geographic factors contributes to the representational distance. For instance, Dravidian and Indo-Aryan languages overlap completely. induce bilingual vector spaces for 21 Europarl languages and quantify representational distance between languages by averaging over the pairwise similarity between word representations. They find that the differences can be better explained by geographic than by phylogenetic factors. Conversely, Rabinovich et al. (2017) analyze English translations of sentences in 17 Europarl languages and find that syntactic traces of the native language of the translator can best be explained by language genetics. Bjerva et al. (2019) use the same dataset and train language representations on the linguistic structure of the sentences. They find that the representational distance between languages can be better explained by structural similarity (obtained from dependency trees) than by language genetics. Pretrained cross-lingual models are optimized for tasks such as bilingual lexicon induction and machine translation. Even if linguistic information is not explicitly provided during training, recent interpretability research indicates that phylogenetic properties are encoded in the resulting representations. Beinborn and Choenni (2019) obtain representations for Swadesh word lists from the MUSE model (Conneau et al., 2017) which jointly optimizes monolingual and crosslingual semantic constraints. They find that hierarchical clustering over the representational distance between languages yields phylogenetically plausible language trees. Interestingly, they cannot trace the phylogenetic signal in representations from the sentence-based LASER model (Artetxe and Schwenk, 2018) which is trained to learn language-neutral representations for machine translation. Libovick\u00fd et al. (2019) analyze representations from mBERT and find that clustering over averaged representations for the 104 languages yields phylogenetically plausible language groups. They argue that mBERT is not language-neutral and that semantic phenomena are not modeled properly across languages. In our analysis, we further quantify the representational distance and disentangle it along phylogenetic, geographic, and structural factors.",
"cite_spans": [
{
"start": 205,
"end": 228,
"text": "Kudugunta et al. (2019)",
"ref_id": "BIBREF39"
},
{
"start": 1020,
"end": 1044,
"text": "Rabinovich et al. (2017)",
"ref_id": "BIBREF52"
},
{
"start": 1228,
"end": 1248,
"text": "Bjerva et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 1927,
"end": 1949,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 2279,
"end": 2306,
"text": "(Artetxe and Schwenk, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 2391,
"end": 2414,
"text": "Libovick\u00fd et al. (2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Bjerva and Augenstein (2018) train cross-lingual representations in an unsupervised way for different linguistic levels including phonology, morphology, and syntax. They use them to infer missing typological features for more than 800 languages. Malaviya et al. (2017) also infer missing features in typological databases from cross-lingual representations. Inferring such missing features can be considered a form of probing. Indeed, in contemporaneous work, Choenni and Shutova (2020) predict typological properties from representations of four different recent state-of-the-art cross-lingual encoders using probing classifiers. We do not use probing classifiers in our work because the choice of classifier and the size of its training data may affect the probing outcomes (Eger et al., 2020) . Table 1 summarizes selected interpretability approaches analyzing the typological signal in crosslingual representations. The findings for the dominant signal type vary strongly due to different choices for the representational model, the analysis unit, and the number of languages. Our work differs from previous work mainly in terms of the battery of tests probing for genetic and typological signals and the preciseness in teasing apart the different typological components.",
"cite_spans": [
{
"start": 246,
"end": 268,
"text": "Malaviya et al. (2017)",
"ref_id": "BIBREF46"
},
{
"start": 776,
"end": 795,
"text": "(Eger et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 798,
"end": 805,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In the following, we first briefly describe the two cross-lingual embedding spaces analyzed in this work, mBERT and FastText ( \u00a73.1). Then, we detail how we compute distances between languages using representations from these spaces ( \u00a73.2) and concept lists developed in historical linguistics ( \u00a73.3). Once we have language distances, we infer trees from distance matrices and compare these trees to gold standard phylogenetic trees ( \u00a73.4) to evaluate how strong a historical linguistic tree signal is contained in our cross-lingual representations ( \u00a74).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We compare two different models: mBERT is a cross-lingual model trained with a language-neutral contextualized objective, and FastText denotes static monolingual word representations that have been aligned into a joint multilingual space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Representations",
"sec_num": "3.1"
},
{
"text": "mBERT mBERT is based on the multi-layer bidirectional transformer model BERT (Devlin et al., 2019) . It is trained on the task of masked language modeling and next sentence prediction. The base model consists of 12 representational layers. mBERT is trained on the merged Wikipedias of 104 languages, with a shared word-piece vocabulary. It does not use explicit alignments across languages, thus has no mechanism to enforce that translation equivalent word pairs have similar representations. Recently, there has been a vivid debate regarding the quality of representations produced by mBERT. Pires et al. (2019) claim that it is surprisingly good at zero-shot cross-lingual transfer, and works best for structurally similar languages. Other work finds that mBERT exhibits vector space misalignment across languages and zero-shot cross-lingual transfer is improved after their suggested re-mapping. K et al. (2020) show that lexical overlap plays no big role in cross-lingual transfer for mBERT, but the depth of the network does, with deeper models having better transfer. Zhao et al. (2020b) find that mBERT lacks fine-grained cross-lingual text understanding and can be fooled by corrupt inputs produced by MT systems.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 593,
"end": 612,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF50"
},
{
"start": 1074,
"end": 1093,
"text": "Zhao et al. (2020b)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Representations",
"sec_num": "3.1"
},
{
"text": "FastText FastText (Bojanowski et al., 2017) builds static word representations on the basis of a word's characters. This allows it to induce better representations for infrequent and unknown words. We use a joint multilingually aligned vector space spanning 44 languages using the RCSLS method described in Joulin et al. (2018) and refer to it as mFastText. 2",
"cite_spans": [
{
"start": 18,
"end": 43,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 307,
"end": 327,
"text": "Joulin et al. (2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Representations",
"sec_num": "3.1"
},
{
"text": "Assume we have M languages and N concepts (illustrated in Table 4 in the appendix). Assume further that each concept is expressed as a word in each language which is represented by a d-dimensional vector. If all the vectors reside in a cross-lingually shared space, then the representational distance between two languages can be obtained by averaging the pairwise distances between all word vectors in the two languages for the N concepts. That means one computes:",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Representational Distance",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "dist(i, j) = \\frac{1}{N} \\sum_{k=1}^{N} d(v_k(i), v_k(j))",
"eq_num": "(1)"
}
],
"section": "Representational Distance",
"sec_num": "3.2"
},
{
"text": "where v k (i) and v k (j) stand for the vectors corresponding to the k-th concept for languages i and j, respectively (with words v k (i) and v k (j)). In our experiments, we use cosine distance, but d may in principle refer to any suitable distance measure, e.g., Euclidean distance or Spearman correlation. 3, 4 When the corresponding words for each concept are not available in all languages, but only in one language (e.g., English), Beinborn and Choenni (2019) instead set v k (i) to be the nearest neighbor of the English word for concept k in language i. This has the advantage that one can infer language distances without translation data in target languages. A drawback of this approach is that the relation between nearest neighbors in a vector space may not be that of similarity but of relatedness, e.g., nose is related to mouth, but it is not a synonym (meaning-equivalent).",
"cite_spans": [
{
"start": 309,
"end": 311,
"text": "3,",
"ref_id": null
},
{
"start": 312,
"end": 313,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representational Distance",
"sec_num": "3.2"
},
{
"text": "In our experiments below, words for all concepts k are available in all languages, thus we do not need to resort to nearest neighbors of the English words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational Distance",
"sec_num": "3.2"
},
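The distance of Eq. (1) is straightforward to implement. Below is a minimal Python sketch under the assumption of row-aligned concept vectors (`language_distance` is a hypothetical helper name; the toy vectors are illustrative, not actual mBERT or mFastText output):

```python
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return 1.0 - dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v)))

def language_distance(vecs_i, vecs_j):
    """Eq. (1): mean pairwise distance over the N aligned concept vectors
    of two languages; row k of both lists corresponds to the same concept.
    Cosine distance is used here, as in the paper, but any measure works."""
    assert len(vecs_i) == len(vecs_j)
    return sum(cosine_distance(u, v) for u, v in zip(vecs_i, vecs_j)) / len(vecs_i)
```

A full M x M language-distance matrix is then obtained by applying this function to every language pair; that matrix feeds the tree inference described in Section 3.4.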
{
"text": "All our experiments are based on multilingual word lists obtained from linguistic databases. NorthEuraLex 5 features word lists for 1,016 concepts in 100 languages spoken in Northern Eurasia which have been transcribed by linguists (Dellert et al., 2020) . The database is known for its high quality, but unfortunately covers only 54 of the 104 languages in mBERT. In order to analyze more languages, we additionally use PanLex 6 which contains lists for 207 concepts in more than 500 languages (Kamholz et al., 2014) . It covers 99 languages in mBERT, but the quality of the word lists is not uniform across languages. PanLex sometimes includes multiple word lists written in different scripts for the same language, e.g. for Greek. In such a case, we include all available word lists for the language in our analysis.",
"cite_spans": [
{
"start": 232,
"end": 254,
"text": "(Dellert et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 495,
"end": 517,
"text": "(Kamholz et al., 2014)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Lists",
"sec_num": "3.3"
},
{
"text": "Historical differences between languages are commonly represented in phylogenetic trees which group languages by their evolution from common ancestors. We want to examine to which extent these phylogenetic differences can explain the observed representational distance in the cross-lingual model. We calculate all pairwise representational distances between languages as in Eq. (1). From this distance matrix, we infer a language tree using two inference techniques that are widely popular in computational biology for inferring species trees: 1) The unweighted pair group method with arithmetic mean (UPGMA) (Sokal and Michener, 1958) initially assumes that each language forms an individual cluster and then successively joins the two clusters with the smallest average distance. 2) The iterative Neighbor Joining (Saitou and Nei, 1987) algorithm starts with an unstructured star-like tree and iteratively adds nodes to create subtrees.",
"cite_spans": [
{
"start": 609,
"end": 635,
"text": "(Sokal and Michener, 1958)",
"ref_id": "BIBREF57"
},
{
"start": 816,
"end": 838,
"text": "(Saitou and Nei, 1987)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
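To make the first inference technique concrete, here is a compact UPGMA sketch in plain Python; the language codes and distance values are hypothetical stand-ins for a matrix computed via Eq. (1):

```python
def upgma(dist, labels):
    """UPGMA: repeatedly merge the two clusters with the smallest average
    pairwise distance; returns the binary tree as nested tuples.
    `dist` is a full symmetric matrix given as a list of lists."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (subtree, size)
    d = {(i, j): float(dist[i][j]) for i in clusters for j in clusters if i < j}
    nxt = len(labels)
    while len(clusters) > 1:
        i, j = min(d, key=d.get)                     # closest pair of clusters
        (ti, ni), (tj, nj) = clusters.pop(i), clusters.pop(j)
        del d[(i, j)]
        for k in list(clusters):                     # size-weighted average update
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            d[(min(nxt, k), max(nxt, k))] = (ni * dik + nj * djk) / (ni + nj)
        clusters[nxt] = ((ti, tj), ni + nj)
        nxt += 1
    (tree, _), = clusters.values()
    return tree

# Hypothetical language-distance matrix for four languages.
langs = ["spa", "ita", "rus", "ukr"]
D = [[0.00, 0.10, 0.60, 0.62],
     [0.10, 0.00, 0.61, 0.63],
     [0.60, 0.61, 0.00, 0.12],
     [0.62, 0.63, 0.12, 0.00]]

print(upgma(D, langs))  # (('spa', 'ita'), ('rus', 'ukr'))
```

Neighbor Joining replaces the size-weighted average update with a rate-corrected joining criterion, which is one reason the two algorithms can yield trees of different quality from the same distance matrix.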
{
"text": "Reference tree Most automatic clustering methods produce binary trees due to computational simplifications whereas phylogenetic trees by linguistic experts are usually m-ary. To facilitate the evaluation of the inferred tree, previous work used a binary Levenshtein-based approximation (Serva and Petroni, 2008) as reference tree. This approximation provides an acceptable reference for a small subset of languages, but does not accurately reflect the more fine-grained differences for the Indo-European language family (Fortson, 2004) . As we are evaluating a much larger set of languages here, we use the more reliable reference trees compiled by linguistic experts available in Glottolog (Hammarstr\u00f6m et al., 2020) .",
"cite_spans": [
{
"start": 286,
"end": 311,
"text": "(Serva and Petroni, 2008)",
"ref_id": "BIBREF56"
},
{
"start": 520,
"end": 535,
"text": "(Fortson, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 691,
"end": 717,
"text": "(Hammarstr\u00f6m et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
{
"text": "Tree evaluation In order to compare the m-ary reference tree to the binary inferred tree, we apply a variant of quartet distance known as generalized quartet distance (Pompei et al., 2011) . This metric evaluates the quality of the whole tree by comparing subgroups of four languages (quartets) which form so-called butterfly structures. A butterfly quartet refers to a quartet in which the four languages can be structured as two pairs of languages belonging to the same subfamily. For example, the pairs Spanish/Italian and Russian/Ukrainian form a butterfly structure whereas the four languages Hindi-German- ",
"cite_spans": [
{
"start": 167,
"end": 188,
"text": "(Pompei et al., 2011)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
{
"text": "The second-order vector is \u1e7d_k(i) = ( d(v_k(i), v_n(i)) )_{n \u2208 [1,...,N]}. Then \u1e7d_k(\u2022) \u2208 R^N while v_k(\u2022) \u2208 R^d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
{
"text": "These second-order vectors can be used in Eq. (1) to replace the original vectors v k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
{
"text": "5 http://northeuralex.org/ 6 http://dev.panlex.org/db/panlex_swadesh.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
{
"text": "Armenian-Latin all belong to different subgroups which are directly connected to the root node of the Indo-European tree. We evaluate our inferred tree by calculating the number of butterfly quartets which deviate from the reference tree normalized by the number of all butterfly quartets in the reference tree. It is also possible to subject the distance matrix to other forms of clustering or dimensionality reduction techniques such as k-nearest neighbor, PCA, or t-SNE (Maaten and Hinton, 2008) . However, such flat clustering methods do not induce a tree structure and are not directly comparable to the reference trees of language families available in an online repository such as Glottolog (Hammarstr\u00f6m et al., 2020) .",
"cite_spans": [
{
"start": 473,
"end": 498,
"text": "(Maaten and Hinton, 2008)",
"ref_id": "BIBREF45"
},
{
"start": 698,
"end": 724,
"text": "(Hammarstr\u00f6m et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating language trees",
"sec_num": "3.4"
},
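The butterfly-quartet evaluation can be sketched as follows. This is an illustrative implementation, not the authors' code: trees are encoded as nested tuples with string leaves, quartet topologies are read off the four-point condition on unit-length tree path distances, and quartets that are unresolved (star-shaped) in the m-ary reference are skipped, matching the definition above:

```python
from collections import defaultdict, deque
from itertools import combinations

def adjacency(tree):
    """Adjacency list from a nested-tuple tree; leaves are strings."""
    adj, counter = defaultdict(list), [0]
    def walk(node):
        if isinstance(node, str):
            return node
        name = "#%d" % counter[0]       # internal node label
        counter[0] += 1
        for child in node:
            c = walk(child)
            adj[name].append(c)
            adj[c].append(name)
        return name
    walk(tree)
    return adj

def leaf_distances(adj):
    """All pairwise leaf-to-leaf path lengths via BFS (unit edge lengths)."""
    leaves = [n for n in adj if not n.startswith("#")]
    dist = {}
    for leaf in leaves:
        seen = {leaf: 0}
        queue = deque([leaf])
        while queue:
            cur = queue.popleft()
            for nxt in adj[cur]:
                if nxt not in seen:
                    seen[nxt] = seen[cur] + 1
                    queue.append(nxt)
        for other in leaves:
            dist[(leaf, other)] = seen[other]
    return leaves, dist

def topology(dist, a, b, c, d):
    """Butterfly split of a quartet via the four-point condition,
    or None if the quartet is unresolved (a star) in this tree."""
    splits = {
        (frozenset((a, b)), frozenset((c, d))): dist[(a, b)] + dist[(c, d)],
        (frozenset((a, c)), frozenset((b, d))): dist[(a, c)] + dist[(b, d)],
        (frozenset((a, d)), frozenset((b, c))): dist[(a, d)] + dist[(b, c)],
    }
    best = min(splits.values())
    winners = [frozenset(s) for s, v in splits.items() if v == best]
    return winners[0] if len(winners) == 1 else None

def generalized_quartet_distance(reference, inferred):
    """Fraction of reference butterfly quartets whose topology differs
    in the inferred tree; star quartets of the m-ary reference are skipped."""
    ref_leaves, ref_d = leaf_distances(adjacency(reference))
    inf_leaves, inf_d = leaf_distances(adjacency(inferred))
    assert set(ref_leaves) == set(inf_leaves)
    mismatched = total = 0
    for q in combinations(sorted(ref_leaves), 4):
        ref_top = topology(ref_d, *q)
        if ref_top is None:
            continue
        total += 1
        if topology(inf_d, *q) != ref_top:
            mismatched += 1
    return mismatched / total
```

An inferred binary tree that agrees with the reference grouping scores 0.0; one that pairs languages across subfamilies scores close to 1.0.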
{
"text": "In the following, we detail a series of probing experiments with both mBERT and FastText. To extract representations from mBERT, we feed a single word v_k(i) from a language i corresponding to concept k into mBERT and extract the corresponding representations v_k^(r)(i) in all layers r = 0, ..., 12. We are fully aware that using mBERT in a context-independent way ignores the main benefits of the model. We do so in order to leverage concept lists at large scale for a majority of the 100 languages available in mBERT. Otherwise, we would have to experiment with sentence-aligned data, which is available only for much smaller subsets (< 30) of our languages. Nevertheless, we believe that a good contextual model should also be equipped with good context-independent token representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Monolingual language models are commonly evaluated by their ability to model semantic similarity and their performance on downstream tasks. For multilingual models, a suitable evaluation task for lexical semantics is bilingual lexicon induction (BLI). The goal is to take an input word in the source language and retrieve its translation-equivalent in the target language. In a decontextualized setting, multiple targets can be considered to be a correct translation due to the polysemy of words. As the word lists only account for a single correct solution, we cast bilingual lexicon induction as a ranking task. We rank all target words in the concept list based on their representational distance to the source word in the model and evaluate this ranking using the mean reciprocal rank (MRR) as proposed in Glava\u0161 et al. (2019) . The MRR ranges from 0 to 1; a value of 1 indicates that the target is always ranked first, and a value of 1/n indicates that the target is ranked at position n on average.",
"cite_spans": [
{
"start": 810,
"end": 830,
"text": "Glava\u0161 et al. (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Semantics",
"sec_num": "4.1"
},
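As an illustration of this evaluation, the following is a minimal NumPy sketch of the MRR computation over a concept list; the function name, the toy inputs, and the nearest-neighbour ranking by cosine distance are our reading of the setup, not the authors' actual code.

```python
import numpy as np

def bli_mrr(src, tgt):
    """Mean reciprocal rank for bilingual lexicon induction.

    src, tgt: (n_concepts, dim) arrays; row k of `tgt` is the gold
    translation of row k of `src`.  Every target word in the concept
    list is a candidate, ranked by cosine distance to the source word.
    """
    s = src / np.linalg.norm(src, axis=1, keepdims=True)
    t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    dist = 1.0 - s @ t.T                      # pairwise cosine distances
    gold = np.diag(dist)                      # distance to the gold target
    # rank of the gold target among all candidates (1 = best)
    ranks = 1 + (dist < gold[:, None]).sum(axis=1)
    return float(np.mean(1.0 / ranks))
```

For identical, perfectly aligned spaces the score is 1.0; for a uniformly random ranking over n = 207 concepts the expected MRR is H_207/207, consistent with the 0.03 random baseline reported for the PanLex list.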
{
"text": "For the PanLex list of 207 words, the MRR obtained by a random baseline would be 0.03. The result of mBERT for the language pair (bos,hrv) is almost perfect with an MRR of 0.98, but for pairs of more distant languages the BLI quality is considerably lower. Overall, the average performance for mBERT (0.16) is five times better than random guessing, but consistently lower than the performance for mFastText (0.46 on average). 7 Overall, this shows that mBERT does not properly capture multilingual semantics, a finding that is echoed in some other recent works Zhao et al., 2020b) . The apparent reason lies in its naive training process, which does not exploit cross-lingual signals but merely trains on the concatenation of all languages. Nonetheless, the model performs surprisingly well in some downstream cross-lingual tasks (Pires et al., 2019) . In the following experiments, we examine whether the model instead relies on typological properties of languages.",
"cite_spans": [
{
"start": 562,
"end": 581,
"text": "Zhao et al., 2020b)",
"ref_id": "BIBREF65"
},
{
"start": 831,
"end": 851,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Semantics",
"sec_num": "4.1"
},
{
"text": "We perform tree inference using both Neighbor-Joining (NJ) and UPGMA on the concept lists from PanLex (99 languages) and NorthEuraLex (54 languages). We infer trees for all the layers of mBERT and evaluate the quality of the inferred trees as described in Section 3.4. Table 2 shows that especially the initial-middle and the final layers of mBERT yield a small distance to the gold standard trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phylogenetic signal",
"sec_num": "4.2"
},
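As a sketch of this inference step: UPGMA corresponds to average-linkage hierarchical clustering, which SciPy provides directly. The four-language distance matrix below is an invented toy example, not our actual mBERT distances.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, to_tree

# Toy representational distance matrix over four languages
langs = ["bos", "hrv", "deu", "fin"]
D = np.array([
    [0.00, 0.02, 0.60, 0.80],
    [0.02, 0.00, 0.58, 0.79],
    [0.60, 0.58, 0.00, 0.75],
    [0.80, 0.79, 0.75, 0.00],
])

# UPGMA = average linkage on the condensed (upper-triangle) distances
Z = linkage(squareform(D), method="average")

def newick(node):
    """Serialize the SciPy cluster tree into a Newick-style string."""
    if node.is_leaf():
        return langs[node.id]
    return "(" + newick(node.get_left()) + "," + newick(node.get_right()) + ")"

tree = newick(to_tree(Z))
```

On this toy matrix the two closest languages (bos, hrv) are merged first, mirroring the near-perfect BLI result for that pair.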
{
"text": "Overall, however, the strength of the phylogenetic signal varies with respect to the selected concept list and the tree inference algorithm. Interestingly, the results for the PanLex word list are better although this setup covers more languages. UPGMA yields lower distances to the gold tree for both concept lists. In comparison, the results for mFastText are considerably worse when using UPGMA (>0.5), but comparable when using NJ (around 0.32). It should be noted though that mFastText covers only 44 languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phylogenetic signal",
"sec_num": "4.2"
},
{
"text": "Word List 0 1 2 3 4 5 6 7 8 9 10 11 12 Avg. UPGMA PanLex .34 .30 .17 .18 .21 .26 .28 .20 .23 .21 .22 .23 .21 .20 NorthEuralex .43 .29 .26 .28 .30 .31 .31 .35 .34 .32 .37 .34 .32 .31 NJ PanLex .38 .31 .30 .30 .26 .31 .25 .32 .32 .32 .35 .34 .30 .30 NorthEuralex .41 .36 .35 .32 .32 .31 .31 .32 .32 .32 .32 .32 .40 .37 Table 2 : Distances between the Glottolog reference tree and the phylogenetic tree inferred from mBERT representations from the 12 different layers and the average of all layers. Generalized quarted distances range between 0 and 1, lower distances are better.",
"cite_spans": [
{
"start": 63,
"end": 325,
"text": "PanLex .34 .30 .17 .18 .21 .26 .28 .20 .23 .21 .22 .23 .21 .20 NorthEuralex .43 .29 .26 .28 .30 .31 .31 .35 .34 .32 .37 .34 .32 .31 NJ PanLex .38 .31 .30 .30 .26 .31 .25 .32 .32 .32 .35 .34 .30 .30 NorthEuralex .41 .36 .35 .32 .32 .31 .31 .32 .32 .32 .32 .32 .40",
"ref_id": null
}
],
"ref_spans": [
{
"start": 5,
"end": 56,
"text": "List 0 1 2 3 4 5 6 7 8 9 10 11 12 Avg.",
"ref_id": "TABREF1"
},
{
"start": 330,
"end": 337,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "In order to qualitatively analyze representational distances, we subject the distance matrix from the second layer (which yields the best scores according to UPGMA) to the t-sne algorithm as in previous work (Libovick\u1ef3 et al., 2019; Kudugunta et al., 2019) . Figure 1 : The t-sne plot for the Swadesh list distances from layer 2. The family codes are from the ASJP database (Wichmann et al., 2020) and are explained in Table 5 in appendix.",
"cite_spans": [
{
"start": 208,
"end": 232,
"text": "(Libovick\u1ef3 et al., 2019;",
"ref_id": "BIBREF43"
},
{
"start": 233,
"end": 256,
"text": "Kudugunta et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 374,
"end": 397,
"text": "(Wichmann et al., 2020)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [
{
"start": 259,
"end": 267,
"text": "Figure 1",
"ref_id": null
},
{
"start": 419,
"end": 426,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visual exploration",
"sec_num": null
},
{
"text": "In order to better disentangle the typological signal, we examine additional categories established by . We determine the explainable predictors for the representational distances between languages using matrix regression (Legendre et al., 1994) . We regress d ij = dist(i, j) computed based on Eq. (1) on the following language distances:",
"cite_spans": [
{
"start": 222,
"end": 245,
"text": "(Legendre et al., 1994)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other typological signals",
"sec_num": "4.3"
},
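Eq. (1) itself is defined earlier in the paper; per footnote 3, the representational distance between two languages is the average of the per-concept distances (not the distance of the averaged representations). A minimal sketch under that reading, where the function name and the use of cosine distance are our illustrative assumptions:

```python
import numpy as np

def lang_dist(V_i, V_j):
    """Representational distance between languages i and j.

    V_i, V_j: (n_concepts, dim) arrays whose row k is v_k for the
    respective language.  Per-concept cosine distances are averaged
    over the concept list (average of distances, cf. footnote 3).
    """
    num = (V_i * V_j).sum(axis=1)
    den = np.linalg.norm(V_i, axis=1) * np.linalg.norm(V_j, axis=1)
    return float(np.mean(1.0 - num / den))
```

Applying this to every language pair yields the distance matrix d_ij that the regression below tries to explain.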
{
"text": "\u2022 Phylogenetic distance (gen ij ) between two languages computed from Glottolog reference trees as the ratio between the number of non-shared branches divided by the number of branches from root to the tip. \u2022 Geographical distance (geo ij ) between two points on Earth approximated through great circle distance (Department, 1997) . For detailed description of the distances, see . 10 In our experiments, we use the precomputed distance matrices provided along with the lang2vec Python package 11 . We estimate the coefficients as follows:",
"cite_spans": [
{
"start": 312,
"end": 330,
"text": "(Department, 1997)",
"ref_id": null
},
{
"start": 382,
"end": 384,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other typological signals",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d ij = c + \u03b1 \u2022 gen ij + \u03b2 \u2022 geo ij + \u03b3 \u2022 struc ij + \u03b7 \u2022 phon ij + \u03bb \u2022 inv ij",
"eq_num": "(2)"
}
],
"section": "Other typological signals",
"sec_num": "4.3"
},
{
"text": "Since the entries in the distance matrices are non-independent, we use matrix regression analysis (Legendre et al., 1994) as implemented in the R package ecodist (Goslee et al., 2007) for computing the regression coefficients. The significance of the regression coefficients is also tested using a Mantel test where the matrix columns are permuted 10 5 times. The significant regression coefficients (p < 0.001) and their sizes are shown in Figure 2 . The phylogenetic signal in mBERT is stronger than the geographical signal and this is especially true for PanLex where the geographical signal is never significant. The genetic signal is more prominent in the initial layers. It then decreases over layers, but re-emerges in the final layer. As we use isolated concepts in our setup, we did not expect structural features to be a significant predictor at all and are surprised about the PanLex results. We further hypothesized that phonological distances might be a weak but significant predictor due to related words (both cognates and borrowings) and shared scripts (also showing up in the t-sne plot in Figure 1 ) which is not supported in our experiments.",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "(Legendre et al., 1994)",
"ref_id": "BIBREF40"
},
{
"start": 162,
"end": 183,
"text": "(Goslee et al., 2007)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 441,
"end": 449,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1107,
"end": 1115,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Other typological signals",
"sec_num": "4.3"
},
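To make the permutation idea concrete, here is a minimal single-predictor Mantel test in NumPy. The full analysis uses multiple regression on distance matrices via ecodist; this simplified correlation-only version (and its function name) is merely illustrative.

```python
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """Simple Mantel test: Pearson correlation between the upper
    triangles of two distance matrices, with a p-value obtained by
    jointly permuting the rows and columns of D2."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    n = D1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(D1[iu], D2[p][:, p][iu])[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    # add-one smoothing so the p-value is never exactly zero
    return r_obs, (count + 1) / (n_perm + 1)
```

Permuting rows and columns together preserves the dependence structure within each matrix, which is why the test is valid for non-independent distance entries.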
{
"text": "Overall, we find that the R 2 values (see Table 3 ) from the regression analyses are significant across all the layers for both the datasets, but they are not very large. This suggests that there exist other factors explaining the representational distances that our equation does not account for or that the linear model is not fully appropriate. In the case of the mFastText model, the R 2 value is at about 0.24 but none of the regression coefficients are significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Other typological signals",
"sec_num": "4.3"
},
{
"text": "We calculate cross-lingual variability in mBERT by considering the pairwise cosine similarities between representations in languages i and j for all concepts k in PanLex: PanLex .39 .39 .41 .37 .34 .28 .27 .29 .29 .25 .23 .25 .36 .37 NorthEuralex .39 .45 .48 .55 .54 .50 .46 .43 .40 .37 .36 .30 .40 .48 Table 3 : R 2 values from the regression analyses for each layer.",
"cite_spans": [
{
"start": 171,
"end": 298,
"text": "PanLex .39 .39 .41 .37 .34 .28 .27 .29 .29 .25 .23 .25 .36 .37 NorthEuralex .39 .45 .48 .55 .54 .50 .46 .43 .40 .37 .36 .30 .40",
"ref_id": null
}
],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Meaning Stability",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c ij (k) = cossim(v k (i), v k (j))",
"eq_num": "(3)"
}
],
"section": "Meaning Stability",
"sec_num": "4.4"
},
{
"text": "and then analyzing how these cosine similarities vary. We conjecture that variability of mBERT representations of a concept k across languages is indicative of its diachronic stability, where diachronic meaning stability measures the resistance of a concept to lexical replacement. To capture cross-lingual variability, we calculate statistics s on the c ij (k) values. As statistics, we consider the mean and standard deviation values of c ij (k) for each fixed concept k. Thus, each concept k receives a 'variability score' s(k) given by s applied to the c ij (k) values: s(k) = s (c 12 (k), c 13 (k), . . . , c 1M (k), . . . ), where M is the number of languages involved. Note that the standard deviation statistic (intuitively) captures the notion of cross-lingual variability while the mean statistic measures the average degree of similarity of the representations of words for a target concept across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning Stability",
"sec_num": "4.4"
},
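The two statistics can be sketched as follows; the tensor layout and the function name are our illustrative choices.

```python
import numpy as np

def variability_scores(reps):
    """Cross-lingual variability per concept.

    reps: (n_languages, n_concepts, dim) array of word vectors,
    reps[i, k] = v_k(i).  Returns, for each concept k, the standard
    deviation and the mean of the pairwise cosine similarities
    c_ij(k) over all language pairs (i, j), i < j.
    """
    norm = reps / np.linalg.norm(reps, axis=-1, keepdims=True)
    M = reps.shape[0]
    sims = np.einsum("ikd,jkd->ijk", norm, norm)   # sims[i, j, k] = c_ij(k)
    iu = np.triu_indices(M, k=1)                    # unordered pairs i < j
    pair_sims = sims[iu]                            # (n_pairs, n_concepts)
    return pair_sims.std(axis=0), pair_sims.mean(axis=0)
```

A concept whose representations are identical across languages gets standard deviation 0 and mean similarity 1; a concept rendered very differently across languages gets a high standard deviation.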
{
"text": "We finally correlate the statistics with diachronic stability scores for concepts k extracted from the following lists: (i) ranked list of Holman et al. (2008) ; (ii) WOLD's (World Loanword Database; Haspelmath and Tadmor (2009) ) meaning scores for age, simplicity, and borrowing; (iii) 100-item Leipzig-Jakarta list (Tadmor, 2009) and its replication (LJ-replica) based on later versions of the WOLD scores (Dellert and Buch, 2018) ; (iv) Swadesh list ranked by word replacement rates computed from a phylogenetic analysis (Pagel et al., 2007) ; (v) n-gram entropy based stability measure (Rama and Borin, 2014) ; (vi) inf, a information-theoretic weighted string similarity (Dellert and Buch, 2018) ; (vii) and a Levenshtein distance based measure (Petroni and Serva, 2011) . All the ranked lists are drawn from Dellert and Buch (2018) . Figure 3a shows the correlation between s(k) and diachronic stability when the statistic is the mean. It can be seen that Layers 3-7 and 10-11 correlate positively with Pagel et al. (2007) and inf. This suggests that the mean statistic measures susceptibility to replacement as opposed to stability since in both the lists, the higher the score, the lower is a concept's stability. Layer 12 correlates with 5 of the 9 lists and shows positive correlation with both WOLD's age and simplicity scores and LJ-replica. In contrast, layer 12 correlates negatively with both Pagel et al. (2007) and inf, suggesting that the measure is inconsistent in the layer. The mean statistic is also not consistent across the layers and shows inverse correlations with inf and Pagel et al. (2007) in layer 12 compared to the other layers.",
"cite_spans": [
{
"start": 139,
"end": 159,
"text": "Holman et al. (2008)",
"ref_id": "BIBREF30"
},
{
"start": 215,
"end": 228,
"text": "Tadmor (2009)",
"ref_id": "BIBREF58"
},
{
"start": 318,
"end": 332,
"text": "(Tadmor, 2009)",
"ref_id": "BIBREF58"
},
{
"start": 409,
"end": 433,
"text": "(Dellert and Buch, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 525,
"end": 545,
"text": "(Pagel et al., 2007)",
"ref_id": "BIBREF48"
},
{
"start": 591,
"end": 613,
"text": "(Rama and Borin, 2014)",
"ref_id": "BIBREF53"
},
{
"start": 677,
"end": 701,
"text": "(Dellert and Buch, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 751,
"end": 776,
"text": "(Petroni and Serva, 2011)",
"ref_id": "BIBREF49"
},
{
"start": 815,
"end": 838,
"text": "Dellert and Buch (2018)",
"ref_id": "BIBREF15"
},
{
"start": 1010,
"end": 1029,
"text": "Pagel et al. (2007)",
"ref_id": "BIBREF48"
},
{
"start": 1409,
"end": 1428,
"text": "Pagel et al. (2007)",
"ref_id": "BIBREF48"
},
{
"start": 1600,
"end": 1619,
"text": "Pagel et al. (2007)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [
{
"start": 841,
"end": 850,
"text": "Figure 3a",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Meaning Stability",
"sec_num": "4.4"
},
{
"text": "The correlations from standard deviation statistic is given in Figure 3b . Here, layers 0-5 show correlations with 8 of the 9 ranked lists. In layers 0-5, the upper half of the figure has positive correlations with word lists ranked by decreasing order of stability (LJ-replica, Petroni and Serva (2011) and Holman et al. (2008) ) whereas the lower half of the figure correlates negatively with the rankings of Pagel et al. (2007) , inf and Rama and Borin (2014) where a lower score for a meaning indicates higher stability. Layers 0-7 & 10-12 correlate positively with WOLD indices such as age and simplicity. There is a negative correlation with inf and a positive correlation with the measure of Petroni and Serva (2011) across all the layers. The standard deviation statistic is consistent across all the layers in terms of correlations against the 9 lists.",
"cite_spans": [
{
"start": 279,
"end": 303,
"text": "Petroni and Serva (2011)",
"ref_id": "BIBREF49"
},
{
"start": 308,
"end": 328,
"text": "Holman et al. (2008)",
"ref_id": "BIBREF30"
},
{
"start": 411,
"end": 430,
"text": "Pagel et al. (2007)",
"ref_id": "BIBREF48"
},
{
"start": 441,
"end": 462,
"text": "Rama and Borin (2014)",
"ref_id": "BIBREF53"
},
{
"start": 699,
"end": 723,
"text": "Petroni and Serva (2011)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 63,
"end": 72,
"text": "Figure 3b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Meaning Stability",
"sec_num": "4.4"
},
{
"text": "We conclude from this experiment that cross-lingual variability of representations in mBERT, as measured by standard deviation, 12 indeed correlates with diachronic (cross-temporal) stability as given by proposed historical linguistic indicators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning Stability",
"sec_num": "4.4"
},
{
"text": "We applied a series of tests for probing mBERT for typological signals. While the language trees inferred from mBERT representations are sometimes close to the reference trees, they may confound multiple factors. A more-fined grained investigation of t-sne plots followed by matrix regression analyses suggests that representational distances correlate most with phylogenetic and geographical distances between languages. Further, the rankings from cross-lingual stability scores correlate significantly with meaning lists for items supposed to be resistant to cross-temporal lexical replacement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "Our results contribute to the recent discourses on interpretability and introspection of black-box NLP representations Kudugunta et al., 2019; Jacovi and Goldberg, 2020) . In our case, we asked how mBERT perceives of the similarity of two languages and related this to phylogenetic, geographic and structural factors. In future work, we aim to use our inferred similarities to predict transfer behavior in downstream tasks between specific language pairs.",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "Kudugunta et al., 2019;",
"ref_id": "BIBREF39"
},
{
"start": 143,
"end": 169,
"text": "Jacovi and Goldberg, 2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "Finally, we strongly caution against using our conclusions as support for hypotheses relating semantics and language phylogeny (e.g., the Sapir-Whorf hypothesis). Our results for bilingual lexicon induction indicate that mBERT representations are only mildly semantic cross-lingually which corroborates similar findings in related work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "Our code and data are available from https://github.com/PhyloStar/mBertTypology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://fasttext.cc/docs/en/aligned-vectors.html 3 Note that we compute the average of the distances, while it is also possible to compute the distance of the average representations(Libovick\u1ef3 et al., 2019).4 An alternative to this direct comparison of the word vectors is a 'second-order' encoding where the representationv k (i) for a word is determined by the distances of its vector v k (i) to the vectors for the N concepts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only for 12 language pairs, the MRR is higher for mBERT than for mFastText",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://wals.info/ 9 http://test.terraling.com/groups/7 10 http://www.cs.cmu.edu/\u02dcdmortens/uriel.html 11 https://github.com/antonisa/lang2vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Like the mean, other statistics, such as minimum and maximum, did also not exhibit significant correlations. We found significant correlations only for the standard deviation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The last author has been funded by the HMWK (Hessisches Ministerium f\u00fcr Wissenschaft und Kunst) as part of structural location promotion for TU Darmstadt in the context of the Hessian excellence cluster initiative \"Content Analytics for the Social Good\" (CA-SG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Blackbox meets blackbox: Representational similarity and stability analysis of neural language models and brains",
"authors": [
{
"first": "Samira",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Rochelle",
"middle": [],
"last": "Choenni",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACL-Workshop on Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "191--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samira Abnar, Lisa Beinborn, Rochelle Choenni, and Willem Zuidema. 2019. Blackbox meets blackbox: Rep- resentational similarity and stability analysis of neural language models and brains. In Proceedings of the ACL-Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 191-203.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop",
"authors": [
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2019,
"venue": "Natural Language Engineering",
"volume": "25",
"issue": "4",
"pages": "543--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afra Alishahi, Grzegorz Chrupa\u0142a, and Tal Linzen. 2019. Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop. Natural Language Engineering, 25(4):543-557.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.10464"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe and Holger Schwenk. 2018. Massively Multilingual Sentence Embeddings for Zero-Shot Cross- Lingual Transfer and Beyond. arXiv e-prints, page arXiv:1812.10464.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In ACL, pages 451-462.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic drift in multilingual representations",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Rochelle",
"middle": [],
"last": "Choenni",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.10820"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn and Rochelle Choenni. 2019. Semantic drift in multilingual representations. arXiv preprint arXiv:1904.10820.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Analysis methods in neural language processing: A survey",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "49--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transac- tions of the Association for Computational Linguistics, 7:49-72.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From phonology to syntax: Unsupervised linguistic typology at different levels with language embeddings",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "907--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva and Isabelle Augenstein. 2018. From phonology to syntax: Unsupervised linguistic typology at different levels with language embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 907-916.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What do language representations really represent? Computational Linguistics",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "\u00d6stling",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Han"
],
"last": "Veiga",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "381--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Robert\u00d6stling, Maria Han Veiga, J\u00f6rg Tiedemann, and Isabelle Augenstein. 2019. What do language representations really represent? Computational Linguistics, pages 381-389.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "TACL",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135-146.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multilingual alignment of contextual word representations",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In International Conference on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What does it mean to be language-agnostic? probing multilingual sentence encoders for typological properties",
"authors": [
{
"first": "Rochelle",
"middle": [],
"last": "Choenni",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rochelle Choenni and Ekaterina Shutova. 2020. What does it mean to be language-agnostic? probing multilingual sentence encoders for typological properties.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Representations of language in a model of visually grounded speech signal",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "613--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 613-622, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SentEval: An evaluation toolkit for universal sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670-680, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A new approach to concept basicness and stability as a window to the robustness of concept list rankings",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Dellert",
"suffix": ""
},
{
"first": "Armin",
"middle": [],
"last": "Buch",
"suffix": ""
}
],
"year": 2018,
"venue": "Language Dynamics and Change",
"volume": "8",
"issue": "2",
"pages": "157--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Dellert and Armin Buch. 2018. A new approach to concept basicness and stability as a window to the robustness of concept list rankings. Language Dynamics and Change, 8(2):157-181.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "NorthEuraLex: a wide-coverage lexical database of Northern Eurasia",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Dellert",
"suffix": ""
},
{
"first": "Thora",
"middle": [],
"last": "Daneyko",
"suffix": ""
},
{
"first": "Alla",
"middle": [],
"last": "M\u00fcnch",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Ladygina",
"suffix": ""
},
{
"first": "Armin",
"middle": [],
"last": "Buch",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Clarius",
"suffix": ""
},
{
"first": "Ilja",
"middle": [],
"last": "Grigorjew",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Balabel",
"suffix": ""
},
{
"first": "Hizniye",
"middle": [
"Isabella"
],
"last": "Boga",
"suffix": ""
},
{
"first": "Zalina",
"middle": [],
"last": "Baysarova",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "M\u00fchlenbernd",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Wahle",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2020,
"venue": "Lang. Resour. Evaluation",
"volume": "54",
"issue": "1",
"pages": "273--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Dellert, Thora Daneyko, Alla M\u00fcnch, Alina Ladygina, Armin Buch, Natalie Clarius, Ilja Grigorjew, Mo- hamed Balabel, Hizniye Isabella Boga, Zalina Baysarova, Roland M\u00fchlenbernd, Johannes Wahle, and Gerhard J\u00e4ger. 2020. NorthEuraLex: a wide-coverage lexical database of Northern Eurasia. Lang. Resour. Evaluation, 54(1):273-301.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Admiralty Manual of Navigation: BR 45(1)",
"authors": [],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Great Britain. Navy Department. 1997. Admiralty Manual of Navigation: BR 45(1). Number Bd. 1 in BR Series. Stationery Office.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL, pages 4171-4186, June.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Does syntactic knowledge in multilingual language models transfer across languages?",
"authors": [
{
"first": "Prajit",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "374--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prajit Dhar and Arianna Bisazza. 2018. Does syntactic knowledge in multilingual language models transfer across languages? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 374-377, Brussels, Belgium, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the linearity of semantic change: Investigating meaning variation via dynamic graph models",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Mehler",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "52--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Eger and Alexander Mehler. 2016. On the linearity of semantic change: Investigating meaning variation via dynamic graph models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 52-58, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Language classification from bilingual word embedding graphs",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Armin",
"middle": [],
"last": "Hoenen",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Mehler",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "3507--3518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Eger, Armin Hoenen, and Alexander Mehler. 2016. Language classification from bilingual word embed- ding graphs. In COLING, pages 3507-3518.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "How to probe sentence embeddings in lowresource languages: On structural design choices for probing task evaluation",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "CONLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2020. How to probe sentence embeddings in low- resource languages: On structural design choices for probing task evaluation. In CONLL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual corre- lation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471, Gothenburg, Sweden, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Indo-European language and culture: an introduction",
"authors": [
{
"first": "Benjamin",
"middle": [
"F."
],
"last": "Fortson",
"suffix": "IV"
}
],
"year": 2004,
"venue": "Blackwell Textbooks in Linguistics",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin F. Fortson, IV. 2004. Indo-European language and culture: an introduction, volume 19 of Blackwell Textbooks in Linguistics. Blackwell, Oxford.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Mohnert",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240-248, Brussels, Belgium, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "710--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Robert Litschko, Sebastian Ruder, and Ivan Vuli\u0107. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710-721, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The ecodist package for dissimilarity-based analysis of ecological data",
"authors": [
{
"first": "Sarah",
"middle": [
"C"
],
"last": "Goslee",
"suffix": ""
},
{
"first": "Dean",
"middle": [
"L"
],
"last": "Urban",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Statistical Software",
"volume": "22",
"issue": "7",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah C Goslee, Dean L Urban, et al. 2007. The ecodist package for dissimilarity-based analysis of ecological data. Journal of Statistical Software, 22(7):1-19.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The shape and tempo of language evolution",
"authors": [
{
"first": "Simon",
"middle": [
"J"
],
"last": "Greenhill",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Meade",
"suffix": ""
},
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Royal Society B: Biological Sciences",
"volume": "277",
"issue": "",
"pages": "2443--2450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon J Greenhill, Quentin D Atkinson, Andrew Meade, and Russell D Gray. 2010. The shape and tempo of language evolution. Proceedings of the Royal Society B: Biological Sciences, 277(1693):2443-2450.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Explorations in automated language classification",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Holman",
"suffix": ""
},
{
"first": "S\u00f8ren",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "Cecil",
"middle": [
"H"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Viveka",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Dik",
"middle": [],
"last": "Bakker",
"suffix": ""
}
],
"year": 2008,
"venue": "Folia Linguistica",
"volume": "42",
"issue": "3-4",
"pages": "331--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W Holman, S\u00f8ren Wichmann, Cecil H Brown, Viveka Velupillai, Andr\u00e9 M\u00fcller, and Dik Bakker. 2008. Explorations in automated language classification. Folia Linguistica, 42(3-4):331-354.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "907--926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness?",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness?",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Phylogenetic inference from word lists using weighted alignment with empirically determined weights",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2014,
"venue": "Language Dynamics and Change",
"volume": "",
"issue": "",
"pages": "155--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2014. Phylogenetic inference from word lists using weighted alignment with empirically deter- mined weights. In Language Dynamics and Change, pages 155-204. Brill.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Cross-lingual ability of multilingual bert: An empirical study",
"authors": [
{
"first": "K",
"middle": [],
"last": "Karthikeyan",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In International Conference on Learning Representations.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Panlex: Building a resource for panlingual lexical translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kamholz",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Pool",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"M"
],
"last": "Colowick",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "3145--3150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kamholz, Jonathan Pool, and Susan M Colowick. 2014. Panlex: Building a resource for panlingual lexical translation. In LREC, pages 3145-3150.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Inducing crosslingual distributed representations of words",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Binod",
"middle": [],
"last": "Bhattarai",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1459--1474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012, pages 1459-1474, Mumbai, India, December. The COLING 2012 Organizing Committee.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Empirical linguistic study of sentence embeddings",
"authors": [
{
"first": "Katarzyna",
"middle": [],
"last": "Krasnowska-Kiera\u015b",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Wr\u00f3blewska",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "5729--5739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katarzyna Krasnowska-Kiera\u015b and Alina Wr\u00f3blewska. 2019. Empirical linguistic study of sentence embeddings. In ACL, pages 5729-5739.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Investigating multilingual NMT representations at scale",
"authors": [
{
"first": "Sneha",
"middle": [],
"last": "Kudugunta",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1565--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sneha Kudugunta, Ankur Bapna, Isaac Caswell, and Orhan Firat. 2019. Investigating multilingual NMT represen- tations at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1565- 1575, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Modeling brain evolution from behavior: a permutational regression approach",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Legendre",
"suffix": ""
},
{
"first": "Fran\u00e7ois-Joseph",
"middle": [],
"last": "Lapointe",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Casgrain",
"suffix": ""
}
],
"year": 1994,
"venue": "Evolution",
"volume": "48",
"issue": "5",
"pages": "1487--1499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Legendre, Fran\u00e7ois-Joseph Lapointe, and Philippe Casgrain. 1994. Modeling brain evolution from behav- ior: a permutational regression approach. Evolution, 48(5):1487-1499.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Xglue: A new benchmark dataset for cross-lingual pre-training",
"authors": [
{
"first": "Yaobo",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fenfei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Sining",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Taroon",
"middle": [],
"last": "Bharti",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Jiun-Hung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Winnie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shuguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. Xglue: A new benchmark dataset for cross-lingual pre-training, understanding and generation. arXiv, abs/2004.01401.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "How language-neutral is multilingual bert? arXiv preprint",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u1ef3",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03310"
]
},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Libovick\u1ef3, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual bert? arXiv preprint arXiv:1911.03310.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Kairis",
"suffix": ""
},
{
"first": "Carlisle",
"middle": [],
"last": "Turner",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Learning language representations for typology prediction",
"authors": [
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2529--2535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2529-2535.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "PHOIBLE 2.0. Max Planck Institute for the Science of Human History",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Moran and Daniel McCloy, editors. 2019. PHOIBLE 2.0. Max Planck Institute for the Science of Human History, Jena.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Frequency of word-use predicts rates of lexical evolution throughout indo-european history",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Pagel",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Meade",
"suffix": ""
}
],
"year": 2007,
"venue": "Nature",
"volume": "449",
"issue": "7163",
"pages": "717--720",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Pagel, Quentin D Atkinson, and Andrew Meade. 2007. Frequency of word-use predicts rates of lexical evolution throughout indo-european history. Nature, 449(7163):717-720.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Automated word stability and language phylogeny",
"authors": [
{
"first": "Filippo",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Serva",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Quantitative Linguistics",
"volume": "18",
"issue": "1",
"pages": "53--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filippo Petroni and Maurizio Serva. 2011. Automated word stability and language phylogeny. Journal of Quanti- tative Linguistics, 18(1):53-62.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "On the accuracy of language trees",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Pompei",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Loreto",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Tria",
"suffix": ""
}
],
"year": 2011,
"venue": "PloS one",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Pompei, Vittorio Loreto, and Francesca Tria. 2011. On the accuracy of language trees. PloS one, 6(6).",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Found in translation: Reconstructing phylogenetic language trees from translations",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "530--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich, Noam Ordan, and Shuly Wintner. 2017. Found in translation: Reconstructing phylogenetic lan- guage trees from translations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 530-540.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "N-gram approaches to the historical dynamics of basic vocabulary",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Quantitative Linguistics",
"volume": "21",
"issue": "1",
"pages": "50--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taraka Rama and Lars Borin. 2014. N-gram approaches to the historical dynamics of basic vocabulary. Journal of Quantitative Linguistics, 21(1):50-64.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Comparative evaluation of string similarity measures for automatic language classification",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
}
],
"year": 2015,
"venue": "Sequences in Language and Text",
"volume": "",
"issue": "",
"pages": "203--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taraka Rama and Lars Borin. 2015. Comparative evaluation of string similarity measures for automatic language classification. In J\u00e1n Ma\u010dutek and George K. Mikros, editors, Sequences in Language and Text, pages 203-231. Walter de Gruyter.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "The neighbor-joining method: a new method for reconstructing phylogenetic trees",
"authors": [
{
"first": "Naruya",
"middle": [],
"last": "Saitou",
"suffix": ""
},
{
"first": "Masatoshi",
"middle": [],
"last": "Nei",
"suffix": ""
}
],
"year": 1987,
"venue": "Molecular biology and evolution",
"volume": "4",
"issue": "4",
"pages": "406--425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naruya Saitou and Masatoshi Nei. 1987. The neighbor-joining method: a new method for reconstructing phylo- genetic trees. Molecular biology and evolution, 4(4):406-425.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Indo-european languages tree by levenshtein distance",
"authors": [
{
"first": "Maurizio",
"middle": [],
"last": "Serva",
"suffix": ""
},
{
"first": "Filippo",
"middle": [],
"last": "Petroni",
"suffix": ""
}
],
"year": 2008,
"venue": "Europhysics Letters)",
"volume": "81",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maurizio Serva and Filippo Petroni. 2008. Indo-european languages tree by levenshtein distance. EPL (Euro- physics Letters), 81(6):68005.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "A statistical method for evaluating systematic relationships",
"authors": [
{
"first": "R",
"middle": [
"R"
],
"last": "Sokal",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Michener",
"suffix": ""
}
],
"year": 1958,
"venue": "",
"volume": "38",
"issue": "",
"pages": "1409--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. R. Sokal and C. D. Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38:1409-1438.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Loanwords in the world's languages. findings and results",
"authors": [
{
"first": "Uri",
"middle": [],
"last": "Tadmor",
"suffix": ""
}
],
"year": 2009,
"venue": "Loanwords in the world's languages. A comparative handbook",
"volume": "",
"issue": "",
"pages": "55--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uri Tadmor. 2009. Loanwords in the world's languages. findings and results. In Martin Haspelmath and Uri Tadmor, editors, Loanwords in the world's languages. A comparative handbook, pages 55-75. de Gruyter, Berlin and New York.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Crosslingual alignment vs joint training: A comparative study and a simple unified framework",
"authors": [
{
"first": "Zirui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiateng",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime G. Carbonell. 2020. Cross- lingual alignment vs joint training: A comparative study and a simple unified framework. In International Conference on Learning Representations.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Aligning context-based statistical models of language with brain activity during reading",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "233--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Wehbe, Ashish Vaswani, Kevin Knight, and Tom Mitchell. 2014. Aligning context-based statistical models of language with brain activity during reading. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 233-243, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "How to use typological databases in historical linguistic research",
"authors": [
{
"first": "S\u00f8ren",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "Arpiar",
"middle": [],
"last": "Saunders",
"suffix": ""
}
],
"year": 2007,
"venue": "Diachronica",
"volume": "24",
"issue": "2",
"pages": "373--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f8ren Wichmann and Arpiar Saunders. 2007. How to use typological databases in historical linguistic research. Diachronica, 24(2):373-404.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "The ASJP database (version 19). Jena: Max Planck Institute for the Science of Human History",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "S\u00f8ren Wichmann",
"suffix": ""
},
{
"first": "Cecil H",
"middle": [],
"last": "Holman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f8ren Wichmann, Eric W Holman, and Cecil H Brown. 2020. The ASJP database (version 19). Jena: Max Planck Institute for the Science of Human History.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis",
"authors": [
{
"first": "Kelly",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "359--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 359-361.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Johannes Bjerva, and Isabelle Augenstein. 2020a. Inducing language-agnostic multilingual representations",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2020a. Inducing language-agnostic multilin- gual representations.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Goran Glava\u0161, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020b. On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The visualization inFigure 1clearly shows a mix of phylogenetic and other clusters. Instances of phylogenetic clusters include separation of Germanic languages (excluding English which is placed apart) and Romance languages in the lower left. In contrast, the three Dravidian languages (Tamil, Malayalam,Telugu) are placed on the right most part of the plot together with Hindi and Bengali (Indo-European languages), illustrating more of a geographical similarity. The Slavic languages show up in two clusters: 1) Western Slavic Languages (Polish, Czech, and Slovak in the lower half) 2) Eastern Slavic languages such as Russian and Ukranian together with Turkic languages such as Azeri and Kazakh written in Cyrllic script. At the same time, the different word lists of Azeri (written in different scripts) are placed together, suggesting that mBERT representations are also script-agnostic. Uralic languages such as Finnish and Estonian are closer to the other Baltic languages which are not clustered together with the other Slavic languages. These clusters cannot be sufficiently explained solely by phylogenetic properties.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "\u2022 Structural distance (struc ij ): Cosine distance computed over averaged syntactic features from WALS 8 , SSWL 9 , and mini-grammars parsed from Ethnologue (Lewis, 2009). \u2022 Phonological distance (phon ij ): Cosine distance computed over averaged phonological features available in WALS and Ethnologue. \u2022 Phoneme Inventory distance (inv ij ): Cosine distance computed between binary feature vectors as given in the PHOIBLE database (Moran and McCloy, 2019) which consist of features such as presence or absence of retroflex sounds in a language.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "The coefficient values (\u03b1, \u03b2, \u03b3) from Eq. (2) for each mBERT layer.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "Correlation between variability of cross-lingual representations from mBERT and diachronic stability lists (significant at p < .01).",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"text": "Summary of related work on typological interpretability of crosslingual representations.",
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}