{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:44.715047Z"
},
"title": "On the Choice of Auxiliary Languages for Improved Sequence Tagging",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Lange",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work showed that embeddings from related languages can improve the performance of sequence tagging, even for monolingual models. In this analysis paper, we investigate whether the best auxiliary language can be predicted based on language distances and show that the most related language is not always the best auxiliary language. Further, we show that attention-based meta-embeddings can effectively combine pre-trained embeddings from different languages for sequence tagging and set new state-of-the-art results for part-of-speech tagging in five languages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work showed that embeddings from related languages can improve the performance of sequence tagging, even for monolingual models. In this analysis paper, we investigate whether the best auxiliary language can be predicted based on language distances and show that the most related language is not always the best auxiliary language. Further, we show that attention-based meta-embeddings can effectively combine pre-trained embeddings from different languages for sequence tagging and set new state-of-the-art results for part-of-speech tagging in five languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "State-of-the-art methods for sequence tagging tasks, such as named entity recognition (NER) and part-of-speech (POS) tagging, exploit embeddings as input representation. Recent work suggested to include embeddings trained on related languages as auxiliary embeddings to improve model performance: Catalan and Portuguese embeddings, for instance, help NER models on Spanish-English code-switching data (Winata et al., 2019a) . In this paper, we analyze whether auxiliary embeddings should be chosen from related languages, or if embeddings from more distant languages could also help.",
"cite_spans": [
{
"start": 401,
"end": 423,
"text": "(Winata et al., 2019a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this, we revisit current language distance measures (Gamallo et al., 2017) and adapt them to the embeddings and training data used in our experiments. We investigate the question, whether we can predict the best auxiliary language based on those language distance measures. Our results suggest that no strong correlation exists between language distance and performance and that even less related languages can be a good choice as auxiliary languages.",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "(Gamallo et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our experiments, we explore both available monolingual and multilingual pre-trained byte-pair encoding (Heinzerling and Strube, 2018) and FLAIR embeddings (Akbik et al., 2018) . For combining monolingual subword embeddings from different languages, we investigate two different methods: the concatenation of embeddings and the use of attention-based meta-embeddings (Kiela et al., 2018; Winata et al., 2019a) .",
"cite_spans": [
{
"start": 106,
"end": 136,
"text": "(Heinzerling and Strube, 2018)",
"ref_id": "BIBREF8"
},
{
"start": 158,
"end": 178,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 369,
"end": 389,
"text": "(Kiela et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 411,
"text": "Winata et al., 2019a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform experiments on CoNLL and universal dependency datasets for NER and POS tagging in five languages and show that meta-embeddings are a promising alternative to the concatenation of additional auxiliary embeddings as they learn to decide on the auxiliary languages in an unsupervised way. Moreover, the inclusion of more languages is often beneficial and meta-embeddings can be effectively used to leverage a larger number of embeddings and achieve new state-of-theart performance on all five POS tagging tasks. Finally, we propose guidelines to decide which auxiliary languages one should use in which setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Combination of Embeddings. Previous work has seen performance gains by combining, e.g., various types of word embeddings (Tsuboi, 2014) or variants of the same type of embeddings trained on different corpora (Luo et al., 2014) . For the combination, alternative solutions have been proposed, such as different input channels (Zhang et al., 2016) , concatenation (Yin and Sch\u00fctze, 2016) , averaging of embeddings (Coates and Bollegala, 2018) , and attention (Kiela et al., 2018) . In this paper, we compare the inclusion of auxiliary languages via concatenation to the dynamic combination with attention. : Overview of our model architecture (left). The embedding combination e can be either computed using the concatenation e CON CAT (middle) or the meta embedding method e AT T (right).",
"cite_spans": [
{
"start": 121,
"end": 135,
"text": "(Tsuboi, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 208,
"end": 226,
"text": "(Luo et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 325,
"end": 345,
"text": "(Zhang et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 362,
"end": 385,
"text": "(Yin and Sch\u00fctze, 2016)",
"ref_id": "BIBREF24"
},
{
"start": 412,
"end": 440,
"text": "(Coates and Bollegala, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 457,
"end": 477,
"text": "(Kiela et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "in code-switching settings, i.e., it was shown that Catalan and Portuguese embeddings help for Spanish-English NER. In a later work, it was shown that also more distant languages can be beneficial (Winata et al., 2019b), but only tests in the special setting of code-switching NER were performed and no connection between the relatedness of languages and the performance increase was analyzed. In contrast, our work shows that the inclusion of auxiliary languages increases performance in monolingual settings as well and we analyze whether language distance measures can be used to select the best auxiliary language in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Language Distance Measures. Gamallo et al. (2017) proposed to measure distances between languages by using the perplexity of language models trained on one language and applied to another language. Campos et al. (2019) used a similar method to retrace changes in multilingual diachronic corpora over time. Another popular measure of similarity is based on vocabulary overlap, assuming that similar languages share a large portion of their vocabulary (Brown et al., 2008) .",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "Gamallo et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 198,
"end": 218,
"text": "Campos et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 450,
"end": 470,
"text": "(Brown et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Following Lample et al. (2016) , we use a bidirectional long short-term memory network (BiL-STM) (Hochreiter and Schmidhuber, 1997) followed by a conditional random field (CRF) classifier (Lafferty et al., 2001 ) (see Figure 1a ).",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF13"
},
{
"start": 97,
"end": 131,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 188,
"end": 210,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 218,
"end": 227,
"text": "Figure 1a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sequence Tagging Model",
"sec_num": "3"
},
{
"text": "Each input word is represented with a pretrained word vector. We experiment with byte-pair encoding (BPEmb) (Heinzerling and Strube, 2018) and FLAIR embeddings (Akbik et al., 2018) , as for both of them pretrained embeddings are publicly available for all the languages we consider. 1",
"cite_spans": [
{
"start": 108,
"end": 138,
"text": "(Heinzerling and Strube, 2018)",
"ref_id": "BIBREF8"
},
{
"start": 160,
"end": 180,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "3.1"
},
{
"text": "As we experiment with multiple word embeddings, we compare two combination methods: a simple concatenation e CON CAT and attentionbased meta-embeddings e AT T as shown in Figure 1b and 1c, respectively, and described next.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 177,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combination of Embeddings",
"sec_num": "3.2"
},
{
"text": "In both cases, the input are n embeddings e i , 1 \u2264 i \u2264 n that should be combined. In our experiments, we use embeddings from n different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination of Embeddings",
"sec_num": "3.2"
},
{
"text": "For concatenation, we simply stack the individual embeddings into a single vector: e CON CAT = [e 1 , e 2 , .., e n ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination of Embeddings",
"sec_num": "3.2"
},
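As a minimal illustration of the concatenation baseline described above (PyTorch is assumed; variable names are ours, not from the authors' code):

import torch

# e_1 ... e_n: vectors for the same token from n language-specific embedding models
e = [torch.randn(300), torch.randn(300), torch.randn(4096)]

# e_CONCAT: stack the individual embeddings into a single long vector
e_concat = torch.cat(e, dim=-1)   # dimensionality = sum of the individual sizes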
{
"text": "In the case of meta-embeddings, we follow Kiela et al. (2018) and compute the combination as a weighted sum of embeddings. For this, all n embeddings e i need to be mapped to the same size first. In contrast to previous work, we use a nonlinear mapping as this yielded better performance in our experiments. Thus, we compute x i = tanh(Q i \u2022 e i + b i ) with weight matrix Q i , bias b i and x i \u2208 R E being the i-th embedding e i mapped to the size E of the largest embedding. The attention weight \u03b1 i for each embedding x i is then computed with a fully-connected hidden layer of size H with parameters W \u2208 R H\u00d7E and V \u2208 R 1\u00d7H , followed by a softmax layer. Finally, the embeddings x i are weighted using the attention weights \u03b1 i resulting in the word representation e AT T = i \u03b1 i \u2022 x i",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "Kiela et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combination of Embeddings",
"sec_num": "3.2"
},
{
"text": "\u03b1 i = exp(V \u2022 tanh(W x i )) n l=1 exp(V \u2022 tanh(W x l )) The parameters of the meta-embedding layer (Q 1 , ..., Q n , b 1 , ..., b n , W, V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination of Embeddings",
"sec_num": "3.2"
},
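A minimal PyTorch sketch of the attention-based meta-embedding layer described above (class and variable names are illustrative assumptions, not the authors' implementation):

import torch
import torch.nn as nn

class AttentionMetaEmbedding(nn.Module):
    # Combines n embeddings of different sizes into one vector of size E
    # (the size of the largest input embedding), as described above.
    def __init__(self, input_sizes, hidden_size):
        super().__init__()
        out_size = max(input_sizes)                       # E
        # non-linear mappings x_i = tanh(Q_i * e_i + b_i)
        self.proj = nn.ModuleList([nn.Linear(d, out_size) for d in input_sizes])
        # attention scorer V * tanh(W x_i)
        self.W = nn.Linear(out_size, hidden_size, bias=False)
        self.V = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, embeddings):
        # embeddings: list of n tensors of shape (batch, seq_len, d_i)
        x = torch.stack([torch.tanh(p(e)) for p, e in zip(self.proj, embeddings)], dim=2)
        scores = self.V(torch.tanh(self.W(x)))            # (batch, seq_len, n, 1)
        alpha = torch.softmax(scores, dim=2)              # attention weights over the n embeddings
        return (alpha * x).sum(dim=2)                     # e_ATT = sum_i alpha_i * x_i

For instance, combining a 300-dimensional BPEmb vector with a 4096-dimensional FLAIR vector would use input_sizes=[300, 4096] and yield a 4096-dimensional meta-embedding per token.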
{
"text": "We perform NER and POS experiments on five languages: German (De), English (En), Spanish (Es), Finnish (Fi), and Dutch (Nl). Note that we assume at least a character overlap to use auxiliary embeddings from another language. Thus, languages with a different character set, e.g., Asian languages, cannot be used, in this setting. Future work could investigate the inclusion of languages with different character sets, e.g., by using bilingual dictionaries or machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "For NER, we use the CoNLL 2002/03 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) and the FiNER corpus (Ruokolainen et al., 2019). For POS tagging, we experiment with the universal dependencies treebanks. 2 For each language, we report results for the following methods:",
"cite_spans": [
{
"start": 43,
"end": 65,
"text": "(Tjong Kim Sang, 2002;",
"ref_id": "BIBREF18"
},
{
"start": 66,
"end": 102,
"text": "Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Monolingual (Mono). Only embeddings from the source language were taken for the experiments. This is the baseline setting. We also compare our results to multilingual embeddings (Multi) which have been successfully used in monolingual settings as well (Heinzerling and Strube, 2019) . To ensure comparability, we use the multilingual versions of BPEmb and FLAIR, which were trained simultaneously on 275 and 300 languages, respectively.",
"cite_spans": [
{
"start": 252,
"end": 282,
"text": "(Heinzerling and Strube, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Mono + X. A second set of embeddings from a different language X is concatenated with the original monolingual embeddings. While for this typically embeddings from a related language are chosen, we report results for all language combinations and investigate in particular whether relatedness is necessary for improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Mono + All & Meta-Embeddings. We also experiment with the combination of all embeddings from all languages from our experiments. In this setting, we use all six embeddings (five monolingual embeddings and the multilingual embeddings) and combine them either using concatenation (Mono + All) or meta-embeddings. We have chosen these settings that are mainly based on monolingual embeddings, as the current state-of-the-art for named entity recognition is based on monolingual FLAIR embeddings (Akbik et al., 2019) . In addition, multilingual embeddings, such as multilingual BERT (Devlin et al., 2019) tend to perform worse than their monolingual counterparts 3 in monolingual experiments. For completeness, we include experiments with multilingual embeddings as mentioned before.",
"cite_spans": [
{
"start": 492,
"end": 512,
"text": "(Akbik et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 579,
"end": 600,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Following Reimers and Gurevych (2017), we report all experimental results as the mean of five runs and their standard deviation in Table 1 for experiments with byte-pair encoding embeddings. The results with FLAIR embeddings can be found in the appendix. We performed statistical significance testing to check if the concatenation (Mono + All) and meta-embeddings models are better than the best Mono + X model. We used paired permutation testing with 2 20 permutations and a significance level of 0.05 and performed the Fischer correction following Dror et al. (2017) . 4 For meta-embeddings, we found statistically significant differences in 12 out of 20 settings (6 with BPEmb, 6 with FLAIR) against the best monolingual + X model, while we found statistically significant differences for Mono + All in only 7 out of 20 cases. This suggests that metaembeddings are superior to monolingual settings with one auxiliary language as well as to the concatenation of all embeddings. Further, we found that the combination of monolingual and multilingual byte-pair encoding embeddings is always superior to either monolingual or multilingual embeddings alone for both tasks. Even though the multilingual embeddings have seen many languages during pre-training, they can still benefit from the high performance of monolingual embeddings and vice versa. As the meta-embeddings assign attention weights for each embedding, we can inspect the importance the models give to the different embeddings. An analysis for an example sentence can be found in Section D in the appendix. Table 2 provides the results of BPEmb and FLAIR meta-embeddings in comparison to state of the art, showing that we set the new state of the art for POS tagging.",
"cite_spans": [
{
"start": 550,
"end": 568,
"text": "Dror et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 571,
"end": 572,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1570,
"end": 1577,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
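A sketch of the paired permutation (randomization) test mentioned above; the per-item granularity, p-value smoothing, and NumPy implementation are our assumptions, not the authors' code:

import numpy as np

def paired_permutation_test(scores_a, scores_b, n_permutations=2**20, seed=0):
    # scores_a / scores_b: per-sentence (or per-run) scores of the two systems being compared
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = abs(diff.mean())
    hits = 0
    for _ in range(n_permutations):
        # randomly swap the two systems' scores per item (sign flip of the difference)
        signs = rng.choice([-1.0, 1.0], size=diff.shape)
        if abs((signs * diff).mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_permutations + 1)   # approximate two-sided p-value

# Example: p = paired_permutation_test(f1_meta, f1_mono_x); a result counts as significant if p < 0.05.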
{
"text": "To evaluate how useful language distances are for predicting the best auxiliary language, we compare rankings based on language distances and the observed performance rankings based on Table 1. For this, we take the language distance from Gamallo et al. (2017) , which is based on language modeling perplexity PP of unigram language models LM applied to texts of foreign languages CH. Lower language model perplexities on a foreign dataset indicate higher language relatedness.",
"cite_spans": [
{
"start": 239,
"end": 260,
"text": "Gamallo et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
{
"text": "d P (L1, L2) = PP(CH L2 , LM L1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
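A small sketch of the perplexity-based distance d_P under the unigram formulation above; the add-one smoothing and tokenisation choices are our assumptions:

import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens):
    # PP(CH_L2, LM_L1): perplexity of a unigram model trained on L1 applied to text CH of L2
    counts = Counter(train_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1                        # extra slot for unseen tokens
    log_prob = 0.0
    for token in test_tokens:
        p = (counts[token] + 1) / (total + vocab)  # add-one smoothed probability
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test_tokens))

# d_P(L1, L2) = unigram_perplexity(tokens_l1, tokens_l2); lower values indicate closer languages.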
{
"text": "We also test language similarities based on vocabulary overlap with W (L1|L2) being the number of words of L1 which are shared with L2 and N (L1) the number of words of L1 shared with other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
{
"text": "d V (L1, L2) = W (L1|L2) + W (L2|L1) 2 \u2022 min(N (L1), N (L2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
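A sketch of the vocabulary-overlap similarity d_V; how the normalisation terms N(L) are computed is our reading of the description above, and all names are illustrative:

def vocabulary_similarity(vocab_l1, vocab_l2, all_vocabs):
    # vocab_l1, vocab_l2: sets of word types; all_vocabs: vocabularies of all languages considered
    w_12 = len(vocab_l1 & vocab_l2)   # W(L1|L2): words of L1 shared with L2
    w_21 = len(vocab_l2 & vocab_l1)   # W(L2|L1): words of L2 shared with L1
    # N(L): words of L shared with any other language in the comparison (our interpretation)
    n_1 = len(vocab_l1 & set().union(*(v for v in all_vocabs if v is not vocab_l1)))
    n_2 = len(vocab_l2 & set().union(*(v for v in all_vocabs if v is not vocab_l2)))
    return (w_12 + w_21) / (2 * min(n_1, n_2))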
{
"text": "For our experiments, we further adapt d P to use the perplexity of the FLAIR forward language models on the test data provided by Gamallo et al. (2017) and call it d * P . Similarly, we adapt d * V to compute the overlap of words in our training data. Note that both variants, d * P and d * V , are based on properties from either our model or training data and are, therefore, specific to our setting. Finally, we create a ranking d M V which combines the rankings from",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "Gamallo et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
{
"text": "d P , d * P , d V , d * V with majority voting. The ranking of d M V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
{
"text": "is provided in Table 3, the rankings of the individual distance measures are given in the appendix. To analyze the correlation between language distance measures and the performance of our model, we compute the Spearman's rank correlation coefficient between the real rankings based on performance and predicted rankings from our language distances. The results are shown in Figure 2 . We conclude that predicting the auxiliary language ranking is a hard task and see that the most related language is not always the best auxiliary language in practice (cf., Table 1 ). This holds in particular for POS tagging, where the performance differences of models are quite small.",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 383,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 559,
"end": 566,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
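For reference, the rank correlation can be computed with SciPy; the toy ranks below are illustrative, not values from the paper:

from scipy.stats import spearmanr

# rank of each auxiliary language predicted by a distance measure vs. its rank by observed score (1 = best)
predicted_rank = [1, 2, 3, 4]
performance_rank = [2, 1, 3, 4]
rho, p_value = spearmanr(predicted_rank, performance_rank)
print(f"Spearman's rho = {rho:.2f}")   # 0.80 for this toy example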
{
"text": "In general, d * P shows a higher correlation with model performance than d P and d V , indicating that not only word overlap plays a role but also context information. The majority voting d M V achieves the highest correlation and often predicts the best auxiliary language for NER models using byte-pair encoding. However, the actual ranking of all languages does not match the performance ranking, which results in a relatively low correlation with only a little above 0.5. Finally, we propose a small guide in Figure 3 to decide which auxiliary languages one can use to improve performance over monolingual embeddings. Depending on the available amount of data, it is recommended to train multiple monolingual embeddings and combine them using metaembeddings, which was observed to be the best method in our experiments. If not enough data is available to train monolingual embeddings, the best solution would be the inclusion of multilingual embeddings, assuming the existence of highquality embeddings, such as multilingual byte-pair encoding. If none of the above applies, language distance measures, in particular the combination of multiple distances, can help to identify the most promising auxiliary embeddings. Despite not always predicting the best model, the predicted auxiliary language often led to improvements over the monolingual baseline in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 521,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Analysis of Language Distances",
"sec_num": "4.2"
},
{
"text": "In this paper, we investigated the benefits of auxiliary languages for sequence tagging. We showed that it is hard to predict the best auxiliary language based on language distances. We further showed that meta-embeddings can leverage multiple embeddings effectively for those tasks and set the new state of the art on part-of-speech tagging across different languages. Finally, we proposed a guide on how to decide which method of including auxiliary languages one should use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We use the Byte-Pair-Encoding embeddings (Heinzerling and Strube, 2018) with 300 dimensions and a vocabulary size of 200k tokens for all languages. For FLAIR, we use the embeddings provided by the FLAIR framework (Akbik et al., 2018) with 2048 dimensions for each language model resulting in a total embedding size of 4096 dimensions. The bidirectional LSTM has a hidden size of 256 units. For training, we use stochastic gradient descent with a learning rate of 0.1 and a batch size of 64 sentences. The learning rate is halved after 3 consecutive epochs without improvement on the development set. We apply dropout with probability 0.1 after the input layer.",
"cite_spans": [
{
"start": 41,
"end": 71,
"text": "(Heinzerling and Strube, 2018)",
"ref_id": "BIBREF8"
},
{
"start": 213,
"end": 233,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Hyperparameters and Training",
"sec_num": null
},
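A sketch of the training configuration with the hyperparameters listed above (PyTorch assumed; the tagger itself is only stubbed by its BiLSTM layer here, and the embedding size corresponds to one language's BPEmb plus FLAIR vectors):

import torch
import torch.nn as nn

embedding_dim = 300 + 4096        # BPEmb (300) + forward/backward FLAIR (2 x 2048)
bilstm = nn.LSTM(embedding_dim, 256, bidirectional=True, batch_first=True)
dropout = nn.Dropout(p=0.1)       # applied after the input layer
batch_size = 64                   # sentences per batch

optimizer = torch.optim.SGD(bilstm.parameters(), lr=0.1)
# halve the learning rate after 3 consecutive epochs without dev-set improvement
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=3)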
{
"text": "We report the language rankings of the single metrics Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Language Distances",
"sec_num": null
},
{
"text": "C Results on NER and POS tagging with FLAIR embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Language Distances",
"sec_num": null
},
{
"text": "We performed the same experiments as in Section 4.1 with FLAIR embeddings as well and report the results in Table 5 for NER and for POS tagging. In difference to the BPE experiments reported in the paper, we do not include multilingual embeddings in the Mono + All and meta-embedding versions of FLAIR. The reason is prior experiments in which multilingual embeddings led to reduced performance. This is also reflected in the poor performance of the multilingual FLAIR embeddings alone. It seems that the multilingual BPE embeddings are more effective in downstream tasks than the multilingual FLAIR embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Language Distances",
"sec_num": null
},
{
"text": "As the meta-embeddings assign attention weights for each embedding, we can inspect the importance the models give to the different embeddings. Figure 4 shows the assigned weights for an English sentence. In general, the model assigns most weight to the English embeddings. However, we observe an increased weight for German and the multilingual embedding for the German word Bayerische. Even though Vereinsbank is also a German word, the model focuses more on English for this word, probably because the subword bank has the same meaning in English.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Analysis of Attention Weights",
"sec_num": null
},
{
"text": "To investigate whether the performance increase comes from the increased number of parameters rather than the inclusion of more embeddings, we also investigated including the same embedding type twice (Mono + Mono). However, we found",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Study: Increased Number of Parameters vs. Auxiliary Language",
"sec_num": null
},
{
"text": "https://github.com/flairNLP/flair https://nlp.h-its.org/bpemb/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We predict the UPOS tag from the following UD v2.0 treebanks: de gsd, en ewt, es gsd, fi tdt, nl alpino.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google-research/ bert/blob/master/multilingual.md4 We take the model with median performance on the development set for significance testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Dietrich Klakow, Marius Mosbach, Michael A. Hedderich, the members of the BCAI NLP&KRR research group and the anonymous reviewers for their helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "de en es fi nl dP d *nl nl en nl nl nl de fi en nl en en de nl en en de de en en 2 en en nl en es fi nl nl nl en de nl nl de de nl en en de de 3 fi fi es * fi de de fi es fi de fi fi en en es * de fi fi es * es 4 es es fi * es fi es es de de fi nl de es es nl * es es es fi * fi that this does not help: The performance is comparable to the monolingual baseline. Thus, the performance increase for Mono + X really comes from additional information provided by the embeddings of the auxiliary language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Pooled contextualized embeddings for named entity recognition",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "724--728",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1078"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 724-728, Minneapolis, Minnesota. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automated classification of the world s languages: a description of the method and preliminary results",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Cecil H Brown",
"suffix": ""
},
{
"first": "S\u00f8ren",
"middle": [],
"last": "Holman",
"suffix": ""
},
{
"first": "Viveka",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Velupillai",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "61",
"issue": "",
"pages": "285--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecil H Brown, Eric W Holman, S\u00f8ren Wichmann, and Viveka Velupillai. 2008. Automated classification of the world s languages: a description of the method and preliminary results. STUF-Language Typology and Universals Sprachtypologie und Universalien- forschung, 61(4):285-308.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Measuring diachronic language distance using perplexity: Application to english, portuguese, and spanish",
"authors": [
{
"first": "Jos Ramom Pichel",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Pablo",
"middle": [
"Gamallo"
],
"last": "Otero",
"suffix": ""
},
{
"first": "Iaki Alegria",
"middle": [],
"last": "Loinaz",
"suffix": ""
}
],
"year": 2019,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/S1351324919000378"
]
},
"num": null,
"urls": [],
"raw_text": "Jos Ramom Pichel Campos, Pablo Gamallo Otero, and Iaki Alegria Loinaz. 2019. Measuring diachronic language distance using perplexity: Application to english, portuguese, and spanish. Natural Language Engineering, page 122.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Frustratingly easy meta-embedding -computing metaembeddings by averaging source word embeddings",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "194--198",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2031"
]
},
"num": null,
"urls": [],
"raw_text": "Joshua Coates and Danushka Bollegala. 2018. Frus- tratingly easy meta-embedding -computing meta- embeddings by averaging source word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 2 (Short Papers), pages 194-198, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Replicability analysis for natural language processing: Testing significance with multiple datasets",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Bogomolov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "471--486",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00074"
]
},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: Testing significance with mul- tiple datasets. Transactions of the Association for Computational Linguistics, 5:471-486.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "From language identification to language distance",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Gamallo",
"suffix": ""
},
{
"first": "I\u00f1aki",
"middle": [],
"last": "Jos\u00e9 Ramom Pichel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alegria",
"suffix": ""
}
],
"year": 2017,
"venue": "Physica A: Statistical Mechanics and its Applications",
"volume": "484",
"issue": "",
"pages": "152--162",
"other_ids": {
"DOI": [
"10.1016/j.physa.2017.05.011"
]
},
"num": null,
"urls": [],
"raw_text": "Pablo Gamallo, Jos\u00e9 Ramom Pichel, and I\u00f1aki Alegria. 2017. From language identification to language dis- tance. Physica A: Statistical Mechanics and its Ap- plications, 484:152-162.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Heinzerling",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword em- beddings in 275 languages. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sequence tagging with contextual and non-contextual subword representations: A multilingual evaluation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Heinzerling",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "273--291",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1027"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Heinzerling and Michael Strube. 2019. Se- quence tagging with contextual and non-contextual subword representations: A multilingual evaluation. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 273- 291, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computing",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computing, 9(8):1735- 1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dynamic meta-embeddings for improved sentence representations",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1466--1477",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Changhan Wang, and Kyunghyun Cho. 2018. Dynamic meta-embeddings for improved sen- tence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1466-1477, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pre-trained multi-view word embedding using two-side neural network",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI'14",
"volume": "",
"issue": "",
"pages": "1982--1988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Luo, Jian Tang, Jun Yan, Chao Xu, and Zheng Chen. 2014. Pre-trained multi-view word embed- ding using two-side neural network. In Proceed- ings of the Twenty-Eighth AAAI Conference on Ar- tificial Intelligence, AAAI'14, pages 1982-1988. AAAI Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "338--348",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A finnish news corpus for named entity recognition. Language Resources and Evaluation",
"authors": [
{
"first": "Pekka",
"middle": [],
"last": "Teemu Ruokolainen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kauppinen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--26",
"other_ids": {
"DOI": [
"10.1007/s10579-019-09471-7'"
]
},
"num": null,
"urls": [],
"raw_text": "Teemu Ruokolainen, Pekka Kauppinen, Miikka Sil- fverberg, and Krister Lind\u00e9n. 2019. A finnish news corpus for named entity recognition. Language Re- sources and Evaluation, pages 1-26.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural Architectures for Nested NER through Linearization",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5326--5331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Haji\u010d. 2019. Neural Architectures for Nested NER through Lin- earization. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 5326-5331, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "",
"suffix": ""
},
{
"first": "Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2002,
"venue": "COLING-02: The 6th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural networks leverage corpuswide information for part-of-speech tagging",
"authors": [
{
"first": "Yuta",
"middle": [],
"last": "Tsuboi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "938--950",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Yuta Tsuboi. 2014. Neural networks leverage corpus- wide information for part-of-speech tagging. In Pro- ceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 938-950, Doha, Qatar. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning multilingual meta-embeddings for code-switching named entity recognition",
"authors": [
{
"first": "Zhaojiang",
"middle": [],
"last": "Genta Indra Winata",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "181--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2019a. Learning multilingual meta-embeddings for code-switching named entity recognition. In Pro- ceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181- 186, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hierarchical metaembeddings for code-switching named entity recognition",
"authors": [
{
"first": "Zhaojiang",
"middle": [],
"last": "Genta Indra Winata",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3541--3547",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1360"
]
},
"num": null,
"urls": [],
"raw_text": "Genta Indra Winata, Zhaojiang Lin, Jamin Shin, Zihan Liu, and Pascale Fung. 2019b. Hierarchical meta- embeddings for code-switching named entity recog- nition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3541-3547, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Robust multilingual part-of-speech tagging via adversarial training",
"authors": [
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "976--986",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1089"
]
},
"num": null,
"urls": [],
"raw_text": "Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 976-986, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning word meta-embeddings",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1351--1360",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1128"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2016. Learning word meta-embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 1351-1360, Berlin, Germany. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "MGNC-CNN: A simple approach to exploiting multiple word embeddings for sentence classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1522--1527",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1178"
]
},
"num": null,
"urls": [],
"raw_text": "Ye Zhang, Stephen Roller, and Byron C. Wallace. 2016. MGNC-CNN: A simple approach to exploiting mul- tiple word embeddings for sentence classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1522-1527, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Auxiliary Languages. Winata et al. (2019a) proposed to include embeddings from closelyrelated languages to improve NER performance (a) BiLSTM-CRF. (b) Concatenation. (c) Meta Embedding."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 1: Overview of our model architecture (left). The embedding combination e can be either computed using the concatenation e CON CAT (middle) or the meta embedding method e AT T (right)."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Spearman's rank correlation between language distance and model performance rankings for NER and POS tasks for different language distances."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Proposal for auxiliary embedding selection."
},
"TABREF0": {
"text": ") are randomly initialized and learnt during training. \u00b1 .49 86.78 \u00b1 .15 78.99 \u00b1 .91 78.00 \u00b1 .87 78.91 \u00b1 .42 Multilingual 75.37 \u00b1 .87 86.52 \u00b1 .34 78.33 \u00b1 .47 77.41 \u00b1 .86 77.49 \u00b1 .45 Mono + Multi 81.13 \u00b1 .46 88.01 \u00b1 .27 80.32 \u00b1 .50 81.44 \u00b1 .36 81.15 \u00b1 .43 Mono + DE -87.46 \u00b1 .19 79.79 \u00b1 .74 80.31 \u00b1 .21 81.31 \u00b1 .15 Mono + EN 80.92 \u00b1 .29 -80.48 \u00b1 .56 81.22 \u00b1 .26 80.84 \u00b1 .30 Mono + ES 80.29 \u00b1 .20 87.37 \u00b1 .30 -80.80 \u00b1 .83 80.62 \u00b1 .39 \u00b1 .33 \u2020 81.73 \u00b1 .26 \u2020 Meta-Embeddings 81.75 \u00b1 .50 \u2020 87.87 \u00b1 .23 80.84 \u00b1 .52 83.12 \u00b1 .12 \u2020 82.13 \u00b1 .50 \u2020 \u00b1 .14 \u2020 95.40 \u00b1 .04 \u2020 96.46 \u00b1 .09 95.61 \u00b1 .08 \u2020 95.31 \u00b1 .08 Meta-Embeddings 93.51 \u00b1 .08 95.36 \u00b1 .10 \u2020 96.48 \u00b1 .06 95.61 \u00b1 .11 \u2020 95.34 \u00b1 .14 \u2020",
"content": "<table><tr><td>NER</td><td>De</td><td>En</td><td>Es</td><td/><td>Fi</td><td>Nl</td></tr><tr><td colspan=\"2\">Monolingual 79.78 Mono + FI 81.10 \u00b1 .36</td><td>87.94 \u00b1 .17</td><td>79.91 \u00b1 .82</td><td/><td>-</td><td>80.65 \u00b1 .48</td></tr><tr><td>Mono + NL</td><td>81.25 \u00b1 .14</td><td>87.38 \u00b1 .22</td><td>80.93 \u00b1 .25</td><td colspan=\"2\">80.67 \u00b1 .49</td><td>-</td></tr><tr><td colspan=\"5\">Mono + All 82.07 POS 81.52 \u00b1 .33 87.70 \u00b1 .06 80.63 \u00b1 .34 De En Es</td><td>Fi</td><td>Nl</td></tr><tr><td>Monolingual</td><td>93.02 \u00b1 .11</td><td>94.17 \u00b1 .09</td><td>96.23 \u00b1 .04</td><td colspan=\"2\">92.84 \u00b1 .13</td><td>94.01 \u00b1 .21</td></tr><tr><td>Multilingual</td><td>92.19 \u00b1 .20</td><td>94.10 \u00b1 .06</td><td>96.01 \u00b1 .07</td><td colspan=\"2\">91.95 \u00b1 .11</td><td>93.35 \u00b1 .22</td></tr><tr><td>Mono + Multi</td><td>93.40 \u00b1 .08</td><td>95.11 \u00b1 .07</td><td>96.54 \u00b1 .03</td><td colspan=\"2\">94.70 \u00b1 .12</td><td>94.94 \u00b1 .13</td></tr><tr><td>Mono + DE</td><td>-</td><td>95.11 \u00b1 .09</td><td>96.43 \u00b1 .13</td><td colspan=\"2\">94.43 \u00b1 .18</td><td>94.70 \u00b1 .09</td></tr><tr><td>Mono + EN</td><td>93.26 \u00b1 .11</td><td>-</td><td>96.52 \u00b1 .06</td><td colspan=\"2\">94.45 \u00b1 .14</td><td>94.80 \u00b1 .12</td></tr><tr><td>Mono + ES</td><td>93.31 \u00b1 .13</td><td>95.03 \u00b1 .09</td><td>-</td><td colspan=\"2\">94.48 \u00b1 .14</td><td>94.79 \u00b1 .17</td></tr><tr><td>Mono + FI</td><td>93.41 \u00b1 .12</td><td>94.97 \u00b1 .04</td><td>96.34 \u00b1 .08</td><td/><td>-</td><td>94.92 \u00b1 .13</td></tr><tr><td>Mono + NL</td><td>93.52 \u00b1 .10</td><td>94.99 \u00b1 .08</td><td>96.41 \u00b1 .07</td><td colspan=\"2\">94.42 \u00b1 .08</td><td>-</td></tr><tr><td>Mono + All</td><td>93.60</td><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"text": "Results of NER (F 1 , above) and POS (Accuracy, below) experiments with BPE embeddings. \u2020 marks models that are statistically significant to the best Mono + X model. box highlights the closest auxiliary language according to language distance measure d M V , and box the best auxiliary language according to performance.",
"content": "<table><tr><td/><td>De En Es</td><td>Fi</td><td>Nl</td></tr><tr><td>NER</td><td colspan=\"3\">Strakov\u00e1 et al. (2019) 85.1 93.3 88.8 -Meta-Emb. (BPEmb) 81.8 87.9 80.8 83.1 82.1 92.7 Meta-Emb. (FLAIR) 83.9 90.7 86.2 85.1 86.6</td></tr><tr><td>POS</td><td colspan=\"3\">Yasunaga et al. (2018) 94.4 95.8 96.8 95.4 93.1 Meta-Emb. (BPEmb) 93.5 95.4 96.5 95.6 95.3 Meta-Emb. (FLAIR) 94.8 96.5 97.2 97.8 96.8</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "Language ranking according to the majority voting distance d M V .",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}