|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:14:18.042918Z" |
|
}, |
|
"title": "Discriminating Between Similar Nordic Languages", |
|
"authors": [ |
|
{ |
|
"first": "Ren\u00e9", |
|
"middle": [], |
|
"last": "Haas", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Automatic language identification is a challenging problem. Discriminating between closely related languages is especially difficult. This paper presents a machine learning approach for automatic language identification for the Nordic languages, which often suffer miscategorisation by existing state-of-the-art tools. Concretely we will focus on discrimination between six Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokm\u00e5l), Faroese and Icelandic.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Automatic language identification is a challenging problem. Discriminating between closely related languages is especially difficult. This paper presents a machine learning approach for automatic language identification for the Nordic languages, which often suffer miscategorisation by existing state-of-the-art tools. Concretely we will focus on discrimination between six Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokm\u00e5l), Faroese and Icelandic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatic language identification is a core problem in NLP but remains a difficult task (Caswell et al., 2020) , especially across domains (Lui and Baldwin, 2012; Derczynski et al., 2013) . Discriminating between closely related languages is often a particularly difficult subtask of this problem (Zampieri et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 110, |
|
"text": "(Caswell et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 162, |
|
"text": "(Lui and Baldwin, 2012;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 187, |
|
"text": "Derczynski et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 320, |
|
"text": "(Zampieri et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Language technology for Scandinavian languages is in a nascent phase (e.g. Kirkedal et al. (2019) ). One problem is acquiring enough text with which to train e.g. large language models. Good quality language ID is critical to this data sourcing, though leading models often confuse similar Nordic languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 97, |
|
"text": "Kirkedal et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper presents data and baselines for automatic language identification between six closelyrelated Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokm\u00e5l), Faroese and Icelandic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Further, we investigate feature extraction methods for Nordic language identification and evaluates the performance of a selection of baseline models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Finally, we test the models on a data set from a different domain in order to investigate how well the models generalize in distinguishing sim-ilar Nordic languages when classifying sentences across domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The problem of discriminating between similar languages has been investigated in recent work (Goutte et al., 2016; Zampieri et al., 2015) which discuss the results from two editions of the \"Discriminating between Similar Languages (DSL) shared task\". Over the two editions of the DSL shared task different teams competed to develop the best machine learning algorithms to discriminate between the languages in a corpus consisting of 20K sentences in each of the languages: Bosnian, Croatian, Serbian, Indonesian, Malaysian, Czech, Slovak, Brazil Portuguese, European Portuguese, Argentine Spanish, Peninsular Spanish, Bulgarian and Macedonian.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 114, |
|
"text": "(Goutte et al., 2016;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 137, |
|
"text": "Zampieri et al., 2015)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Similar work has included (Toftrup et al., 2021) , who include Nordic languages in a larger exercise in reproducing a commercial language ID system; and (Rangel et al., 2018) , who attempt native language extraction, a task complex in the Nordic context which is rich in cognates and shared etymologies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 48, |
|
"text": "(Toftrup et al., 2021)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 174, |
|
"text": "(Rangel et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, no prior work has focused specifically on the group of Nordic languages, leaving users of those languages without high quality automatically-extracted single language corpora (Derczynski et al., 2020) . This is particularly disadvantageous for some Nordic language pairs, such as Danish/Norwegian and Faroese/Icelandic, where general-purpose many-language systems fall down (Toftrup et al., 2021) . Thus, we focus specifically on data for this language and baseline methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 209, |
|
"text": "(Derczynski et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 405, |
|
"text": "(Toftrup et al., 2021)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This section describes the construction of the Nordic DSL (Distinguishing Similar Lanugages) data set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Nordic DSL data set", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Data was scraped from Wikipedia. We downloaded summaries for randomly chosen Wikipedia articles in each of the languages, saved as raw text to six .txt files of about 10MB each. While Bornholmsk would be a welcome addition (Derczynski and Kjeldsen, 2019) , exhibiting some similarity to Faroese and Danish, there is not yet enough digital text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 254, |
|
"text": "(Derczynski and Kjeldsen, 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Nordic DSL data set", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "After the initial cleaning (described in the next section) the data set contained just over 50K sentences in each of the language categories. From this, two data sets with exactly 10K and 50K sentences respectively were drawn from the raw data set. In this way the data sets are stratified, containing the same number of sentences for each language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Nordic DSL data set", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We split these data sets, reserving 80% for the training set and 20% for the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Nordic DSL data set", |
|
"sec_num": "3" |
|
}, |
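The 80/20 split can be sketched in a few lines (a minimal pure-Python illustration; the seed and the placeholder sentences are hypothetical, not the authors' code):

```python
import random

def train_test_split(sentences, test_fraction=0.2, seed=42):
    """Shuffle and split one language's sentences into train/test sets."""
    rng = random.Random(seed)
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Applying the split independently per language keeps the data set stratified.
sentences = [f"sentence {i}" for i in range(10000)]
train, test = train_test_split(sentences)
print(len(train), len(test))  # 8000 2000
```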
|
{ |
|
"text": "This section describes how the data set is initially cleaned and how sentences are extracted from the raw data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Extracting Sentences The first pass in sentence tokenisation is splitting by line breaks. We then extract shorter sentences with the sentence tokenizer (sent_tokenize) function from NLTK (Loper and Bird, 2002) . This does a better job than just splitting by '.' due to the fact that abbreviations, which can appear in a legitimate sentence, typically include a period symbol.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 209, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
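The motivation for a proper tokenizer can be seen in a minimal sketch (plain Python, not NLTK; the Danish sentence with the abbreviation 'bl.a.' is a made-up example):

```python
text = "Han arbejdede bl.a. i Odense. Senere flyttede han til Aarhus."

# Naively splitting on '.' fragments the abbreviation "bl.a."
naive = [part.strip() for part in text.split(".") if part.strip()]
print(len(naive))  # 4 fragments instead of the expected 2 sentences
```

NLTK's sent_tokenize avoids this by using a pre-trained Punkt model that recognises abbreviation periods.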
|
{ |
|
"text": "Cleaning characters The initial data set has many characters that do not belong to the alphabets of the languages we work with. Often the Wikipedia pages for people or places contain names in foreign languages. For example a summary might contain Chinese or Russian characters which are not strong signals for the purpose of discriminating between the target languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Further, it can be that some characters in the target languages are mis-encoded. These misencodings are also not likely to be intrinsically strong or stable signals.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To simplify feature extraction, and to reduce the size of the vocabulary, the raw data is converted to lowercase and stripped of all characters which are not part of the standard alphabet of the six languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this way we only accept the characters:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "'abcdefghijklmnopqr stuvwxyz\u00e1\u00e4\u00e5ae\u00e9\u00ed\u00f0\u00f3\u00f6\u00f8\u00fa\u00fd\u00fe '", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and replace everything else with white space before continuing to extract the features. For example the raw sentence 'Hesbjerg er dannet ved sammenlaegning af de 2 g\u00e5rde Store Hesbjerg og Lille Hesbjerg i 1822.' will be reduced to 'hesbjerg er dannet ved sammenlaegning af de g\u00e5rde store hesbjerg og lille hesbjerg i ',", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
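The cleaning step described above can be sketched as follows (a minimal reimplementation under the stated assumptions; the whitespace normalisation at the end is inferred from the example rather than stated explicitly in the text):

```python
ALPHABET = set("abcdefghijklmnopqrstuvwxyzáäåæéíðóöøúýþ ")

def clean(sentence):
    """Lowercase, replace out-of-alphabet characters with spaces,
    and collapse runs of whitespace."""
    filtered = "".join(ch if ch in ALPHABET else " " for ch in sentence.lower())
    return " ".join(filtered.split())

raw = ("Hesbjerg er dannet ved sammenlægning af de 2 gårde "
       "Store Hesbjerg og Lille Hesbjerg i 1822.")
print(clean(raw))
# hesbjerg er dannet ved sammenlægning af de gårde store hesbjerg og lille hesbjerg i
```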
|
{ |
|
"text": "We thus make the assumption that capitalisation, numbers and characters outside this character set do not contribute much information relevant for language classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Feature encoding After the initial cleaning of the data we consider two methods for feature encoding: 1) Character level n-grams and 2) Skipgram and CBOW encodings created by training an unsupervised fasttext model on the dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
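Character-level n-gram extraction, the first encoding, can be sketched like this (an illustrative counting scheme; the paper does not specify its exact vectoriser):

```python
from collections import Counter

def char_ngrams(sentence, n=2):
    """Count overlapping character n-grams in a cleaned sentence."""
    return Counter(sentence[i:i + n] for i in range(len(sentence) - n + 1))

bigrams = char_ngrams("hesbjerg er dannet")
print(bigrams["er"])  # 2: once inside 'hesbjerg', once as the word 'er'
```

Counts like these, taken over the whole inventory of observed bi-grams, form the sparse feature vectors consumed by the linear baseline models.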
|
{ |
|
"text": "The Skipgram and CBOW methods encode sentences into fixed-length vectors by considering world level n-grams which are augmented with character level n-gram information .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the experiments presented in this paper, the CBOW and Skipgram encodings have the following settings: We use individual words (uni-grams) augmented with character level n-grams of size 2-5 with a context window of 5. The encoding result in fixed-length vectors in R 100 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Cleaning and encoding.", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We compare with an off-the-shelf language identification system, langid.py (Lui and Baldwin, 2012) . langid.py comes with a pretrained model which covers 97 languages. The data for langid.py comes from five different domains: government documents, software documentation, newswire, online encyclopedia and an internet crawl. Features are selected for cross-domain stability using the LD heuristic (Lui and Baldwin, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 98, |
|
"text": "(Lui and Baldwin, 2012)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 420, |
|
"text": "(Lui and Baldwin, 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines 4.1 langid.py", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluated how well langid.py performed on the Nordic DSL data set. It is a peculiar feature of the Norwegian language that there exist two different written languages but three different language codes. Since langid.py also returned the language id \"no\" (Norwegian) on some of the data points we restrict langid.py to only be able to return either \"nn\" (Nynorsk) or \"nb\" (Bokm\u00e5l) as predictions. Figure 1 shows the confusion matrix for the langid.py classifier which achieved an accuracy of 78.3% on the data set. The largest confusions were between Danish and Bokm\u00e5l, and between Faroese and Icelandic. langid.py was able to correctly classify most of the Danish instances; however, approximately a quarter of the instance in Bokm\u00e5l were incorrectly classified as Danish and just under an eighth was misclassified as Nynorsk.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 399, |
|
"end": 407, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines 4.1 langid.py", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Furthermore, langid.py correctly classified most of the Icelandic data points; however, over half of the data points in Faroese were incorrectly classified as Icelandic. Table 1 shows results for running the models on a data set with 10K sentences in each language category. Models tend to perform better if we use character bi-grams instead of single characters. Logistic regression and SVM outperform Naive Bayes and K-nearest neighbors in all cases. Furthermore, for all models, we get the best performance if we use the skipgram model from FastText.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 177, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines 4.1 langid.py", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Comparing the CBOW mode from FastText with character bi-grams, the CBOW model is on par with bi-grams for the KNN and Naive Bayes classifiers, while bi-grams outperform CBOW for Logistic Regression and support vector machines. 5 Our Approach", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline with linear models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The methods described above are quite simple. We also compared the above method with FastText, which is a library for creating word embeddings developed by Facebook . explain how FastText extracts feature vectors from raw text data. Fast-Text makes word embeddings using one of two model architectures: continuous bag of words (CBOW) or the continuous skipgram model. The skipgram and CBOW models are first proposed in (Mikolov et al., 2013) which is the paper introducing the word2vec model for word embeddings. FastText builds upon this work by proposing an extension to the skipgram model which takes into account sub-word information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 441, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using FastText", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Both models use a neural network to learn word embedding from using a context windows consisting of the words surrounding the current target word. The CBOW architecture predicts the current word based on the context, and the skipgram predicts surrounding words given the current word (Mikolov et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 306, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using FastText", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "While every layer in a classic multilayer perceptron is densely connected, such that each of the nodes in a layer are connected to all nodes in the next layer, in a convolutional neural network we use one or more convolutional layers. Convolutional Neural Networks have an established use for text classification (Jacovi et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 334, |
|
"text": "(Jacovi et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using A Convolutional Neural Network", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Our CNN is implemented with keras. We use an embedding layer followed by two blocks with a 1D convolutional layer and dropout. The first block has 128 filters while the second has 64. Both convolutional blocks use a kernel size of 5 and stride of 1. The two blocks are followed by a dense layer with 32 hidden nodes before the output layer which has 6 nodes. We use ReLu activation in the convolutional and fully connected layers and SoftMax in the output layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using A Convolutional Neural Network", |
|
"sec_num": "5.2" |
|
}, |
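A sketch of the described architecture in Keras (the vocabulary size, sequence length, embedding width, dropout rates, and the global max-pooling step before the dense layer are all assumptions, since the text does not specify them):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 1000  # assumed number of distinct character bi-grams
MAX_LEN = 100      # assumed bi-grams per sentence

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 16),
    layers.Conv1D(128, 5, strides=1, activation="relu"),  # first block
    layers.Dropout(0.5),
    layers.Conv1D(64, 5, strides=1, activation="relu"),   # second block
    layers.Dropout(0.5),
    layers.GlobalMaxPooling1D(),                          # assumed pooling
    layers.Dense(32, activation="relu"),
    layers.Dense(6, activation="softmax"),                # one node per language
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
out = model(np.zeros((2, MAX_LEN), dtype="int32"))
print(out.shape)  # (2, 6)
```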
|
{ |
|
"text": "Results for the neural network architectures are in Table 2 . Here we compare the result of doing character level uni-and bi-grams using Multilayer Perceptron and Convolutional neural networks. The CNN performs the best, achieving an accuracy of 95.6% when using character bi-grams. Both models perform better using bi-grams than individual characters as features while the relative increase in performance is greater for the MLP model. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 59, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results with neural networks", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Often the performance of supervised classification models increases with more training data. To measure this effect we increase the amount of training data to 50K sentences in each of the language categories. Due to longer training times only the baseline models were included, with the skipgram encoding from FastText which we saw achieved the highest accuracy. Table 3 shows that the performance of the logistic regression model and the K-nearest-neighbors algorithm improved slightly by including more data. Unexpectedly, performance of the support vector machine and Na\u00efve Bayes dropped slightly with extra data. Even when including five times the amount of data, the best result, logistic regression with an accuracy of 93.3%, is still worse than for the Convolutional Neural Network trained on 10K data points in each language. Table 4 shows results for running the neural networks on the larger data set. Both models improve by increasing the amount of data and the Convolutional Neural Network reached an accuracy of 97% which is the best so far. Figure 2 shows performance of the CNN trained on the Wikipedia data set with 50K data points per language. The model achieved an accuracy of 97% on the data set. The largest classification errors are between Danish, Bokm\u00e5l and Nynorsk as well as between Icelandic and Faroese.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 370, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 834, |
|
"end": 841, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1055, |
|
"end": 1063, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Increasing the size of the data set", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "FastText can also be used for supervised classification. In the authors show that FastText can obtain performance on par with meth- ods inspired by deep learning, while being much faster on a selection of different tasks, e.g. tag prediction and sentiment analysis. We apply FastText classification to the Nordic DSL task. The confusion matrix from running the FastText supervised classifier can be seen in Figure 3 . The supervised FastText model achieved an accuracy of 97.1% and thus the performance is similar to that of the CNN.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 415, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using FastText supervised", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Training on single-domain data can lead to classifiers that only work well on a single domain. To see how the two best performing models generalize, we tested on a non-Wikipedia data set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "For this, we used Tatoeba, 1 a large database of user-provided sentences and translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "The language style used in the Tatoeba data set is different from the language used in Wikipedia. Performance drops when shifting to Tatoeba conversations. For reference the accuracy of langid.py on this data set is 80.9% so FastText actually performs worse than the baseline with an accuracy of 75.5% while the CNN is better than the baseline with an accuracy of 83.8%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "One explanation for the drop in performance is that the sentences in the Tatoeba data are significantly shorter than the sentences in the Wikipedia data set as seen in Figure 4b . Both models tend to mis-classify shorter sentences more often than longer sentences. This and the fact that the text genre is different might explain why the models trained on the Wikipedia data set does not generalise to the Tatoeba data set without a drop on performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 177, |
|
"text": "Figure 4b", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "The CNN uses character bi-grams as features while, with the standard settings, FastText uses only individual words to train. The better performance of the CNN might indicate that character level n-grams are more useful features for language identification than words alone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "To test this we changed the setting of FastText to train using only character level n-grams in the range 1-5 instead of individual words. Figure 5 shows the confusion matrix for this version of the FastText model. This version still achieved 97.8% on the Wikipedia test set while improving the accuracy on the Tatoeba data set from 75.4% to 85.8% which is a substantial increase.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 146, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Thus, using character-level features seems to improve the FastText models' ability to generalize to sentences belonging to a domain different from the one they have been trained on, supporting findings in prior work (Lui and Baldwin, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 239, |
|
"text": "(Lui and Baldwin, 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain evaluation", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "To improve the accuracy over the Tatoeba data set, we retrained the FastText model on a combined data set consisting of data points from both Wikipedia and Tatoeba data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Retraining on the combined data set", |
|
"sec_num": "6.5" |
|
}, |
|
{ |
|
"text": "The FastText model achieved an accuracy of 97.2% on this combined data set and an accuracy of 93.2% when evaluating this model on the Tatoeba test set alone -the confusion matrix is Figure 6 . As was the case with the Wikipedia data set the mis-classified sentences tend to be shorter than the average sentence in the data set. Figure 7 shows the distribution of sentence lengths for the Tatoeba test set along with the mis-classified sentences. In the Tatoeba test set the mean length of sentences is 37.66 characters with a standard deviation of 17.91 while the mean length is only 29.70 characters for the mis-classified sentences with a standard deviation of 9.65. This again indicates that shorter sentences are harder to classify.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 190, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF7" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 336, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Retraining on the combined data set", |
|
"sec_num": "6.5" |
|
}, |
|
{ |
|
"text": "To gain additional insight on how the different word embedding capture important information about each of the language classes, we visualized the embeddings using two different techniques for dimensionality reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We used two different methods: Principal Component Analysis (PCA) and T-distributed Stochastic Neighbor Embedding (t-SNE). We begin with a brief explanation of the two techniques and proceed with an analysis of the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Principal Component Analysis The first step is to calculate the covariance matrix of the data set, with components:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "K X i ,X j = E[(X i \u2212 \u00b5 i )(X j \u2212 \u00b5 j )]", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "where X i is the ith component of the feature vector and \u00b5 i is the mean of that component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The next step is to calculate the eigenvectors and eigenvalues of the covariance matrix by solving the eigenvalue equation. The eigenvalues are the variances along the direction of the eigenvectors or \"Principal Components\". To project our data set onto 2D space we select the two eigenvectors' largest associated eigenvalue and project our data set onto this subspace.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
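The two steps above can be sketched with NumPy (an illustration on random data, not the paper's feature vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))   # 500 feature vectors in R^10

# Covariance matrix of the centred data (cf. equation 1)
Xc = X - X.mean(axis=0)
K = np.cov(Xc, rowvar=False)

# Eigendecomposition; eigh returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(K)

# Project onto the two eigenvectors with the largest eigenvalues
X2d = Xc @ eigvecs[:, -2:]
print(X2d.shape)  # (500, 2)
```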
|
{ |
|
"text": "In Figure 8 we see the result of running PCA on the wikipedia data set where we have used character level bi-grams as features, as well as the CBOW and skipgram models from FastText.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the figure for encoding with character level bi-grams, the PCA algorithm resulted in two elongated clusters. Without giving any prior information about the language of each sentences, PCA is apparently able to discriminate between Danish, Swedish, Nynorsk and Bokm\u00e5l on one side, and Faroese and Icelandic on the other, since the majority of the sentences in each language belong to either of these two clusters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "With the FastText implementations we observe three clusters. For both CBOW and skipgram we see a distinct cluster of Swedish sentences. When comparing the two FastText models we see that the t-SNE algorithm with skipgrams seems to be able to separate Faroese and Icelandic data points to a high degree compared with the CBOW model. For the cluster of Danish, Bokm\u00e5l, and Nynorsk sentences the skipgram models seem to give a better separation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis 7.1 Visualisation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Embedding method (van der Maaten and Hinton, 2008) favours retaining local relationships over remote ones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In t-SNE, for a given data point x i , the probability of picking another data point x j as a neighbor to x i is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p ji = exp(||x i \u2212 x j || 2 /2\u03c3 2 i ) k =i exp(||x i \u2212 x k || 2 /2\u03c3 2 i )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
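Equation 2 can be checked numerically with a small NumPy sketch (note the negative sign in the exponent; here \u03c3_i is held fixed instead of being tuned per point via perplexity, as full t-SNE would do):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 3))   # six data points in R^3
sigma = 1.0                   # fixed bandwidth (t-SNE tunes this per point)

i = 0                         # conditional probabilities p_{j|i} for point 0
d2 = np.sum((x[i] - x) ** 2, axis=1)      # squared Euclidean distances
w = np.exp(-d2 / (2 * sigma ** 2))
w[i] = 0.0                                # a point is not its own neighbour
p = w / w.sum()
print(round(p.sum(), 6))  # 1.0
```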
|
{ |
|
"text": "Given this probability distribution the goal is to find the low-dimensional mapping of the data points x i which we denote y i follow a similar distribution. To solve what is referred to as the \"crowding problem\", t-SNE uses the Student t-distribution which is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "q_{ij} = \\frac{(1 + ||y_i - y_j||^2)^{-1}}{\\sum_{k \\neq l} (1 + ||y_k - y_l||^2)^{-1}}", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This distribution is optimized using gradient descent on the Kullback-Leibler divergence between the two distributions; the gradient is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\\frac{\\delta C}{\\delta y_i} = 4 \\sum_j (p_{ij} - q_{ij})(y_i - y_j)(1 + ||y_i - y_j||^2)^{-1} \\qquad (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
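The three quantities defined above, the input probabilities (Equation 2), the Student-t map probabilities (Equation 3), and the KL-divergence gradient (Equation 4), can be sketched directly in NumPy. This is an illustrative sketch of the standard t-SNE definitions (including the usual symmetrisation of the conditional probabilities), not the authors' implementation; the toy data is invented.

```python
import numpy as np

def pairwise_sq_dists(Z):
    """Squared Euclidean distances ||z_i - z_j||^2 for all pairs."""
    return np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)

def p_joint(X, sigma):
    """Input probabilities. The conditional p_{j|i} (Equation 2) is a
    Gaussian kernel normalised per row with the diagonal excluded;
    t-SNE then symmetrises: p_ij = (p_{j|i} + p_{i|j}) / 2n."""
    n = X.shape[0]
    P = np.exp(-pairwise_sq_dists(X) / (2 * sigma[:, None] ** 2))
    np.fill_diagonal(P, 0.0)                    # sum runs over k != i
    P /= P.sum(axis=1, keepdims=True)
    return (P + P.T) / (2 * n)

def q_joint(Y):
    """Map probabilities q_ij (Equation 3): a Student-t kernel with
    one degree of freedom, normalised over all pairs k != l."""
    Q = 1.0 / (1.0 + pairwise_sq_dists(Y))
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()

def kl_gradient(P, Q, Y):
    """Gradient of the KL divergence w.r.t. the map points (Equation 4)."""
    W = (P - Q) / (1.0 + pairwise_sq_dists(Y))  # (p_ij - q_ij)(1+||.||^2)^-1
    diff = Y[:, None, :] - Y[None, :, :]        # y_i - y_j
    return 4.0 * np.sum(W[:, :, None] * diff, axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 10))   # 6 toy high-dimensional points
Y = rng.normal(size=(6, 2))    # their 2-D map positions
P = p_joint(X, sigma=np.ones(6))
Q = q_joint(Y)
G = kl_gradient(P, Q, Y)       # one descent step would be Y -= eta * G
```

In the full algorithm the per-point bandwidths sigma_i are tuned to a target perplexity rather than fixed to one as here.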
|
{ |
|
"text": "t-SNE results over the Wikipedia data sets can be seen in Figure 9. As was the case with PCA, the FastText encodings appear to capture the most relevant information for discriminating between the languages; the skipgram model especially does well at capturing information relevant to this task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 66, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
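A plot like Figure 9 can be produced with an off-the-shelf t-SNE implementation. The sketch below uses scikit-learn's TSNE; the random vectors and language labels are invented stand-ins for the FastText sentence embeddings.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-ins for FastText sentence embeddings: 60 vectors of dimension 100,
# with a hypothetical language label for each block of 20 vectors.
emb = rng.normal(size=(60, 100))
labels = ["da"] * 20 + ["sv"] * 20 + ["is"] * 20

# Project to 2-D; perplexity must be smaller than the number of samples.
coords = TSNE(n_components=2, perplexity=10, init="random",
              random_state=0).fit_transform(emb)
# coords[:, 0] and coords[:, 1] can now be scattered, coloured by `labels`.
```

Because random vectors carry no language signal, this sketch only demonstrates the pipeline shape, not the clustering itself.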
|
{ |
|
"text": "Here we recover some interesting information about the similarity of the languages. The data points for Bokm\u00e5l lie between those for Danish and Nynorsk, while Icelandic and Faroese form their own two separate clusters. This fits speaker intuitions about these languages. Interestingly, the Swedish data points are quite scattered, and t-SNE does not form a coherent Swedish cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This does not, however, mean that the Swedish data points are not close in the original space. Some care is needed when interpreting the plot, since t-SNE groups data points such that neighbors in the input space tend to be neighbors in the low-dimensional space, while distances between clusters carry little meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t-SNE The T-distributed Stochastic Neighbor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The dimensionality reduction techniques applied, PCA and t-SNE, were able to cluster the input sentences into three main language categories: (1) Danish-Nynorsk-Bokm\u00e5l; (2) Faroese-Icelandic;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "(3) Swedish. Generally the supervised models made the most errors when discriminating between languages belonging to either of these language groups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Among the \"classical\" models, Logistic Regression and the SVM achieved better performance than KNN and Naive Bayes, with Naive Bayes performing worst. This held in all cases, irrespective of the method of feature extraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7.2" |
|
}, |
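One of these classical baselines can be sketched in a few lines of scikit-learn: character bi-gram counts fed to logistic regression. The toy sentences and labels below are invented for illustration and are not the paper's training data.

```python
# A minimal sketch of a "classical" baseline: character bi-gram counts
# fed to logistic regression. The paper trains on far larger Wikipedia
# data; these four sentences only illustrate the pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sents = [
    "hvordan har du det",            # Danish
    "jeg kan ikke lide det",         # Danish
    "hur mar du i dag",              # Swedish
    "jag tycker om att simma",       # Swedish
]
train_langs = ["da", "da", "sv", "sv"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 2)),  # char bi-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(train_sents, train_langs)
pred = clf.predict(["jag tycker om dig"])
```

Swapping `LogisticRegression` for `LinearSVC`, `KNeighborsClassifier`, or `MultinomialNB` reproduces the other classical baselines on the same features.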
|
{ |
|
"text": "Additionally, the classification models achieved better results when using feature vectors from the FastText skipgram model than when using either FastText CBOW or character n-grams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Generally, increasing the number of data points led to better performance. However, the CNN performed better than any of the other models even when trained on fewer data points. In this sense the CNN achieves higher sample efficiency than the other models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "This paper presented a dataset, baseline approaches, and analyses for automatically distinguishing similar Nordic languages. We visualized embeddings produced by character-level bi-grams, CBOW and skipgram. We argue that, of these, FastText's skipgram embeddings capture the most information for discriminating between languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Data and code are available at https://github. com/renhaa/NordicDSL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "As baselines, we compared four classical models (KNN, logistic regression, Naive Bayes, and a linear SVM) with two neural network architectures (a multilayer perceptron and a convolutional neural network). The two best-performing models, FastText supervised and the CNN, saw reduced performance when going off-domain. Using character n-grams as features instead of words increased performance for the FastText supervised classifier. Training on multiple domains resulted in an expected performance increase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.04606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Language ID in the wild: Unexpected challenges on the path to a thousand-language web text corpus", |
|
"authors": [ |
|
{ |
|
"first": "Isaac", |
|
"middle": [], |
|
"last": "Caswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Breiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daan", |
|
"middle": [], |
|
"last": "van Esch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.14571" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isaac Caswell, Theresa Breiner, Daan van Esch, and Ankur Bapna. 2020. Language id in the wild: Unexpected challenges on the path to a thousand-language web text corpus. arXiv preprint arXiv:2010.14571.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bornholmsk natural language processing: Resources and tools", |
|
"authors": [ |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Speed Kjeldsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "338--344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leon Derczynski and Alex Speed Kjeldsen. 2019. Bornholmsk natural language processing: Re- sources and tools. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 338-344.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Microblog-genre noise and impact on semantic annotation accuracy", |
|
"authors": [ |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Maynard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niraj", |
|
"middle": [], |
|
"last": "Aswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 24th ACM Conference on Hypertext and Social Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leon Derczynski, Diana Maynard, Niraj Aswani, and Kalina Bontcheva. 2013. Microblog-genre noise and impact on semantic annotation accuracy. In Pro- ceedings of the 24th ACM Conference on Hypertext and Social Media, pages 21-30.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Discriminating similar languages: Evaluations and explorations", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "L\u00e9ger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyril Goutte, Serge L\u00e9ger, Shervin Malmasi, and Mar- cos Zampieri. 2016. Discriminating similar lan- guages: Evaluations and explorations. In Proceed- ings of Language Resources and Evaluation (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Understanding convolutional neural networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Jacovi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [ |
|
"Sar" |
|
], |
|
"last": "Shalom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--65", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5408" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alon Jacovi, Oren Sar Shalom, and Yoav Goldberg. 2018. Understanding convolutional neural networks for text classification. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 56-65, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.01759" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The lacunae of Danish natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kirkedal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalie", |
|
"middle": [], |
|
"last": "Schluter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "356--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Kirkedal, Barbara Plank, Leon Derczynski, and Natalie Schluter. 2019. The lacunae of danish natural language processing. In Proceedings of the 22nd Nordic Conference on Computational Linguis- tics, pages 356-362.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "NLTK: The Natural Language Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics. Philadelphia: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Cross-domain feature selection for language identification", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th international joint conference on natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "553--561", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Pro- ceedings of 5th international joint conference on nat- ural language processing, pages 553-561.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Langid.py: An off-the-shelf language identification tool", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the ACL 2012 System Demonstrations, ACL '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Lui and Timothy Baldwin. 2012. Langid.py: An off-the-shelf language identification tool. In Pro- ceedings of the ACL 2012 System Demonstrations, ACL '12, pages 25-30, Stroudsburg, PA, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Visualizing data using t-SNE", |
|
"authors": [ |
|
{ |
|
"first": "Laurens", |
|
"middle": [], |
|
"last": "Van Der Maaten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "2579--2605", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Cross-corpus native language identification via statistical embedding", |
|
"authors": [ |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Rangel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Brooke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Uitdenbogerd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Second Workshop on Stylistic Variation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "39--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francisco Rangel, Paolo Rosso, Julian Brooke, and Alexandra L Uitdenbogerd. 2018. Cross-corpus native language identification via statistical embed- ding. In Proceedings of the Second Workshop on Stylistic Variation, pages 39-43.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A reproduction of Apple's bi-directional LSTM models for language identification in short strings", |
|
"authors": [ |
|
{ |
|
"first": "Mads", |
|
"middle": [], |
|
"last": "Toftrup", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00f8ren", |
|
"middle": [ |
|
"Asger" |
|
], |
|
"last": "S\u00f8rensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Ciosici", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ira", |
|
"middle": [], |
|
"last": "Assent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the EACL Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mads Toftrup, S\u00f8ren Asger S\u00f8rensen, Manuel R. Ciosici, and Ira Assent. 2021. A reproduction of Apple's bi-directional LSTM models for language identification in short strings. In Proceedings of the EACL Student Research Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A report on the DSL shared task 2014", |
|
"authors": [ |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liling", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljubesic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcos Zampieri, Liling Tan, Nikola Ljubesic, and J\u00f6rg Tiedemann. 2014. A report on the dsl shared task 2014. In VarDial@COLING.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Overview of the DSL shared task 2015", |
|
"authors": [ |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liling", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljube\u0161ic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcos Zampieri, Liling Tan, Nikola Ljube\u0161ic, J\u00f6rg Tiedemann, and Preslav Nakov. 2015. Overview of the dsl shared task 2015. In Joint Workshop on Lan- guage Technology for Closely Related Languages, Varieties and Dialects, page 1.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Confusion matrix with results from langid.py on the full Wikipedia data set", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Confusion matrix with results from the CNN on the full Wikipedia data set.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Confusion matrix with results from a supervised FastText model on the full Wikipedia data set.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "The Tatoeba data set mainly consists of sentences written in everyday language. Below are some examples from the Danish part of Tatoeba. Hvordan har du det? (How are you?) P\u00e5 trods af al sin rigdom og ber\u00f8mmelse, er han ulykkelig. (Despite all his riches and renown, he is unlucky.) Vi fl\u00f8j over Atlanterhavet. (We flew over the Atlantic Ocean.) Jeg kan ikke lide aeg. (I don't like eggs.) Folk som ikke synes at latin er det smukkeste sprog, har intet forst\u00e5et. (People who don't think Latin is the most beautiful language have understood nothing.) 1 tatoeba.org/ (a) Distribution of the number of sentences in each language in the Tatoeba data set. (b) Distribution of the length of sentences in the Tatoeba data set.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Distribution of the lengths and language classes of Tatoeba sentences.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"text": "4a shows the number of sentences in each language in the Tatoeba data set.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF6": { |
|
"text": "Confusion matrix for FastText trained using only character level n-grams on the Wikipedia data set and evaluated on the Tatoeba data set.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF7": { |
|
"text": "Results for FastText trained w. char n-grams on Wikipedia+Tatoeba and evaluated on Tatoeba.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF8": { |
|
"text": "Distribution of sentence lengths Tatoeba test set along with the mis-classified sentences.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF10": { |
|
"text": "Dimensionality reduction using PCA", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF12": { |
|
"text": "Dimensionality reduction using t-SNE", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"text": "Overview of results for the data set with 10K data points in each language.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"text": "Overview of results for the neural network models for the data set with 10K data points in each language.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"text": "Overview of results for the \"classical\" ML models on the Wikipedia data set with 50K data points in each language.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Model</td><td>Encoding</td><td>Accuracy</td></tr><tr><td>MLP</td><td>char bi-gram</td><td>0.918</td></tr><tr><td>CNN</td><td>char bi-gram</td><td>0.970</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"text": "Overview of results for the MLP and CNN on the Wikipedia data set with 50K data points in each language.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |