|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:14:17.372537Z" |
|
}, |
|
"title": "N-gram and Neural Models for Uralic Language Identification: NRC at VarDial 2021", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Bernier-Colborne", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Research Council", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "L\u00e9ger", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Research Council", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Research Council", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The goal of the Uralic language identification (ULI) shared task at VarDial 2021 (Chakravarthi et al., 2021) was to identify and discriminate 29 Uralic language varieties, among a total of 178 language varieties from various families, given a short text (typically a sentence). This was a re-run of the ULI task at VarDial 2020 (G\u0203man et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 108, |
|
"text": "(Chakravarthi et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 348, |
|
"text": "(G\u0203man et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We experimented with two different approaches to the ULI task. The first is a probabilistic classifier that exploits only character 5-grams as features. The second is a deep learning approach based on character embeddings and a transformer network (Vaswani et al., 2017) , which is first pretrained through self-supervision, then fine-tuned on the ULI task, in a similar fashion to BERT (Devlin et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 270, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 408, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This second approach is essentially an improved version of the one developed by the NRC team for the first run of the ULI task (Bernier-Colborne and Goutte, 2020) . By improving the sampling functions used to sample training and evaluation data, and making a few other small changes to the model, we were able to achieve much better results. However, our best results were achieved using the simpler approach, based on character 5-grams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 162, |
|
"text": "Goutte, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we explain these two approaches to language identification and compare the results obtained on the ULI task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The ULI data and task are described by Jauhiainen et al. (2020) . The goal of the task is to identify the language of a short text, typically a single sentence. If more than one language is used (e.g. code switching), the main language must be identified. This was a closed task, so the only data that could be used was the training set provided. The data contain both relevant and non-relevant languages. The 29 relevant languages are part of the Uralic group of languages, spoken mainly in northern Eurasia. Some are very under-resourced, and some are already extinct. Besides these, there are 149 non-relevant languages, belonging to various language families. These include the three largest Uralic languages, i.e. Estonian, Finnish, and Hungarian. In total, the data covers 178 language varieties, which is the highest number covered so far in a language identification shared task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 63, |
|
"text": "Jauhiainen et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The training set contains 646,043 examples for the relevant languages and 63,772,445 examples for the non-relevant ones. So there is about 100 times more data for the non-relevant languages, and there are 5 times more non-relevant languages than relevant ones, therefore the number of examples is about 20 times greater for non-relevant languages, on average. Both parts of the training set are highly unbalanced. The class frequencies of relevant languages range from 19 to 214,225, and those of non-relevant ones range from 10,000 to 3,000,000.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The task was divided into three tracks, the difference being the way in which the evaluation metric is computed. For tracks 1 and 2, the evaluation metrics are the macro-averaged F1 and micro-averaged F1 respectively, and it only considers examples where either the predicted label or the true label is a relevant language. For track 3, the metric is macro-averaged F1, computed over all examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Task Definition", |
|
"sec_num": "2" |
|
}, |
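
{

"text": "To make the three evaluation settings concrete, the following minimal sketch (our illustration, not the official scoring script) computes all three track metrics with scikit-learn; the RELEVANT set is a hypothetical stand-in for the 29 relevant labels:\n\nfrom sklearn.metrics import f1_score\n\nRELEVANT = {'fit', 'olo', 'vep'}  # hypothetical stand-in for the 29 relevant labels\n\ndef track_scores(y_true, y_pred):\n    # Tracks 1 and 2: keep only examples where the true or predicted label is relevant\n    kept = [(t, p) for t, p in zip(y_true, y_pred) if t in RELEVANT or p in RELEVANT]\n    t_rel, p_rel = zip(*kept)\n    track1 = f1_score(t_rel, p_rel, average='macro')\n    track2 = f1_score(t_rel, p_rel, average='micro')\n    # Track 3: macro-averaged F1 over all examples\n    track3 = f1_score(y_true, y_pred, average='macro')\n    return track1, track2, track3",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data and Task Definition",

"sec_num": "2"

},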
|
{ |
|
"text": "Some of the challenges of this task were highlighted by Bernier-Colborne and Goutte (2020) and Jauhiainen et al. (2020) . These include: the presence of low-resource and closely-related language varieties; the large size of the training set; the large and complex class imbalances in the training set; and the absence of an official development set, which, while a valid design choice for this competition, added a degree of complexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 90, |
|
"text": "Goutte (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 119, |
|
"text": "Jauhiainen et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We evaluated two approaches to this task, which we describe in this section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The first approach employs a probabilistic classifier (Gaussier et al., 2002) , that we call Probcat, which we trained using character 5-grams as features. The classifier is similar to multinomial Naive Bayes except that it does not assume that all n-grams in a given text are generated from a single class. It has been used in the past to obtain state-of-the-art results on language identification tasks (Goutte and L\u00e9ger, 2016) . For more details on this classification algorithm, refer to Goutte et al. (2014, Sec. 2.2) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 77, |
|
"text": "(Gaussier et al., 2002)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 429, |
|
"text": "(Goutte and L\u00e9ger, 2016)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 522, |
|
"text": "Goutte et al. (2014, Sec. 2.2)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probcat", |
|
"sec_num": "3.1" |
|
}, |
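
{

"text": "Probcat itself is not publicly released; as a rough illustration of this type of n-gram classifier (a multinomial naive Bayes stand-in, not the actual Probcat model), a minimal scikit-learn sketch over character 5-grams might look as follows:\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.pipeline import make_pipeline\n\n# Toy training data; the real task uses millions of labeled sentences\ntexts = ['tere tulemast koju', 'hyvaa paivaa kaikille']\nlabels = ['est', 'fin']\n\n# Character 5-grams, including n-grams that span word boundaries\nmodel = make_pipeline(\n    CountVectorizer(analyzer='char', ngram_range=(5, 5)),\n    MultinomialNB(),  # stand-in: Probcat drops the single-class assumption\n)\nmodel.fit(texts, labels)\nprint(model.predict(['tervetuloa kotiin']))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probcat",

"sec_num": "3.1"

},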
|
{ |
|
"text": "For reference, running inference on the official test, which contains over 1.5 million examples, takes about four and a half hours using four Intel Xeon CPUs @ 2.6 GHz, and could be reduced by using more CPUs. Training takes about five and a half hours. Memory requirement is below 32 GB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probcat", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The second approach is a deep learning approach, which the NRC team has previously applied, with very different levels of success, to Cuneiform language identification (Bernier-Colborne et al., 2019) and Uralic language identification (Bernier-Colborne and Goutte, 2020) . In the former case, it produced a winning submission, whereas in the latter, it was well under the baseline, because of a flaw in the methods used to sample training and evaluation data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 199, |
|
"text": "(Bernier-Colborne et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 270, |
|
"text": "Goutte, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The model is a deep neural network which takes sequences of characters as input. Characters are embedded and fed through a stack of bidirectional transformers (Vaswani et al., 2017) , which encodes the sequence. The output of this encoder is a sequence of hidden state vectors (one per input character), which is then fed to various output heads (or modules) during training. The model is trained in two stages: selfsupervised pre-training on a masked language modeling (MLM) task (Devlin et al., 2019) , followed by supervised fine-tuning on the target task, i.e. language identification. For MLM, the objective is to predict one or more randomly chosen characters in the input sequence, which are replaced with a special masking symbol before the embedding stage. This produces a model that can predict characters in context for any of the languages used for training, and must therefore have learned some of the specific surface regularities of each. The output head for this task is a softmax over the vocabulary (or alphabet), which takes as input the encoding (i.e. final hidden state) of a masked character, and the loss is cross-entropy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 181, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 502, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
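
{

"text": "As an illustration of the masking step (a minimal sketch; the 15% masking rate is BERT's default and an assumption here, and mask_characters is a hypothetical helper, not our actual code):\n\nimport random\n\nMASK, UNK = '[MASK]', '[UNK]'\n\ndef mask_characters(text, vocab, mask_prob=0.15):\n    # Map out-of-vocabulary characters to UNK, then replace randomly chosen\n    # characters with the masking symbol; the (position, character) pairs\n    # are the prediction targets for the MLM head\n    corrupted, targets = [], []\n    for i, c in enumerate(text):\n        c = c if c in vocab else UNK\n        if random.random() < mask_prob:\n            corrupted.append(MASK)\n            targets.append((i, c))\n        else:\n            corrupted.append(c)\n    return corrupted, targets",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformer",

"sec_num": "3.2"

},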
|
{ |
|
"text": "After pre-training, we fine-tune for language identification by keeping the pre-trained encoder, discarding the MLM head, and replacing it with a head containing: an average pooling layer, that averages the final hidden states of the encoder to produce a fixed-length encoding of the sentence (Reimers and Gurevych, 2019) ; followed by a relu activation; and finally a softmax over the 178 languages. Again, the loss is cross-entropy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 321, |
|
"text": "(Reimers and Gurevych, 2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
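
{

"text": "A minimal PyTorch sketch of this output head (the intermediate linear layer before the relu is an assumption of this illustration; the softmax is folded into the cross-entropy loss):\n\nimport torch\nimport torch.nn as nn\n\nclass LangIDHead(nn.Module):\n    def __init__(self, hidden_size=768, num_languages=178):\n        super().__init__()\n        self.dense = nn.Linear(hidden_size, hidden_size)\n        self.classifier = nn.Linear(hidden_size, num_languages)\n\n    def forward(self, hidden_states, attention_mask):\n        # hidden_states: (batch, seq_len, hidden); average only over\n        # non-padding positions to get a fixed-length sentence encoding\n        mask = attention_mask.unsqueeze(-1).float()\n        pooled = (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)\n        return self.classifier(torch.relu(self.dense(pooled)))  # logits over 178 languages",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformer",

"sec_num": "3.2"

},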
|
{ |
|
"text": "The vocabulary (or alphabet) contains every character that appears more than once in the training portion of the train/dev/test split we created (see Section 4.1). Characters that are not in this vocab are replaced with a special symbol reserved for unknown characters, before the encoding stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
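
{

"text": "A sketch of this vocabulary construction (assuming 'appears more than once' means a count of at least two):\n\nfrom collections import Counter\n\ndef build_vocab(train_texts, min_count=2):\n    # Keep every character that appears more than once in the training split\n    counts = Counter(c for text in train_texts for c in text)\n    return {c for c, n in counts.items() if n >= min_count}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformer",

"sec_num": "3.2"

},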
|
{ |
|
"text": "The hyperparameter settings we used largely follow the recommendations of Devlin et al. (2019) , except that we use fewer layers than their base architecture (to reduce training time and memory requirements): During pre-training, examples were sampled from our training set using the frequency-based sampling function described by Bernier-Colborne and Goutte (2020) . We refer the reader to this paper for the full equations. In essence, we compute the relative frequencies of relevant and nonrelevant languages separately, damp each of the two resulting distributions using exponent \u03b1, multiply the distribution of relevant languages by coefficient \u03b3, then combine the two distributions and re-normalize. We arbitrarily set \u03b1 = 1 and \u03b3 = 1, which means that relevant and non-relevant examples are sampled in approximately equal proportion, so the relevant languages will end up being sampled about 5 times more frequently than non-relevant ones, on average, as there are about 5 times fewer relevant languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 94, |
|
"text": "Devlin et al. (2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 365, |
|
"text": "Goutte (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
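
{

"text": "A minimal sketch of this frequency-based sampling function, following the verbal description above rather than the exact code of Bernier-Colborne and Goutte (2020):\n\nimport numpy as np\n\ndef sampling_probs(counts, is_relevant, alpha=1.0, gamma=1.0):\n    counts = np.asarray(counts, dtype=float)\n    relevant = np.asarray(is_relevant, dtype=bool)\n    probs = np.zeros_like(counts)\n    for group, weight in ((relevant, gamma), (~relevant, 1.0)):\n        freq = counts[group] / counts[group].sum()  # relative frequencies per group\n        damped = freq ** alpha                      # damp with exponent alpha\n        probs[group] = weight * damped / damped.sum()\n    return probs / probs.sum()                      # combine and re-normalize",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformer",

"sec_num": "3.2"

},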
|
{ |
|
"text": "During fine-tuning, we experimented with various functions to sample the labeled training examples from the training set, including this frequencybased function. The other functions we experimented with were:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Class-wise uniform sampling", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Accuracy-based sampling using a dynamic estimate of class-wise accuracy. We tested two different functions to estimate this, both based on the same score, which is the class-wise, one-vs-rest binary accuracy on the dev set. If we call the set of class-wise scores m, then the two functions we tested to compute sampling probabilities can be formulated as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "p inv (m i ) = 1 \u2212 m i j (1 \u2212 m j ) p rank (m i ) = rank(m i , m) j rank(m j , m)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where rank(m i , m) returns the rank of m i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
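
{

"text": "A sketch of both accuracy-based functions (the direction of the ranking, which gives the least accurate class the largest rank so that harder classes are sampled more often, is our assumption):\n\nimport numpy as np\n\ndef p_inv(m):\n    # Sample each class in proportion to its dev-set error rate\n    w = 1.0 - np.asarray(m, dtype=float)\n    return w / w.sum()\n\ndef p_rank(m):\n    # Rank 1 = most accurate class, rank n = least accurate\n    m = np.asarray(m, dtype=float)\n    ranks = (-m).argsort().argsort() + 1\n    return ranks / ranks.sum()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformer",

"sec_num": "3.2"

},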
|
{ |
|
"text": "During fine-tuning, the maximum input length was increased to 256; the extra position embeddings are still randomly initialized at this point, and are learned during fine-tuning. To reduce useless computation, rather than padding or truncating all sequences in a given batch to the maximum length of 256, we pad or truncate them to the length of the longest sequence in the batch. This change reduced computation time significantly with respect to our previous implementation, as the vast majority (over 99%) of examples in the ULI training data are shorter than 256 characters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
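
{

"text": "A sketch of this batching strategy (pad_id and the list-of-IDs representation are assumptions of this illustration):\n\ndef pad_batch(sequences, pad_id=0, max_len=256):\n    # Pad or truncate to the longest sequence in the batch, capped at\n    # max_len, instead of always padding to max_len\n    longest = min(max(len(s) for s in sequences), max_len)\n    padded, masks = [], []\n    for s in sequences:\n        s = list(s[:longest])\n        pad = longest - len(s)\n        padded.append(s + [pad_id] * pad)\n        masks.append([1] * len(s) + [0] * pad)\n    return padded, masks",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformer",

"sec_num": "3.2"

},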
|
{ |
|
"text": "Our code, which exploits the Transformers library by HuggingFace (Wolf et al., 2020) , is available is at https://www.github.com/ gbcolborne/vardial2021.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 84, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "All experiments were conducted on a single GPU with 12 GB of memory. Inference on the official test set takes about 70 minutes using a v100. Pretraining took about 10 days, and we fine-tuned for about 3 days.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To evaluate and optimize our two approaches, we first had to create a development set, as none was provided for this shared task, as mentioned in Section 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To sample our development sets, we used the frequency-based sampling function described in Section 3.2, with \u03b1 = 1 and \u03b3 = 1, to sample a 'dev' set containing 20,000 examples and a 'devtest' set containing 100,000 examples. The idea was to use the dev set to tune the hyperparameters of the models, then use the dev-test set to get an unbiased estimate of the accuracy of the fully tuned models. However, due to time constraints, in the case of the transformer model, we ended up adding the dev-test set to the training set, and using the dev set for model selection. We will come back to this in the following section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Making a Development Set", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To optimize Probcat, we compared 4 different ngram lengths, from 2 to 5. The models were trained on our training set and evaluated on both our development sets. Results showed that 5-grams produced the best results, as shown in Table 1 . Note that after the official evaluation, we tried combining various n-gram lengths, but did not observe any improvements on our development sets. We made a single submission using this approach, using only 5-grams as features. The entire training set was used to train the classifier for this submission.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 235, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As for the transformer model, we conducted various experiments involving: optimizing the architecture (e.g. replacing the tanh activation in the pooling layer with a relu activation); developing better sampling functions for training; and tuning a few hyperparameters, such as the learning rate and batch size. We will not present the full results of these experiments here, because of their ad hoc nature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We ended up making a first set of two submissions, and a final set of four submissions. As the results of the latter were better, we will not go into the model selection tests that led to the first set of submissions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For our final model selection experiment, we decided to add the dev-test set to the training set (for lack of time to re-train models on the entire training set), and rely on the dev set evaluation to select models. Note that, because of this reason, the results of these tests are not directly comparable to the dev scores of Probcat.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We focused on the sampling function used to fine-tune the model, and compared the functions described in Section 3.2:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 Frequency-based sampling, with a few different values of \u03b1 and \u03b3, i.e. (\u03b1 = 1, \u03b3 = 1), (\u03b1 = 0.75, \u03b3 = 1), and (\u03b1 = 0.75, \u03b3 = 0.5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 Uniform sampling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 Accuracy-based sampling, using either p inv or p rank to convert class-wise dev-scores to sampling probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We compared a couple different learning rates (i.e. 3e-6 and 1e-5). Models were fine-tuned for a maximum of 820K steps, and validated every 20K steps, with early stopping on the dev set, using either the track 1 or track 3 score as stopping criterion. The results, summarized in Table 2 , suggested frequency-based sampling worked best, but all sampling functions achieved high scores, with accuracy-based sampling working slightly better than uniform sampling. The optimal learning rate was 1e-5.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 286, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We submitted four runs, two of which were tuned (in terms of the sampling function and early stopping) to track 1, and two of which were tuned to track 3:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Track 1, run 1: frequency-based sampling (with \u03b1 = 0.75, \u03b3 = 0.5), early stopped for track 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Track 1, run 2: accuracy-based sampling (with p rank ), early stopped for track 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Track 3, run 1: frequency-based sampling (with \u03b1 = 0.75, \u03b3 = 0.5), early stopped for track 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Track 3, run 2: accuracy-based sampling (with p inv ), early stopped for track 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The scores of our best runs on the official test set are shown in Table 3 . The baseline scores were computed using the HeLI method (Jauhiainen et al., 2017) . Probcat was the only system to beat the baseline on track 2, and one of two systems that beat the baseline on track 1 (along with an ensemble of SVM and naive Bayes models exploiting 3-, 4-, and 5-grams, which scored 0.809 on track 1, but only 0.593 on track 2 according to the leaderboard 1 at the time of writing). The baseline on track 3 has yet to be beaten.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 157, |
|
"text": "(Jauhiainen et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Official Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our results using the transformer model are competitive on tracks 2 and 3, but less so on track 1, which focuses on the macro-averaged F-score on relevant languages. The scores of our final 4 runs of the transformer model are shown in Table 4 . These results suggest our development sets were not very good estimators for model selection, as a model that we (slightly) tuned for track 1 was best on tracks 2 and 3, whereas a model we tuned for track 3 was best on track 1. If we check how many training steps each of the four models did before early stopping, we see that the model that stopped earliest produced the poorest results on the test set: We also see that the model that stopped the latest produced the best results on track 3; since the batch size was 32, that model observed about 23 million samples from the training set (including duplicate samples from rare classes), which is less than the total number of available training samples. Given these observations, it seems likely that we could have obtained better results by fine-tuning longer. Other ways of improving the accuracy of this model include: trying multiple random initializations; exploring the hyperparameter space more extensively or efficiently; fine-tuning on the complete training set; or adapting the model to the unlabeled test examples (e.g. through self-supervised MLM or self-training on the language identification task).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Official Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We could not conduct extensive error analysis because we did not have access to the test labels at the time of writing, but we did inspect the predictions of our systems to try to gain insights on their behaviour or the properties of the data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If we look at the cumulative predicted frequency of the 29 relevant languages, we find that Probcat predicted a relevant language for 21,051 or 1.4% of the 1,510,315 test cases. Since Probcat achieved 0.967 on track 2, this must be very close to the actual number of relevant examples in the test set. Thus, the distribution of relevant vs. non-relevant examples is very different than the one we assumed when we constructed our dev set. Furthermore, the worst of our four transformer runs seems to have over-detected relevant examples, as it predicted a relevant language for 27,901 test cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If we look at the top off-diagonal values in the confusion matrix between Probcat and our best transformer run (i.e. track 1, run 2, which performed best overall) for all 178 languages, assuming Probcat's predictions as ground truth (as it performed better), we find that the 10 most frequently confused pairs (Probcat prediction, transformer prediction) are: If we restrict this analysis to pairs involving a relevant language, we find that the top 10 most confused pairs are: Uralic language, i.e. Standard Estonian or Finnish, which raises the question as to whether the Standard Estonian and Finnish training corpora contain noisy examples actually belonging to a relevant, low-resource language variety.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We manually inspected some of the test cases for which Probcat predicted a language that we understand, i.e. English, and looked for potential sources of errors. Without access to the gold labels, we could not glean much, but we did observe examples where English is used within sentences that are mainly in another language:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Kanta horien artean ezagunenak \"Cocaine\", \"After Midnight\", \"Call Me the Breeze\", \"Travelling Light\" eta \"Sensitive Kind\" dira.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \u5916 \u56fd \u4eba \u4ee5 \u4e3a \u76ae \u86cb \u8981 \u814c \u4ea4 \u5173 \u5e74 \uff0c \u6240 \u4ee5 \u53eb \u6e20Century egg(\u4e16\u7eaa\u86cb)\u3002", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "And examples that are not in Standard English, but a closely related language or variety:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 The GT wis sauld alangside the GTi for a few months, but wis eventually phased oot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 The population increased frae 76,254 inhabitants (1992 census) tae 77,566 (2001 census), an increase o 1.7 %.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Ships o the Hudson's Bay Company wur regular visitors, as wur whalin fleets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We plan on conducting a more extensive error analysis once the gold labels of the test set are made available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thorough surveys of research on language identification are provided by and Zampieri et al. (2020) . Language identification is one of the few tasks in natural language processing where deep learning methods have yet to provide convincing gains in accuracy or robustness. Simpler classifiers based on character n-grams continue to provide state-of-the-art results on many benchmarks. The winning submission by the NRC team to the Cuneiform language identification task at VarDial 2019 was the first time a neural system was ranked first on a language identification shared task . That task involved 7 language varieties, and less complex class imbalances. The results of this Uralic language identification shared task cast doubt on whether a character-based deep neural network can advance the state of the art in settings more representative of real-world applications of language identification, such as crawling the web to automatically compile monolingual corpora for low-resource languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 98, |
|
"text": "Zampieri et al. (2020)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For the Uralic language identification shared task at the 2021 VarDial evaluation campaign, the NRC team evaluated two different approaches: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method ended up being ranked first on two of the three tracks, and outperformed our neural approach on all three tracks. We do not exclude the possibility that deep learning approaches could improve accuracy or robustness on this task, but the results we were able to obtain within the limited time constraints of this shared task suggest that the simpler, n-gram based approach is still a very strong baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://urn.fi/urn:nbn:fi: lb-2020102201", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the organizers for their work developing and running this shared task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Note that several of these involve a non-relevant References Gabriel Bernier-Colborne and Cyril Goutte. 2020. Challenges in neural language identification: NRC at VarDial 2020", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "273--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Note that several of these involve a non-relevant References Gabriel Bernier-Colborne and Cyril Goutte. 2020. Challenges in neural language identification: NRC at VarDial 2020. In Proceedings of the 7th Work- shop on NLP for Similar Languages, Varieties and Dialects, pages 273-282, Barcelona, Spain (Online). International Committee on Computational Linguis- tics (ICCL).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Improving cuneiform language identification with BERT", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Bernier-Colborne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "L\u00e9ger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--25", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-1402" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Bernier-Colborne, Cyril Goutte, and Serge L\u00e9ger. 2019. Improving cuneiform language iden- tification with BERT. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 17-25, Ann Arbor, Michigan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Eswari Rajagopal, Yves Scherrer, and Marcos Zampieri. 2021. Findings of the VarDial Evaluation Campaign 2021", |
|
"authors": [ |
|
{ |
|
"first": "Mihaela", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "G\u0203man", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tudor", |
|
"middle": [], |
|
"last": "Radu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heidi", |
|
"middle": [], |
|
"last": "Ionescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Purschke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Mihaela G\u0203man, Radu Tu- dor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Ruba Priyadharshini, Christoph Purschke, Eswari Rajagopal, Yves Scherrer, and Marcos Zampieri. 2021. Findings of the VarDial Evaluation Campaign 2021. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL, pages 4171- 4186.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A hierarchical model for clustering and categorising documents", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Gaussier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kris", |
|
"middle": [], |
|
"last": "Popat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francine", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 24th BCS-IRSG European Colloquium on IR Research: Advances in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "229--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Gaussier, Cyril Goutte, Kris Popat, and Francine Chen. 2002. A hierarchical model for clustering and categorising documents. In Proceedings of the 24th BCS-IRSG European Colloquium on IR Research: Advances in Information Retrieval, pages 229-247. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Advances in Ngram-based Discrimination of Similar Languages", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "L\u00e9ger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (Var-Dial3)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyril Goutte and Serge L\u00e9ger. 2016. Advances in Ngram-based Discrimination of Similar Languages. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (Var- Dial3), pages 178-184, Osaka, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The NRC system for discriminating similar languages", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "L\u00e9ger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyril Goutte, Serge L\u00e9ger, and Marine Carpuat. 2014. The NRC system for discriminating similar lan- guages. In Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial), pages 139-145, Dublin, Ire- land.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Campaign 2020", |
|
"authors": [ |
|
{ |
|
"first": "Mihaela", |
|
"middle": [], |
|
"last": "G\u0203man", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tudor", |
|
"middle": [], |
|
"last": "Radu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heidi", |
|
"middle": [], |
|
"last": "Ionescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Purschke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihaela G\u0203man, Dirk Hovy, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Christoph Purschke, Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Cam- paign 2020. In Proceedings of the Seventh Work- shop on NLP for Similar Languages, Varieties and Dialects (VarDial).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Uralic Language Identification (ULI) 2020 shared task dataset and the Wanca 2017 corpus", |
|
"authors": [ |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heidi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "688--698", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommi Jauhiainen, Heidi Jauhiainen, Niko Partanen, and Krister Lind\u00e9n. 2020. Uralic Language Identifi- cation (ULI) 2020 shared task dataset and the Wanca 2017 corpus. In Proceedings of the Seventh Work- shop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 688-698.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Evaluating HeLI with non-linear mappings", |
|
"authors": [ |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heidi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "102--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommi Jauhiainen, Krister Lind\u00e9n, and Heidi Jauhi- ainen. 2017. Evaluating HeLI with non-linear map- pings. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 102-108, Valencia, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Automatic Language Identification in Texts: A Survey", |
|
"authors": [ |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "65", |
|
"issue": "", |
|
"pages": "675--782", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Tim- othy Baldwin, and Krister Lind\u00e9n. 2019. Automatic Language Identification in Texts: A Survey. Journal of Artificial Intelligence Research, 65:675-782.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3982--3992", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1410" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "HuggingFace's Transformers: State-of-the", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{

"first": "Patrick",

"middle": [],

"last": "von Platen",

"suffix": ""

},

{

"first": "Clara",

"middle": [],

"last": "Ma",

"suffix": ""

},

{

"first": "Yacine",

"middle": [],

"last": "Jernite",

"suffix": ""

},

{

"first": "Julien",

"middle": [],

"last": "Plu",

"suffix": ""

},

{

"first": "Canwen",

"middle": [],

"last": "Xu",

"suffix": ""

},

{

"first": "Teven",

"middle": [

"Le"

],

"last": "Scao",

"suffix": ""

},

{

"first": "Sylvain",

"middle": [],

"last": "Gugger",

"suffix": ""

},

{

"first": "Mariama",

"middle": [],

"last": "Drame",

"suffix": ""

},

{

"first": "Quentin",

"middle": [],

"last": "Lhoest",

"suffix": ""

},

{

"first": "Alexander",

"middle": [

"M"

],

"last": "Rush",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art natu- ral language processing.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A Report on the Third VarDial Evaluation Campaign", |
|
"authors": [ |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shervin", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Scherrer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Samard\u017ei\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Klyueva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tung-Le", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [ |
|
"Tudor" |
|
], |
|
"last": "Ionescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Butnaru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial). Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcos Zampieri, Shervin Malmasi, Yves Scherrer, Tanja Samard\u017ei\u0107, Francis Tyers, Miikka Silfverberg, Natalia Klyueva, Tung-Le Pan, Chu-Ren Huang, Radu Tudor Ionescu, Andrei Butnaru, and Tommi Jauhiainen. 2019. A Report on the Third VarDial Evaluation Campaign. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial). Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Natural language processing for similar languages, varieties, and dialects: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Scherrer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Natural Language Engineering", |
|
"volume": "26", |
|
"issue": "6", |
|
"pages": "595--612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcos Zampieri, Preslav Nakov, and Yves Scherrer. 2020. Natural language processing for similar lan- guages, varieties, and dialects: A survey. Natural Language Engineering, 26(6):595-612.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Nb attention heads: 12 \u2022 Hidden layer size: 768 \u2022 Feed forward/filter size: 3072 \u2022 Hidden activation: gelu \u2022 Dropout probability: 0.1 \u2022 Optimizer: Adam \u2022 Learning rate (pre-training): 1e-Nb steps (pre-training): 1M \u2022 Warmup steps (pre-training): 10K \u2022 Batch size (pre-training): 64 \u2022 Maximum input length (pre-training): 128 \u2022 Learning rate (fine-tuning): 1e-Batch size (fine-tuning): 32 \u2022 Maximum input length (fine-tuning): 256" |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "" |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "that are notoriously hard to distinguish, e.g. Croatian and Bosnian." |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Best F-scores of transformer model on dev set, with respect to sampling function.", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>: F-scores of our best runs and the baseline sys-</td></tr><tr><td>tem (i.e. HeLI) on the offical test set.</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "F-scores of our 4 final runs of the transformer model on the offical test set.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |