{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:51.008566Z"
},
"title": "Implementation of Supervised Training Approaches for Monolingual Word Sense Alignment: ACDH-CH System Description for the MWSA Shared Task at GlobaLex 2020",
"authors": [
{
"first": "Baj\u010deti\u0107",
"middle": [],
"last": "Lenka",
"suffix": "",
"affiliation": {
"laboratory": "Austrian Centre for Digital Humanities and Cultural Heritage Vienna",
"institution": "",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yim",
"middle": [],
"last": "Seung-Bin",
"suffix": "",
"affiliation": {
"laboratory": "Austrian Centre for Digital Humanities and Cultural Heritage Vienna",
"institution": "",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our system for monolingual sense alignment across dictionaries. The task of monolingual word sense alignment is presented as a task of predicting the relationship between two senses. We will present two solutions, one based on supervised machine learning, and the other based on pre-trained neural network language model, specifically BERT. Our models perform competitively for binary classification, reporting high scores for almost all languages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our system for monolingual sense alignment across dictionaries. The task of monolingual word sense alignment is presented as a task of predicting the relationship between two senses. We will present two solutions, one based on supervised machine learning, and the other based on pre-trained neural network language model, specifically BERT. Our models perform competitively for binary classification, reporting high scores for almost all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper presents our submission for the shared task on monolingual word sense alignment across dictionaries as part of the GLOBALEX 2020 -Linked Lexicography workshop at the 12th Language Resources and Evaluation Conference (LREC). Monolingual word sense alignment (MWSA) is the task of aligning word senses across resources in the same language. Lexical-semantic resources (LSR) such as dictionaries form valuable foundation of numerous natural language processing (NLP) tasks. Since they are created manually by experts, dictionaries can be considered among the resources of highest quality and importance. However, the existing LSRs in machine readable form are small in scope or missing altogether. Thus, it would be extremely beneficial if the existing lexical resources could be connected and expanded. Lexical resources display considerable variation in the number of word senses that lexicographers assign to a given entry in a dictionary. This is because the identification and differentiation of word senses is one of the harder tasks that lexicographers face. Hence, the task of combining dictionaries from different sources is difficult, especially for the case of mapping the senses of entries, which often differ significantly in granularity and coverage. (Ahmadi et al., 2020) There are three different angles from which the problem of word sense alignment can be addressed: approaches based on the similarity of textual descriptions of word senses, approaches based on structural properties of lexical-semantic resources, and a combination of both. (Matuschek, 2014) In this paper we focus on the similarity of textual descriptions. This is a common approach as the majority of previous work used some notion of similarity between senses, mostly gloss overlap or semantic relatedness based on glosses. This makes sense, as glosses are a prerequisite for humans to recognize the meaning of an encoded sense, and thus also an intuitive way of judging the similarity of senses. (Matuschek, 2014) The paper is structured as follows: we provide a brief overview of related work in Section 2, and a description of the corpus in Section 3. In Section 4 we explain all important aspects of our model implementation, while the results are presented in Section 5. Finally, we end the paper with the discussion in Section 6 and conclusion in Section 7.",
"cite_spans": [
{
"start": 1273,
"end": 1294,
"text": "(Ahmadi et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 1568,
"end": 1585,
"text": "(Matuschek, 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1994,
"end": 2011,
"text": "(Matuschek, 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Similar work in monolingual word sense alignment has previously been done mostly for one language in mind, for example (Henrich et al., 2014) , (Sultan et al., 2015) and (Caselli et al., 2014) . Researchers avoid modeling features according to a specific resource pair, but aim to combine generic features which are applicable to a variety of resources. One example is the work of (Matuschek and Gurevych, 2014) on alignment between Wiktionary and Wikipedia using distances calculated with Dijkstra-WSA, an algorithm which works on graph representations of resources, as well as gloss similarity values. Recent work in monolingual corpora linking includes (Mc-Crae and Buitelaar, 2018) which utilizes state-of-the-art methods from the NLP task of semantic textual similarity and combines them with structural similarity of ontology alignment. Since our work is focusing on similarity of textual descriptions, it is worth mentioning that there have been lots of advances in natural language processing with pre-trained contextualized language representations relying on large corpora (Devlin et al., 2018) , which have been delivering improvements in a variety of related downstream tasks, such as word sense disambiguation (Scarlini et al., 2020) and question answering (Yang et al., 2019) . However, we could not find any related work leveraging the newest advances with neural network language models (NNLM) for monolingual word sense alignment. For this reason we have chosen to implement our classifiers based on two approaches: one which is feature-based, and the other one using pretrained NNLMs.",
"cite_spans": [
{
"start": 119,
"end": 141,
"text": "(Henrich et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 144,
"end": 165,
"text": "(Sultan et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 170,
"end": 192,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 1083,
"end": 1104,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1223,
"end": 1246,
"text": "(Scarlini et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 1270,
"end": 1289,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The dataset used to train and test our models was compiled specifically with this purpose in mind (Ahmadi et al., 2020) . The complete corpus for the shared task consists of sixteen datasets from fifteen European languages. 1 The gold standard was obtained by manually classifying the level of semantic similarity between two definitions from two resources for the same lemma. The data was given in four columns: lemma, part-of-speech (POS) tag and two definitions for the lemma. The fifth column which the system aims to predict contains the semantic relationship between definitions. This falls in one of the five following categories: EXACT, BROADER, NARROWER, RELATED, NONE. The data was collected as follows: a subset of entries with the same lemma is chosen from the two dictionaries and a spreadsheet is created containing all the possible combinations of definitions from the entries. Experts are then asked to go through the list and choose the level of semantic similarity between each pair. This has created a huge number of pairs which have no relation, and thus the dataset is heavily imbalanced in favor of NONE class. Two challenges caused by the skewness of data were identified. Firstly, the models should be able to deal with underrepresented semantic relations. Secondly, evaluation metrics should consider the imbalanced distribution. Table 1 displays the distribution of relations between two word definitions and the imbalance of the labels in the training data. We have implemented several ways to battle this, such as undersampling and oversampling, as well as doubling the broader, narrower, exact and related class by relying on their property of symmetry, or applying ensemble learning methods, such as random forest.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Ahmadi et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 224,
"end": 225,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1355,
"end": 1362,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3."
},
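To make the data format concrete, here is a minimal sketch of loading one shared-task file and inspecting the label imbalance described above; the file name and column names are assumptions based on the four-column description, not an official loader.

```python
# Hypothetical loader for a shared-task TSV file; columns follow the
# description above: lemma, POS, two definitions, and the gold label.
import pandas as pd

COLUMNS = ["lemma", "pos", "def_1", "def_2", "relation"]
df = pd.read_csv("english_mwsa_train.tsv", sep="\t", names=COLUMNS)

# Shows the heavy skew towards NONE discussed in the text.
print(df["relation"].value_counts(normalize=True))
```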
{
"text": "We aimed to explore the advantages of two different approaches, so we created two different versions of our system. One is the more standard, feature-based approach, and the other is a more novel approach with pre-trained neural language models, specifically BERT (Devlin et al., 2018) . The novel approach was used for English and German dataset, in addition to the feature based approach. 4.1. Feature-based models 4.1.1. Preprocessing Firstly, we loaded the datasets and mitigated imbalanced distribution of relation labels by swapping the two definitions and thus doubling the data samples for related labels, i.e. BROADER, NARROWER, EXACT, RELATED. For example, one English data sample for English head word follow has the definition pair \"keep to\" and \"to copy after; to take as an example\" and the relation \"narrower\". We swap the order of definition pair and change the relation to \"broader\". An outcome of this swapping process is the generalisation of the dataset. Since two definitions are from different dictionaries, features derived by comparing the two sets of definitions is dependent on the dictionaries.",
"cite_spans": [
{
"start": 264,
"end": 285,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Implementation",
"sec_num": "4."
},
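A minimal sketch of this doubling step, assuming the pandas layout from the earlier loading example: swapping the definitions maps BROADER to NARROWER and vice versa, while EXACT and RELATED are symmetric and keep their label.

```python
# Sketch of doubling the under-represented classes by symmetry.
# Assumes a DataFrame with def_1, def_2 and relation columns.
import pandas as pd

INVERSE = {"broader": "narrower", "narrower": "broader",
           "exact": "exact", "related": "related"}

def double_related_samples(df: pd.DataFrame) -> pd.DataFrame:
    related = df[df["relation"].str.lower().isin(INVERSE)].copy()
    # Swap the two definition columns and invert the relation label.
    related[["def_1", "def_2"]] = related[["def_2", "def_1"]].to_numpy()
    related["relation"] = related["relation"].str.lower().map(INVERSE)
    return pd.concat([df, related], ignore_index=True)
```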
{
"text": "By swapping the definitions, more general features can be calculated, since the columns contain definitions of two dictionaries, instead of one. This aspect could make the trained feature-based models more robust against new dictionaries. After doubling the data samples, we applied upsampling to match the number of samples of NONE category. For linguistic preprocessing, the definitions were tokenized using Spacy 2 for English and German, and NLTK 3 for other languages. For languages other than English and German, stopwords were removed from the definitions, in order to create word embedding models. Word vectors included in Spacy language models were used for English and German. We have compiled stopword lists for all languages using several resources found on the Web. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Implementation",
"sec_num": "4."
},
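As an illustration of the preprocessing for the languages other than English and German, the sketch below combines NLTK tokenization with a custom stopword list; the stopword file name is a placeholder for the lists compiled from the Web resources in footnote 4.

```python
# Illustrative preprocessing: NLTK tokenization plus removal of a
# compiled stopword list ('stopwords_xx.txt' is a placeholder).
from nltk.tokenize import word_tokenize

with open("stopwords_xx.txt", encoding="utf-8") as f:
    STOPWORDS = {line.strip() for line in f if line.strip()}

def preprocess(definition: str) -> list:
    tokens = word_tokenize(definition.lower())
    return [t for t in tokens if t.isalpha() and t not in STOPWORDS]
```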
{
"text": "Since many of the languages in the dataset have very few open-source resources and tools, and of uncertain quality, the features used are mostly based on word embeddings. The word embeddings were trained using the sets of definitions provided and the Word2Vec (Mikolov et al., 2013) model from gensim(\u0158eh\u016f\u0159ek and Sojka, 2010) Python library. To calculate the vector of a definition we used the average of word embeddings of consisting tokens. Sentence similarity was calculated with different similarity measures, namely cosine distance, Jaccard similarity, and word mover distance (WMD). For English and German, we used Spacy's built-in language models for word embeddings. The English language model used, en core web lg has 685k unique vectors over 300 dimensions, while the German model, de core news md has 20k unique vectors over 300 dimensions. Additionally, similarity calculation based on contextualized word representation ELMo (Peters et al., 2018) was used for English to model semantic differences depending on the context. We selected a different set of features for each classification model from the features described below. Complete list of features used by each classification model is shown in Table 4 . Overall, we used the following features:",
"cite_spans": [
{
"start": 260,
"end": 282,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 938,
"end": 959,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 1214,
"end": 1222,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "4.1.2."
},
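The following sketch illustrates the embedding-based similarity features described above: a Word2Vec model trained on the pooled definitions, definition vectors obtained by averaging token embeddings, and the cosine, Jaccard and word mover's distance measures. Hyperparameters are illustrative, not the submitted configuration, and `preprocess()` is the helper from the earlier sketch.

```python
# Illustrative embedding features; assumes gensim >= 4.0 and the
# preprocess() function defined above.
import numpy as np
from gensim.models import Word2Vec

def train_w2v(definitions):
    sentences = [preprocess(d) for d in definitions]
    return Word2Vec(sentences, vector_size=100, min_count=1, epochs=20)

def avg_vector(model, tokens):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine_sim(model, toks_1, toks_2):
    v1, v2 = avg_vector(model, toks_1), avg_vector(model, toks_2)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2 / denom) if denom else 0.0

def jaccard_sim(toks_1, toks_2):
    s1, s2 = set(toks_1), set(toks_2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def wmd(model, toks_1, toks_2):
    # Word mover's distance; needs gensim's optional POT dependency.
    return model.wv.wmdistance(toks_1, toks_2)
```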
{
"text": "\u2022 Statistical features: Difference in length of definitions was added as a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "4.1.2."
},
{
"text": "\u2022 Similarity measures based features: In addition to the word embedding comparisons between the word definition pair, we calculated similarity of the most similar word to the headword by calculating cosine similarity for list of word embeddings of tokens of definitions excluding stopwords and headword word embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "4.1.2."
},
{
"text": "\u2022 Part-of-speech based features: We included one-hot encoded POS of the headword, as well as difference in POS count of two definitions as features. The POS count was not done for most languages as we were not certain in the quality of existing POS-taggers. \u2022 Lexico-syntactic features: One feature exploiting the structure of definitions was to compare the first token of definitions for equality. We also counted matching lemma in the pair of sentences and normalized by the combined length of sentences. Normalization was applied, because we wanted how much overlap exists between two definitions with respect to the length. Without normalization, longer definitions might tend to have higher number of matching lemma. Depth of dependency tree was calculated to add information about structural complexity of definitions. Occurrences of semicolons were also added, since lots of definitions were comprised of multiple short definitions concatenated by semicolon. Additionally, Root word of dependency trees were compared for each definition pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "4.1.2."
},
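A sketch of these lexico-syntactic features, implemented with spaCy for English; the feature names are our own shorthand for the descriptions above, not the system's actual identifiers.

```python
# Illustrative lexico-syntactic features using spaCy's English model.
import spacy

nlp = spacy.load("en_core_web_lg")

def tree_depth(doc) -> int:
    # Maximum distance from any token to the root of its sentence.
    def depth(tok):
        d = 0
        while tok.head is not tok:
            tok, d = tok.head, d + 1
        return d
    return max((depth(t) for t in doc), default=0)

def lexico_syntactic_features(def_1: str, def_2: str) -> dict:
    d1, d2 = nlp(def_1), nlp(def_2)
    lemmas_1 = {t.lemma_ for t in d1 if not t.is_punct}
    lemmas_2 = {t.lemma_ for t in d2 if not t.is_punct}
    return {
        "first_word_same": int(d1[0].text.lower() == d2[0].text.lower()),
        # Lemma overlap normalized by the combined definition length.
        "lemma_match_norm": len(lemmas_1 & lemmas_2) / (len(d1) + len(d2)),
        "root_word_same": int(next(d1.sents).root.lemma_
                              == next(d2.sents).root.lemma_),
        "tree_depth_diff": abs(tree_depth(d1) - tree_depth(d2)),
        "semicolon_count": def_1.count(";") + def_2.count(";"),
    }
```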
{
"text": "\u2022 Word sense based features: WordNet 5 was used to count the number of synsets of headwords. Average count of synsets were also added as feature. It was calculated by simply counting synsets for each token of definitions in wordnet and taking the average. These features were used for English only, due to the availability of its primary resource, WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "4.1.2."
},
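A minimal sketch of the WordNet-based features (English only), assuming NLTK's WordNet interface:

```python
# Illustrative WordNet features via NLTK (English only).
from nltk.corpus import wordnet as wn

def synset_features(headword: str, def_tokens: list) -> dict:
    counts = [len(wn.synsets(t)) for t in def_tokens]
    return {
        "target_word_synset_count": len(wn.synsets(headword)),
        "average_synset_count": sum(counts) / len(counts) if counts else 0.0,
    }
```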
{
"text": "Standardization was applied for some features,length difference, pos count difference, and cosine simlarities prior to training some machine learning models in order to bring the features to similar scale to the other features. Standardization was done by applying Scikit-learn Standard-scaler, which calculates the standardized value of feature by taking the difference of the feature value to the mean value and dividing it by standard deviation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "4.1.2."
},
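A minimal sketch of this standardization step; the column names are assumptions for illustration.

```python
# Standardization of selected continuous features with scikit-learn:
# z = (x - mean) / std, with statistics taken from the training set.
from sklearn.preprocessing import StandardScaler

SCALED = ["length_difference", "pos_count_difference", "cosine_sim"]

def standardize(X_train, X_test, columns=SCALED):
    scaler = StandardScaler()
    X_train[columns] = scaler.fit_transform(X_train[columns])
    # Reuse the training-set mean and standard deviation at test time.
    X_test[columns] = scaler.transform(X_test[columns])
    return X_train, X_test
```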
{
"text": "We tried several machine learning models, mostly from scikit learn 6 library for Python: logistic regression, support vector machine, random forest classifier, and decision tree. Classification models were trained by tuning hyperparameters with grid search over 5-fold cross-validation. The hyperparameters used for the submitted models are listed in Table 6 . Due to imbalanced nature of the datasets, we have used balanced accuracy and weighted f1-measure for model evaluation. For languages other than English and German, we have ultimately settled for the random forest classifier as it has consistently given the best results.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Classification Models",
"sec_num": "4.1.3."
},
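A sketch of this model-selection step; the grid shown is a small illustrative subset of the values in Table 6, not the full search space.

```python
# Illustrative grid search for the random forest classifier with
# 5-fold cross-validation, scored by weighted F1 as discussed above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [50, 100, 300, 500],
    "max_depth": [7, 10, 30],
    "min_samples_leaf": [2, 3, 5],
    "min_samples_split": [2, 5, 8, 10],
}

def select_model(X_train, y_train):
    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          param_grid, cv=5, scoring="f1_weighted")
    search.fit(X_train, y_train)
    return search.best_estimator_
```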
{
"text": "For English and German, we additionally fine-tuned pre-trained neural network language models(NNLM), BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) in particular, using simpletransformers 7 on top of pretrained models provided by transformers python 8 libraries on Google Cloud Platform 9 . In general, applications of pre-trained language models to downstream tasks can be categorized into feature-based and fine-tuning based approaches. Recently, BERT (Devlin et al., 2018) , which stands for Bidirectional Encoder Representations from Transformers, have been proven to be beneficial for improving different downstream NLP tasks.",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 140,
"end": 158,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 465,
"end": 486,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning of Pre-trained Neural Network Language Models",
"sec_num": "4.2."
},
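A minimal sketch of this fine-tuning setup with simpletransformers, using the English hyperparameters and label weights from Table 5; the integer label encoding and the toy training frame are our own assumptions, and argument names can differ between library versions.

```python
# Illustrative RoBERTa fine-tuning for sentence-pair classification.
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Assumed integer encoding: NONE=0, EXACT=1, BROADER=2, NARROWER=3,
# RELATED=4; the weight order must match this encoding (Table 5).
label_weights = [0.23, 2.08, 42.05, 5.37, 32.69]

# Toy training frame using the 'follow' example from Section 4.1.1.
train_df = pd.DataFrame({
    "text_a": ["keep to"],
    "text_b": ["to copy after; to take as an example"],
    "labels": [3],  # NARROWER
})

model = ClassificationModel(
    "roberta", "roberta-large",
    num_labels=5,
    weight=label_weights,
    args={"max_seq_length": 256, "train_batch_size": 16,
          "num_train_epochs": 2, "learning_rate": 9e-6,
          "weight_decay": 0.3},
)
model.train_model(train_df)
```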
{
"text": "BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers and is trained on masked word prediction and next sentence prediction tasks. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks (Devlin et al., 2018) . Sun et al. (2020) present different approaches to fine-tune BERT for downstream tasks, including pre-training on indomain data, multi-task fine-tuning and different layers and learning rates. MWSA task can be ultimately regarded as sentence pair classification task and BERT can be easily fine-tuned for it, since its use of self-attention mechanism (Vaswani et al., 2017) to encode concatenated text pair effectively includes bidirectional cross attention between two sentences. We follow the fine-tuning approach presented in the original paper (Devlin et al., 2018) We have experimented with different pre-trained models, such as BERT Base, BERT Large and RoBERTa for English, which claims to have improved original BERT models by tweaking different aspects of pre-training, such as bigger data and batches, omitting of next sentence prediction, training on longer sequences and changing the masking pattern (Liu et al., 2019) . For German, we used the models published by deepset.ai 10 and Bavarian State Library 11 . The training was done on NVIDIA Tesla P100 GPU, different parameter settings have been tried out to find the best performing model for each NNLM. Due to the size of the pre-trained language models and limitations in computation powers, we were only able to explore hyperparameter combinations selectively. Different pre-trained language models were used and were evaluated in the early phase of the experiments, to limit the parameter exploration space. Evaluation of the models were done by comparing Matthews Correlation Coefficient, accuracy and cross entropy. We monitored the three metrics also during training to determine when the model starts to overfit and adjusted hyperparameters for further tuning. It quickly turned out that bigger pre-trained models deliver better results. The tendency that bigger pre-trained models perform better on MWSA is in line with observations made by the original BERT paper authors by comparing BERT Base and Large for different downstream tasks (Devlin et al., 2018) , or RoBERTa performing better than original BERT on selected downstream tasks (Liu et al., 2019) . For this reason, we have conducted more hyperparameter test combinations for those models(RoBERTa Large for English, and DBMDZ for German). When using bigger models, such as RoBERTa or BERT Large, smaller train-batch-size was selected due to resource limitation. Original BERT models were trained with 512 sequence length, but since the MWSA datasets mostly have short sentence pairs, we experimented with shorter sequence length of 128 and 256 to save memory usage and be more flexible with respect to batch size. Complete list of parameter values tested and the values of the submitted models are shown in Table 5 .",
"cite_spans": [
{
"start": 387,
"end": 408,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 411,
"end": 428,
"text": "Sun et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 761,
"end": 783,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 958,
"end": 979,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1322,
"end": 1340,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 2421,
"end": 2442,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 2522,
"end": 2540,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 3151,
"end": 3158,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Fine-tuning of Pre-trained Neural Network Language Models",
"sec_num": "4.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w c = total # of samples # labels \u00d7 # datasamples of c",
"eq_num": "(1)"
}
],
"section": "Fine-tuning of Pre-trained Neural Network Language Models",
"sec_num": "4.2."
},
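As a worked example of Equation 1, with hypothetical label counts (the submitted weights in Table 5 were derived from the real training counts):

```python
# Worked example of Equation (1): w_c = total / (num_labels * count_c).
# The counts below are hypothetical and sum to 1000 for readability.
counts = {"NONE": 850, "EXACT": 85, "BROADER": 10,
          "NARROWER": 35, "RELATED": 20}
total = sum(counts.values())   # 1000
num_labels = len(counts)       # 5
weights = {c: total / (num_labels * n) for c, n in counts.items()}
# NONE -> 1000 / (5 * 850) ~= 0.24; BROADER -> 1000 / (5 * 10) = 20.0
print(weights)
```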
{
"text": "With appropriate hyperparameters, English and German classifiers based on BERT (German) and RoBERTa (English) showed convergence with repsect to the Cross-entropy loss function. Classes were weighted according to the distribution for loss calculation. The weight for label class C, w c is determined inversely proportional to label frequencies shown in equation 1. The values used for training is listed in Table 5 5. Results",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Fine-tuning of Pre-trained Neural Network Language Models",
"sec_num": "4.2."
},
{
"text": "Results of our MWSA models are presented in Table 2 , including baseline models for each language provided by the organizers. In this section we explain the evaluation measures proposed by the organizers for model evaluation and review the results of the two approaches we have explored, feature-based MWSA and fine-tuning NNLM.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 51,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Fine-tuning of Pre-trained Neural Network Language Models",
"sec_num": "4.2."
},
{
"text": "The final submission was evaluated in terms of five class prediction accuracy, as well as binary classification scored with precision, recall, and F-measure. Binary evaluation metrics are calculated by considering relation labels BROADER, NARROWER, RELATED and EXACT as one class of label and NONE classified pairs as the other class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5.1."
},
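The binary scores can be reproduced with scikit-learn by collapsing the labels, as in this sketch:

```python
# Collapse the four semantic relations into one positive class and
# score against NONE, mirroring the binary evaluation described above.
from sklearn.metrics import precision_recall_fscore_support

def to_binary(labels):
    return ["RELATED" if l != "NONE" else "NONE" for l in labels]

def binary_scores(y_true, y_pred):
    p, r, f1, _ = precision_recall_fscore_support(
        to_binary(y_true), to_binary(y_pred),
        pos_label="RELATED", average="binary")
    return p, r, f1
```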
{
"text": "In addition, the organizers provide an average grade over all languages participated in. Our system participated for all languages excluding Hungarian and Spanish, and the results can be seen in Table 1 . We argue that due to the imbalanced datasets, 5-class accuracy without balancing cannot adequately represent the model qualities and should only be interpreted holistically together with binary evaluation measures. For example, English baseline model has 5-class accuracy of 0.752, but 2-class F1-measure of 0.0 which indicates that the model is classifying the most of the definition pairs as none-related. The ratio of none related pairs in English training dataset(85%) supports this interpretation. While our both English models show similar 5-class accuracy with respect to the base classifier, they have higher 2-class f1-score, thus higher 2-class precision and recall. Table 3 additionally shows the result of our feature-based English model and RoBERTa based model in comparison with NONE classifier, which classifies all pairs as NONE. It shows that all three models have similar (5-class) accuracy with 0.76, 0.77 and 0.76. Thus, the measure is not sufficient to represent the difference in quality of the models, which can be assumed to exist when looking into the precision and recall for each label. Macro averaged or weighted averaged metrics show that our models perform better. We argue that for future work of MWSA weighted f1-measure or balanced accuracy should be used for adequate evaluation of imbalanced 5-class datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 882,
"end": 889,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5.1."
},
{
"text": "Our interpretation of the evaluation metrics indicates that our monolingual word sense alignment models show best overall performance for majority of languages. English and German pre-trained NNLM based models perform particularly well, while feature-based models delivered competitive overall results. Feature-based models showed good results especially in terms of binary recall and f1-measure. However, they perform poorly when it comes to binary precision and the results vary for five-class accuracy. Aside from the peculiar aspect of 5-class accuracy for this task described above, there are several reasons for this variety in results. All the models are dependent on the quality and size of their corresponding datasets. Also our sampling strategies to deal with imbalanced data may have caused the models to overfit certain patterns of definitions pairs having some kind of relations (BROADER, NARROWER, EXACT, RELATED) and classified some of NONE-related pairs as being related, which could explain high recall and low precision. Another important aspect is the availability and quality of tools for semantic parsing and lexical resources for all the languages.",
"cite_spans": [
{
"start": 893,
"end": 928,
"text": "(BROADER, NARROWER, EXACT, RELATED)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Result Interpretation and Model Comparison",
"sec_num": "5.2."
},
{
"text": "To investigate the results in more detail we present precision, recall, f1-measure for label predictions of English model in Table 3 . We can see that the model fails in detecting BROADER, NARROWER, and RELATED class, while performing moderately in detecting EXACT relations. The BERT based models for English and German performed well in all binary evaluation measures, with English RoBERTa model placing first out of five teams in all three binary evaluation measures. There was no submission from other teams for German, thus no detailed analysis was possible. Nevertheless the German BERT based model outperformed the base model and achieved relatively high scores in binary precision and f-measure. For both languages the neural language model based approaches outperformed feature-based classifiers in all binary evaluation metrics. The English RoBERTa model is on par with the random forest classifier in terms of 5-class accuracy and precision, but outperforms it when it comes to binary recall and binary 2-class f-measure by significant margins. Different to the feature-based classifier, the NNLM based model manages to classify some of the NARROWER relations correctly (Table 3 , but precision and recall are still very low. Confusion matrix showed that the model tends to classify NARROWER relations as EXACT. In contrast to English random forest model, German feature-based classifier cannot compete with the neural language model in all evaluation metrics, lack of more sophisticated features used by English feature-based classifier, such as ELMo sentence embedding or wordnet based features are possible reasons. However, the pre-trained German language model is pretrained on smaller dataset ( 16GB of data) than English (RoBERTa: 160GB), thus it is to assume there might be room for improvement of both approaches. For English models, which we have investigated more in detail, we can clearly see the correlation between number of data samples in each category and the performance of the models on those categories. BROADER and RELATED relations were only trained on 10 and 20 samples respectively, which we believe is too little to model pattern variety of complex natural language expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 3",
"ref_id": null
},
{
"start": 1181,
"end": 1189,
"text": "(Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result Interpretation and Model Comparison",
"sec_num": "5.2."
},
{
"text": "As previously mentioned, an important property of the provided datasets is the extreme imbalance in the favor of NONE class. For future work, it would be useful to acquire more examples of the classes less represented in the dataset. Since classifiers are prone to overfitting, it would be useful to expand the datasets with definitions extracted from more dictionaries. This way it would be easier to get a more gen-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Features-based RoBERTa-based Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Support BROADER 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3 NARROWER 0.00 0.00 0.00 0.00 0.00 0.00 0.14 0.17 0.16 29 EXACT 0.00 0.00 0.00 0.44 0.60 0.51 0.47 0.74 0.58 85 RELATED 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Table 3 : Evaluation results of test set prediction by English models. NONE classifier predicts all labels to NONE eral and robust classifier. Our feature-based models showed that differentiating exact semantic relation is a difficult task, especially NARROWER and EXACT relations get mixed up by the English model, more work on methodologies to distinguish these relations will help to improve 5-class accuracy. A different idea to consider would be to opt for specific classifiers for each pairing of two dictionaries, where features used could be dictionary-dependant and possibly more precise, e.g. numbers of semicolons or other formatting aspects which are dictionary-specific. Another possible issue we identified for this task is that dictionary definitions have different or atypical language usage in terms of structure of sentences, term occurrences, additional information expressed with symbols, such as semicolons, hyphens. For this reason, we think that building language models based on multiple dictionaries might help to further increase accuracy of the models. For German and English we demonstrated that fine-tuning neural network language models outperform the featurebased approaches. Considering that the pre-trained models were trained on more general corpora, further studies involving pre-training on dictionary data and further fine-tuning different aspects described in (Sun et al., 2020) might lead to improvements of the models.",
"cite_spans": [
{
"start": 1696,
"end": 1714,
"text": "(Sun et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "NONE classifier",
"sec_num": null
},
{
"text": "In this paper we describe our system submission for the Monolingual Word Sense Alignment shared task at Globalex 2020. Our solution consists of a separate random forest classifier trained for each language, while a BERTbased solution is implemented for English and German. The feature-based classifiers perform competitively for binary classification and employing fine-tuning of pre-trained BERT models for monolingual word sense alignment is showing promising results and should be investigated further. 1e-6, 8e-6, 9e-6, 1e-5, 3e-5, 4e-5,5e-5 9e-6 3e-5 3 3 3 auto log2 2 auto 3 3 3 3 3 3 max-depth 10 10 10 30 10 10 30 10 10 7 10 10 10 min-samples-leaf 3 3 5 5 2 3 3 4 3 3 3 3 3 min-samples-split 10 2 10 8 5 2 8 2 8 5 2 5 8 n-estimators 100 100 100 500 300 50 500 100 200 50 50 100 100 ",
"cite_spans": [],
"ref_spans": [
{
"start": 556,
"end": 794,
"text": "3 3 3 auto log2 2 auto 3 3 3 3 3 3 max-depth 10 10 10 30 10 10 30 10 10 7 10 10 10 min-samples-leaf 3 3 5 5 2 3 3 4 3 3 3 3 3 min-samples-split 10 2 10 8 5 2 8 2 8 5 2 5 8 n-estimators",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "The dataset is still growing, and the current version can be found here: https://github.com/elexis-eu/MWSA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spacy.io/ 3 https://www.nltk.org/ 4 https://github.com/Xangis/extra-stopwords and https://www.rdocumentation.org/packages/stopwords/versions/0.1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://wordnet.princeton.edu/ 6 https://scikit-learn.org/stable/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ThilinaRajapakse/simpletransformers 8 https://huggingface.co/transformers/index.html 9 https://cloud.google.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://deepset.ai/german-bert 11 https://github.com/dbmdz/berts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Multilingual Evaluation Dataset for Monolingual Word Sense Alignment",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ahmadi",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Mccrae",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nimb",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Troelsgard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Olsen",
"suffix": ""
},
{
"first": "B",
"middle": [
"S"
],
"last": "Pedersen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Declerck",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wissik",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Monachini",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bellandi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Pisani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Krek",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lipp",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Varadi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gyorffy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tiberius",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schoonheim",
"suffix": ""
},
{
"first": "Y",
"middle": [
"B"
],
"last": "Moshe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rudich",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Ahmad",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lonke",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kovalenko",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Langemets",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kallas",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dereza",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fransen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cillessen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lindemann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Alonso",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Salgado",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Sancho",
"suffix": ""
},
{
"first": "R.-J",
"middle": [],
"last": "Urena-Ruiz",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Simov",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Kancheva",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Stankovic",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lazic",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Markovic",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Perdih",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gabrovsek",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resource and Evaluation Conference (LREC 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmadi, S., McCrae, J. P., Nimb, S., Troelsgard, T., Olsen, S., Pedersen, B. S., Declerck, T., Wissik, T., Mona- chini, M., Bellandi, A., Khan, F., Pisani, I., Krek, S., Lipp, V., Varadi, T., Simon, L., Gyorffy, A., Tiberius, C., Schoonheim, T., Moshe, Y. B., Rudich, M., Ah- mad, R. A., Lonke, D., Kovalenko, K., Langemets, M., Kallas, J., Dereza, O., Fransen, T., Cillessen, D., Linde- mann, D., Alonso, M., Salgado, A., Sancho, J. L., Urena- Ruiz, R.-J., Simov, K., Osenova, P., Kancheva, Z., Radev, I., Stankovic, R., Krstev, C., Lazic, B., Markovic, A., Perdih, A., and Gabrovsek, D. (2020). A Multilingual Evaluation Dataset for Monolingual Word Sense Align- ment. In Proceedings of the 12th Language Resource and Evaluation Conference (LREC 2020).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aligning an italian wordnet with a lexicographic dictionary: Coping with limited data",
"authors": [
{
"first": "T",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Vieu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vetere",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caselli, T., Strapparava, C., Vieu, L., and Vetere, G. (2014). Aligning an italian wordnet with a lexicographic dictio- nary: Coping with limited data.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Aligning word senses in germanet and the dwds dictionary of the german language",
"authors": [
{
"first": "V",
"middle": [],
"last": "Henrich",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hinrichs",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barkey",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henrich, V., Hinrichs, E., and Barkey, R. (2014). Aligning word senses in germanet and the dwds dictionary of the german language.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized BERT pretrain- ing approach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "High performance word sense alignment by joint modeling of sense distance and gloss similarity",
"authors": [
{
"first": "M",
"middle": [],
"last": "Matuschek",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COL-ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matuschek, M. and Gurevych, I. (2014). High perfor- mance word sense alignment by joint modeling of sense distance and gloss similarity. In Proceedings of COL- ING 2014, the 25th International Conference on Com- putational Linguistics: Technical Papers.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Word Sense Alignment of Lexical Resources",
"authors": [
{
"first": "M",
"middle": [],
"last": "Matuschek",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matuschek, M. (2014). Word Sense Alignment of Lexical Resources. Ph.D. thesis, Technischen Universitat Darm- stadt.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linking datasets using semantic textual similarity. Cybernetics and information technologies",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Mccrae",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCrae, J. P. and Buitelaar, P. (2018). Linking datasets using semantic textual similarity. Cybernetics and infor- mation technologies, 18.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, et al., editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "M",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextu- alized word representations. In Proc. of NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "R",
"middle": [],
"last": "Reh\u016f\u0159ek",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reh\u016f\u0159ek, R. and Sojka, P. (2010). Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta, May. ELRA. http://is.muni.cz/ publication/884893/en.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SensEm-BERT: Context-Enhanced Sense Embeddings for Multilingual Word Sense Disambiguation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Scarlini",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pasini",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scarlini, B., Pasini, T., and Navigli, R. (2020). SensEm- BERT: Context-Enhanced Sense Embeddings for Multi- lingual Word Sense Disambiguation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Featurerich two-stage logistic regression for monolingual alignment",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Sultan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sultan, M. A., Bethard, S., and Sumner, T. (2015). Feature- rich two-stage logistic regression for monolingual align- ment. In Proceedings of the 2015 Conference on Empir- ical Methods in Natural Language Processing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "How to Fine-Tune BERT for Text Classification?",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.05583"
]
},
"num": null,
"urls": [],
"raw_text": "Sun, C., Qiu, X., Xu, Y., and Huang, X. (2020). How to Fine-Tune BERT for Text Classification? arXiv:1905.05583 [cs], February. arXiv: 1905.05583.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "End-to-End Open-Domain Question Answering with BERTserini",
"authors": [
{
"first": "W",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {
"arXiv": [
"arXiv:1902.01718"
]
},
"num": null,
"urls": [],
"raw_text": "Yang, W., Xie, Y., Lin, A., Li, X., Tan, L., Xiong, K., Li, M., and Lin, J. (2019). End-to-End Open-Domain Question Answering with BERTserini. Proceedings of the 2019 Conference of the North, pages 72-77. arXiv: 1902.01718.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Label distribution of training datasets",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"text": "Comparison of evaluation Results of MWSA from the final evaluation",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"text": "FeatureEU BG DA NL EN ET DE GA IT PT RU SR SL",
"num": null,
"html": null,
"content": "<table><tr><td>cosine sim</td><td>O</td><td/><td>O</td><td>O</td><td/><td>O</td><td>O</td><td>O</td><td/><td>O</td><td/><td>O</td><td/></tr><tr><td>jaccard sim</td><td>O</td><td/><td>O</td><td>O</td><td/><td>O</td><td>O</td><td>O</td><td/><td>O</td><td/><td>O</td><td/></tr><tr><td>tfidf similarity</td><td>O</td><td>O</td><td>O</td><td>O</td><td/><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td></tr><tr><td>elmo similarity</td><td/><td/><td/><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>similarity diff to target</td><td/><td/><td/><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>first word same</td><td>O</td><td>O</td><td>O</td><td>O</td><td/><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td></tr><tr><td>root word same</td><td>O</td><td>O</td><td>O</td><td>O</td><td/><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td></tr><tr><td>length difference</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td></tr><tr><td>pos count difference</td><td/><td/><td/><td>O</td><td>O</td><td/><td>O</td><td/><td/><td/><td/><td/><td/></tr><tr><td>target pos</td><td/><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td></tr><tr><td>lemma match count</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td><td>O</td></tr><tr><td>pos count</td><td/><td/><td/><td>O</td><td>O</td><td/><td>O</td><td/><td/><td/><td/><td/><td/></tr><tr><td>dep. tree depth</td><td/><td/><td/><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>target word synset count</td><td/><td/><td/><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>average synset count</td><td/><td/><td/><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>semicolon count</td><td/><td/><td/><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF6": {
"text": "Features used for each classifier, with language codes according to ISO 639-1",
"num": null,
"html": null,
"content": "<table><tr><td>Parameter</td><td>value set</td><td>English</td><td>German</td></tr><tr><td>used model</td><td>BERT English(Large) German BERT(deepset.ai, DBMDZ cased)</td><td>RoBERTa(Large)</td><td>DBMDZ German BERT</td></tr><tr><td/><td/><td>NONE: 0.23</td><td>NONE: 0.27</td></tr><tr><td/><td/><td>EXACT: 2.08</td><td>EXACT: 2.74</td></tr><tr><td>label weights</td><td/><td>BROADER: 42.05</td><td>BROADER: 2.31</td></tr><tr><td/><td/><td>NARROWER:5.37</td><td>NARROWER:3.13</td></tr><tr><td/><td/><td>RELATED:32.69</td><td>RELATED:8.32</td></tr><tr><td>max-seq-length</td><td>64, 128, 256, 512</td><td>256</td><td>256</td></tr><tr><td>train-batch-size</td><td>8, 16, 32</td><td>16</td><td>32</td></tr><tr><td>num-train-epochs</td><td>2,3,5,7,10,15</td><td>2</td><td>7</td></tr><tr><td>weight-decay</td><td>0.3, 0.5</td><td>0.3</td><td>0.3</td></tr><tr><td>learning-rate</td><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "Language model and Hyperparameters used for fine-tuning NNLM to MWSA",
"num": null,
"html": null,
"content": "<table><tr><td>Parameter</td><td>EU BG DA NL</td><td>EN ET DE GA</td><td>IT</td><td>PT RU SR</td><td>SL</td></tr><tr><td>max-features</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF8": {
"text": "",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}