{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:45.158322Z"
},
"title": "MWSA Task at GlobaLex 2020: RACAI's Word Sense Alignment System using a Similarity Measurement of Dictionary Definitions",
"authors": [
{
"first": "Vasile",
"middle": [],
"last": "P\u0103i\u0219",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Romanian Academy CASA ACADEMIEI",
"location": {
"addrLine": "13 \"Calea 13 Septembrie\"",
"postCode": "050711",
"settlement": "Bucharest",
"country": "ROMANIA"
}
},
"email": "[email protected]"
},
{
"first": "Dan",
"middle": [],
"last": "Tufi\u0219",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Romanian Academy CASA ACADEMIEI",
"location": {
"addrLine": "13 \"Calea 13 Septembrie\"",
"postCode": "050711",
"settlement": "Bucharest",
"country": "ROMANIA"
}
},
"email": "[email protected]"
},
{
"first": "Radu",
"middle": [],
"last": "Ion",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Romanian Academy CASA ACADEMIEI",
"location": {
"addrLine": "13 \"Calea 13 Septembrie\"",
"postCode": "050711",
"settlement": "Bucharest",
"country": "ROMANIA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes RACAI's word sense alignment system, which participated in the Monolingual Word Sense Alignment shared task organized at GlobaLex 2020 workshop. We discuss the system architecture, some of the challenges that we faced as well as present our results on several of the languages available for the task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes RACAI's word sense alignment system, which participated in the Monolingual Word Sense Alignment shared task organized at GlobaLex 2020 workshop. We discuss the system architecture, some of the challenges that we faced as well as present our results on several of the languages available for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The \"Monolingual Word Sense Alignment\" (MWSA) task aimed at identifying a degree of similarity between word definitions across multiple dictionaries, in the same language. For this purpose, a corpus (Ahmadi et al., 2020) was provided for multiple languages. For each language, word senses from two distinct dictionaries were extracted and participating systems had to classify the relationship between the senses in one of five categories: \"exact\", \"broader\", \"narrower\", \"related\" or \"none\".",
"cite_spans": [
{
"start": 199,
"end": 220,
"text": "(Ahmadi et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Each provided entry in the evaluation set contains the following information: the lemma associated with the two definitions (the definiendum), the part of speech, two fields corresponding to the first and second dictionary entries (the definientia). Additionally, in the training set the relationship label is also provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Given this information, the task can be seen either as a word sense disambiguation problem, considering the sense of the definiendum in each of the definitions, or as a sentence similarity problem, considering the relatedness of the two definitions if they were sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Word sense disambiguation (WSD) is the ability to identify the meaning of words in context in a computational manner (Navigli, 2009) . This is an extremely hard problem, previously described as an AIcomplete problem (Mallery, 1988) , equivalent to solving central problems of artificial intelligence. This happens because difficult disambiguation issues can be resolved only based on knowledge. For the purpose of the MWSA task, a WSD approach will consider at each step the definiendum and its two contexts as expressed by the dictionary definitions.",
"cite_spans": [
{
"start": 117,
"end": 132,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 216,
"end": 231,
"text": "(Mallery, 1988)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sentence similarity aims at computing a similarity measure between two sentences based on meanings and semantic content. For this purpose, the two definitions are treated like sentences and their meaning is compared. In this case the definiendum is not directly used, only the meaning expressed by the definiens being considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The present paper presents our system developed in the context of the MWSA shared task. We start by presenting related research, then continue with the implementation of our system and finally present concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Word sense disambiguation is a very old task in natural language processing. Already in 1940s it is viewed as a fundamental task of machine translation (Weaver, 1949) . Early systems employed manually created lists of disambiguation rules (Rivest, 1987) . The power of these systems was demonstrated in the first Senseval competition (Kilgarriff, 2000) , where decision lists were the most successful techniques employed (Yarowsky, 2000) .",
"cite_spans": [
{
"start": 152,
"end": 166,
"text": "(Weaver, 1949)",
"ref_id": "BIBREF30"
},
{
"start": 239,
"end": 253,
"text": "(Rivest, 1987)",
"ref_id": "BIBREF28"
},
{
"start": 334,
"end": 352,
"text": "(Kilgarriff, 2000)",
"ref_id": null
},
{
"start": 421,
"end": 437,
"text": "(Yarowsky, 2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "One of the earliest attempts at using additional digital resources in the form of machine-readable dictionaries is known as the Lesk algorithm, after its author (Lesk, 1986) . In this case, the dictionary sense of a word having the highest overlap with its context (the most words in common) is considered to be the correct one. A Leskbased similarity measure can also be computed for entire sentences. A survey of different semantic text similarity methods is given in Islam and Inkpen (2008) .",
"cite_spans": [
{
"start": 161,
"end": 173,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF14"
},
{
"start": 470,
"end": 493,
"text": "Islam and Inkpen (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
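{
"text": "To make the overlap idea concrete, the following minimal Python sketch (purely illustrative; the function names are ours and are not taken from any released code) scores two texts by the number of content words they share, and picks the dictionary sense whose gloss overlaps most with a given context:

def lesk_overlap(text_a, text_b, stop_words=frozenset({'a', 'an', 'the', 'of', 'to', 'or', 'and'})):
    '''Number of content words the two texts have in common.'''
    tokens_a = {t for t in text_a.lower().split() if t not in stop_words}
    tokens_b = {t for t in text_b.lower().split() if t not in stop_words}
    return len(tokens_a & tokens_b)

def best_sense(context, sense_glosses):
    '''Index of the dictionary sense whose gloss overlaps most with the context.'''
    return max(range(len(sense_glosses)), key=lambda i: lesk_overlap(context, sense_glosses[i]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},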
{
"text": "With the introduction of the unsupervised distributional representation of words, new sentence similarity measures have become available. These representations are also known as \"word embeddings\" and include GloVe (Pennington et al., 2014) , Skip-gram and CBOW (Bengio et al., 2003) and further refinements such as those described in Bojanowski et al. (2016) . In all of these variants, a unique representation is computed for each word based on all the contexts it appears in. This is not directly usable for WSD since the representation remains the same regardless of the word context. However, short text or sentence similarity measures can be computed by using the word embeddings representation of each word (Kenter and Rijke, 2015) . One of the advantages of using word embeddings representations is the availability of such pre-computed vectors for many languages (Grave et al., 2018) , trained on a mixture of Wikipedia and Common Crawl data. Additionally, on certain languages there are pre-computed vectors available computed on more language representative corpora, such as (P\u0103i\u0219 and Tufi\u0219, 2018) .",
"cite_spans": [
{
"start": 214,
"end": 239,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 261,
"end": 282,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 334,
"end": 358,
"text": "Bojanowski et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 713,
"end": 737,
"text": "(Kenter and Rijke, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 871,
"end": 891,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 1085,
"end": 1107,
"text": "(P\u0103i\u0219 and Tufi\u0219, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
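{
"text": "A minimal sketch of such an embedding-based similarity, assuming a pre-trained word-vector lookup (e.g., fastText vectors) loaded into a dict called 'vectors': each text is represented by the average of its word vectors and the two averages are compared by cosine similarity. Averaging is only one common aggregation choice; more elaborate schemes are discussed in the works cited above.

import numpy as np

def sentence_vector(text, vectors, dim=300):
    '''Average of the available word vectors of a text (zero vector if none).'''
    words = [w for w in text.lower().split() if w in vectors]
    if not words:
        return np.zeros(dim)
    return np.mean([vectors[w] for w in words], axis=0)

def cosine_similarity(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},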
{
"text": "A more recent representation of words is represented by their contextual embeddings. Well-known models of this type are ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) . They provide a word representation in context. Therefore, as opposed to previous embedding models, the word representation is not fixed, but determined based on the actual context the word appears in at runtime.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 156,
"end": 177,
"text": "(Devlin et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Currently such pre-trained representations are not yet available for all languages, but multilingual models do exist, covering multiple languages in the same model, such as (Artetxe and Schwenk, 2019) . Recent studies have confirmed that BERT multilingual models seem to create good representations usable in a large number of experiments, even though concerns have been expressed regarding certain language pairs (Pires et al., 2019) .",
"cite_spans": [
{
"start": 173,
"end": 200,
"text": "(Artetxe and Schwenk, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 414,
"end": 434,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Sentence-BERT (Reimers and Gurevych, 2019) is a system for determining sentence embeddings. These are representations of entire sentences that can be used to assess sentence similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The dataset proposed for the MWSA task is comprised of training and test data for 15 languages. For each of the languages, a tab separated file is available for evaluation containing 4 columns (lemma, part-of-speech, first definition, second definition) with one additional column in the training data (the relatedness of the two definitions). The definitions come from two distinct sources and are related to the word presented in the first column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "As mentioned in the introduction, the definition similarity issue can be considered a sentence similarity problem. However, definitions are usually not regular sentences. Considering the \"English_nuig\" portion of the dataset, which consists of definitions taken from the Princeton English WordNet (Miller, 1995) and the Webster's 1913 dictionary, the following types of definitions can be identified:",
"cite_spans": [
{
"start": 297,
"end": 311,
"text": "(Miller, 1995)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "\u2022 A list of synonyms (example: \"a pennant; a flag or streamer\", \"a wing; a pinion\") \u2022 One or more expressions detailing the word (example: \"not having a material body\", \"wild or intractable; disposed to break away from duty; untamed\") \u2022 Entire sentences (example: \"a tower built by Noah's descendants (probably in Babylon) who intended it to reach up to heaven; God foiled them by confusing their language so they could no longer understand one another\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "Other characteristics of definitions include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "\u2022 Further clarifications given in parentheses (example: \"(Genesis 11:1-11)\", \"(probably in Babylon)\", \"(approximately)\") \u2022 Definitions tend to use a simpler language, out of more common words (usually explaining a less common word by means of common words) \u2022 There can be additional clarifications or examples at the end of the definitions starting with \"--\" (example: \"-usually used of people, especially women;\", \"-contrary to\") \u2022 For things like proper names or historical events there can be years or periods given in parentheses (example: \"(1805)\", \" \").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "For other languages in the dataset similar observations can be made. Nevertheless, some specifics can also be identified. For example, in the Dutch part of the corpus first definitions usually start with a number (example: \"1.a/|\\Van personen\", \"II.6.c/|\\(Onz.) Zonder nadere bep.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "Given these corpus characteristics, a first phase before any actual algorithm implementation must consist in cleaning the definitions and pre-processing towards obtaining actual definientia. Since in most cases a single definition text actually groups together multiple simpler definitions our goal for pre-processing is to actually split them into individual ones (will also reference to them as \"subdefinitions\"). A first step is to split the definition text by \";\" characters. However, since some of the sub-definitions may still be complex, we followed some of the approaches for sentence decomposition described in Haussmann (2011) . We paid special attention to cases where multiple alternatives were given in the definition text, usually by means of coordinating conjunctions.",
"cite_spans": [
{
"start": 620,
"end": 636,
"text": "Haussmann (2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
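{
"text": "The following Python sketch is illustrative only: it covers the cleanup and the \";\" split, but not the coordination-aware decomposition (after Haussmann, 2011) used by the full system:

import re

def split_definition(definition):
    '''Strip parenthesised clarifications and trailing '--' remarks, then split on ';'.'''
    text = re.sub(r'\([^)]*\)', ' ', definition)  # drop '(...)' clarifications
    text = text.split('--')[0]                    # drop trailing '--' remarks
    return [p.strip() for p in text.split(';') if p.strip()]

For example, split_definition('of plain or coarse features; uncomely; ugly; --usually used of people') returns ['of plain or coarse features', 'uncomely', 'ugly'].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},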
{
"text": "Taking an example definition \"of plain or coarse features; uncomely; ugly; --usually used of people, especially women\" this would be expanded into 4 sub-definitions: \"of plain features\", \"of coarse features\", \"uncomely\" and \"ugly\". The final part, after the \"--\" is removed during the cleaning phase. Even though this final part could provide some information, it appears only in one of the definition pairs and therefore it was deemed not useful for the analysis algorithms.Further primary processing operations include lemmatization and part-of-speech tagging. Given the observations presented previously and the examples shown, we considered that a regular annotation pipeline would not produce good results, since these are usually trained on regular text, containing complete sentences. Therefore, we decided to employ a statistical based annotation, considering the most frequent lemma and partof-speech that appears in a large enough corpus. For this purpose, we used the Open American National Corpus (Ide and Macleod, 2001 ) for the English language, the Spoken Dutch Corpus (Corpus Gesproken Nederlands -CGN) (Hoekstra et al., 2000) for the Dutch language, the PAISA corpus (Lyding et al., 2014) for the Italian language and the available Universal Dependencies treebanks for the Spanish language.",
"cite_spans": [
{
"start": 1009,
"end": 1031,
"text": "(Ide and Macleod, 2001",
"ref_id": "BIBREF10"
},
{
"start": 1119,
"end": 1142,
"text": "(Hoekstra et al., 2000)",
"ref_id": "BIBREF9"
},
{
"start": 1184,
"end": 1205,
"text": "(Lyding et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
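{
"text": "A sketch of this frequency-based annotation, assuming the corpus has already been read into (form, lemma, POS) triples (the corpus-reading code for OANC, CGN, PAISA and the UD treebanks is omitted here):

from collections import Counter, defaultdict

def build_lexicon(annotated_tokens):
    '''Map each word form to its most frequent (lemma, POS) analysis in the corpus.'''
    counts = defaultdict(Counter)
    for form, lemma, pos in annotated_tokens:
        counts[form.lower()][(lemma, pos)] += 1
    return {form: c.most_common(1)[0][0] for form, c in counts.items()}

def annotate(tokens, lexicon):
    '''Tag each token with its most frequent analysis; unknown words get a fallback.'''
    return [lexicon.get(t.lower(), (t, 'UNK')) for t in tokens]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},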
{
"text": "The choice of the aforementioned resources for lemmas and part-of-speech was justified by their public availability online as well as the relatively short timeframe allocated for the purpose of the MWSA task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "Dataset structure for the languages in which our system participated is presented in Tables 1-4 for the training part and in Table 5 for the test part. The part of speech is associated with the defined word and the relation categories \"exact\", \"narrower\", \"broader\", \"related\" and \"none\" are presented as they appear in the training set. Table 5 . Dataset structure for test sets Some common observations can be extracted from the above tables. In all the analyzed languages the predominant parts-of-speech associated with the entries are nouns and verbs, in both training and test sets. Additional part of speech words present are usually adjectives and adverbs. For the Italian dataset only nouns and verbs are provided while the Spanish data set also has a few entries (a total of 112) with other part of speech tags, present only in the training set: conjunction, adposition, affix, interjection.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 5",
"ref_id": "TABREF2"
},
{
"start": 338,
"end": 345,
"text": "Table 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset and Basic Processing",
"sec_num": "3."
},
{
"text": "Considering the English dataset alone, the nouns and verbs together total 7449 entries while the rest account for only 888 entries. From this point of view, it is expected that any system trained on the training set and making use of part-of-speech information will probably work better on nouns and verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS",
"sec_num": null
},
{
"text": "With regard to relationship classes, for all datasets it seems the \"none\" class is the most used, followed by the \"exact\" class. For the English dataset, the \"none\" class accounts for 7137 entries, the \"exact\" class has 800 entries and all the other classes account for 400 entries. Given this huge difference between the available examples associated with each class, it is expected that a system trained on this dataset will perform better on \"none\" and \"exact\" and less on the other classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS",
"sec_num": null
},
{
"text": "The overall system is constructed as a series of modules that can be turned on or off depending on what resources are available for a certain language. Each module produces one or more features that can be finally fed into a decision tree or random forest classifier, thus producing the final result. The overall system diagram is presented in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 352,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": "The first two modules \"Cleanup\" and \"Definition decomposition\" were already presented in the previous section. Their functionality is about obtaining clean subdefinitions. The following modules usually make use of these sub-definitions, but there are also features computed on the entire definition directly after the cleanup pre-processing. Modules using sub-definitions, as detailed below, will compute a score for each sub-definition pair. Finally, the scores are combined by selecting the maximum score between all sub-definition pairs. The first series of features is based on variants of the Lesk algorithm. We use three types of algorithms based on complete words, lemmas and stems. For each subdefinition pair (the first taken from the first definition and the second from the second definition) we compute a score based on the common indicators between the two. Finally, the algorithm keeps the maximum number of words in common as well as the maximum and minimum number of words in the sub-definitions corresponding to the first and second definition. For stemming we used a Porter stemmer algorithm (Rijsbergen, 1980; Porter, 1980) . (Habash and Dorr, 2003) . Catvar is a database of clusters of uninflected words (lexemes) and their categorial (i.e. part-of-speech) variants.",
"cite_spans": [
{
"start": 1110,
"end": 1128,
"text": "(Rijsbergen, 1980;",
"ref_id": "BIBREF27"
},
{
"start": 1129,
"end": 1142,
"text": "Porter, 1980)",
"ref_id": "BIBREF25"
},
{
"start": 1145,
"end": 1168,
"text": "(Habash and Dorr, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
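{
"text": "A sketch of the pairwise scoring and maximum-based aggregation, for the stem-based Lesk variant (NLTK's PorterStemmer is used here for convenience; the paper does not prescribe a particular implementation):

from itertools import product
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_overlap(sub_a, sub_b):
    '''Number of stems the two sub-definitions share.'''
    stems_a = {stemmer.stem(w) for w in sub_a.lower().split()}
    stems_b = {stemmer.stem(w) for w in sub_b.lower().split()}
    return len(stems_a & stems_b)

def max_pair_score(subs_a, subs_b, score=stem_overlap):
    '''Best score over all pairs of sub-definitions from the two definitions.'''
    return max(score(a, b) for a, b in product(subs_a, subs_b))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},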
{
"text": "As mentioned in the \"Related work\" section, BERT is a word embeddings model allowing for word representation in context and this representation was used in Sentence-BERT (Reimers and Gurevych, 2019) for obtaining sentence-level representations. We exploited this by incorporating a series Sentence-BERT based features. Thus, for each sub-definition pair we computed the Sentence-BERT representation and obtained the cosine distance between those. Finally, the minimum, maximum and average distances were computed and used as features. Also, a complete embedding was computed on the entire definition and the cosine distance between the two definitions was used as another feature.",
"cite_spans": [
{
"start": 170,
"end": 198,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
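{
"text": "These features can be sketched with the sentence-transformers library as follows; the model name below is an illustrative assumption, not necessarily the one used in the shared task:

from itertools import product
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bert-base-nli-mean-tokens')  # illustrative model choice

def sbert_features(subs_a, subs_b):
    '''Min, max and average cosine distance over all sub-definition pairs.'''
    emb_a = model.encode(subs_a)
    emb_b = model.encode(subs_b)
    dists = [cosine(u, v) for u, v in product(emb_a, emb_b)]
    return min(dists), max(dists), sum(dists) / len(dists)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},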
{
"text": "A novel algorithm was implemented using a graph representation. For each sub-definition pair, the component words were added to the graph. Then, the lemmas of the words were added. Finally, synonyms and related words (see below) were added as well. These were extracted from WordNet. The extraction process involves a further sense disambiguation in order to detect relevant synsets. This was achieved using a basic Lesk-based disambiguation algorithm between the synset definition available in WordNet and the input sub-definition. In order to exploit the word order within the sub-definitions and allow for missing words, additional edges were added between adjacent words in the sub-definitions. An example is given in Figure 2 for the sub-definitions \"refuse to accept\" and \"refuse to receive\". This is a very simple example in which a word appears in both subdefinitions and the remaining words are actually detected as being synonyms. Figure 2 . Example graph-based representation for \"refuse to accept\" and \"refuse to receive\" Finally, a score was computed based on the distance between words belonging to the two sub-definitions.",
"cite_spans": [],
"ref_spans": [
{
"start": 722,
"end": 730,
"text": "Figure 2",
"ref_id": null
},
{
"start": 941,
"end": 949,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
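{
"text": "A simplified sketch of the graph construction, using networkx. The synonym lookup is abstracted as a parameter (the system used WordNet with a Lesk-based synset selection), and since the exact scoring function is not spelled out above, the inverse of the minimum shortest-path distance is used here as a stand-in:

import networkx as nx

def graph_score(sub_a, sub_b, synonyms):
    '''synonyms: function mapping a word to an iterable of synonyms/related words.'''
    g = nx.Graph()
    words_a, words_b = sub_a.lower().split(), sub_b.lower().split()
    for words in (words_a, words_b):
        for w1, w2 in zip(words, words[1:]):  # edges between adjacent words
            g.add_edge(w1, w2)
        for w in words:
            g.add_node(w)
            for s in synonyms(w):             # synonym / related-word edges
                g.add_edge(w, s)
    dists = [nx.shortest_path_length(g, a, b)
             for a in words_a for b in words_b if nx.has_path(g, a, b)]
    return 1.0 / (1.0 + min(dists)) if dists else 0.0  # stand-in scoring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},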
{
"text": "Since all these algorithms make use of statistics or pretrained word vectors without further optimization on the training corpus, we present results from each algorithm alone in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "4."
},
{
"text": "Accuracy Table 6 it can be seen that the BERT average calculation on the sub-definitions seems to produce the best accuracy score. However, by comparing the different algorithms it seems that each algorithm produces good results in different contexts (considering the observations from section 3, above). Therefore, the final classification module becomes very important, especially combined with other features that could allow a decision between different scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "Statistical features which were computed included the total number of words, minimum and maximum number of words in sub-definitions, number of comma characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "Furthermore, from several manual investigations on the training data it was deemed useful to have a comparison between the first words of sub-definitions having the same part of speech as the defined word. This comparison is realized by means of synonyms and is further used as a feature. For example, let's consider the sub-definitions associated with the word \"holograph\" which has the indicated part of speech \"noun\": \"handwritten book\" and \"a document\". In this case we are interested in comparing \"book\" and \"document\" since these have the same part of speech (\"noun\") as the defined word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
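{
"text": "One simple way to realize this synonym-based comparison of head words is through shared WordNet synsets, sketched below (the POS filtering is assumed to have been done beforehand; this is an illustration, not necessarily the exact test used):

from nltk.corpus import wordnet as wn

def heads_are_synonyms(head_a, head_b, pos=wn.NOUN):
    '''True if the two head words share at least one WordNet synset.'''
    return bool(set(wn.synsets(head_a, pos=pos)) & set(wn.synsets(head_b, pos=pos)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},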
{
"text": "Furthermore, considering the observations regarding definition structure from section 3, an additional feature was created with 3 possible values: 0, if both subdefinitions are single word (not considering stop words); 1, if one of the sub-definitions is a single word and the other is a more complex expression; 2, if both subdefinitions are complex expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
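{
"text": "This feature is simple to state in code; a minimal sketch (the stop-word set stands for any language-specific list):

def complexity_feature(sub_a, sub_b, stop_words=frozenset()):
    '''0 = both single content words, 1 = exactly one single word, 2 = both complex.'''
    def content_len(text):
        return len([w for w in text.lower().split() if w not in stop_words])
    singles = sum(1 for s in (sub_a, sub_b) if content_len(s) <= 1)
    return {2: 0, 1: 1, 0: 2}[singles]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},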
{
"text": "A total of 17 features were finally used in a Random Forest Classifier (Ho, 1995) . The classifier hyperparameters were trained and optimized using a grid search approach with cross validation on the training set.",
"cite_spans": [
{
"start": 71,
"end": 81,
"text": "(Ho, 1995)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
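{
"text": "A sketch of this final stage with scikit-learn; the parameter grid below is illustrative, as the actual grid we searched is not reproduced here:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {  # illustrative grid, not the one from the experiments
    'n_estimators': [50, 100, 200],
    'max_depth': [None, 5, 10],
    'min_samples_leaf': [1, 2, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring='accuracy')
# X_train: (n_examples, 17) feature matrix; y_train: the five relation labels.
# search.fit(X_train, y_train); predictions = search.predict(X_test)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},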
{
"text": "The final cross validation measurement of mean accuracy on the training set indicated a value of 0.881 with a variation of +/-0.02. This is above the score obtained on the test set, thus indicating some potentially significant variations in the data used. Nevertheless, our system obtained a final score of 0.798 on the 5-class accuracy evaluation, thus positioning the system on the first place for the English language competition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "For the other languages in which we participated (Dutch, Italian and Spanish) we deactivated the modules using WordNet based synonyms. We acknowledge the existence of wordnets for the aforementioned languages, however due to the short amount of time available for the task we were not able to technically integrate these resources into our system. Nevertheless, this was an exercise proving the modularity of the developed system and the possibility to adapt to different available resources. Furthermore, even with this disadvantage, the system was able to be on the first place for the Dutch language and on second place for Italian and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "Once the test set annotations were released, we were able to evaluate our system, including all the other algorithms on the final data. Test dataset similarity tags follow a distribution like that of the training set. However, the distinction between \"exact\" and \"none\" classes is emphasized even more. In the English, Dutch and Spanish datasets there are cases where the number of \"narrower\", \"broader\" or \"related\" tags is equal to zero for certain parts of speech. By looking at the total numbers of tags in each category in the English data set, it can be observed that there are only three of type \"broader\". Similarly, for the other languages analyzed there are tags for which the total number is equal to or less than 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Evaluation",
"sec_num": "5."
},
{
"text": "The official evaluation was performed using the CodaLab website 1 . Results on the test datasets for our system are presented in Table 11 . This evaluation contains 4 indicators: accuracy (the percentage of scores for which the predicted label matches the reference label, considering all five classes), precision, recall and Fmeasure (taking into account accuracy in predicting the link but not the type of the link, thus considering only 2 classes: none and non-none). Table 11 . System evaluation on the test dataset Our system obtained first place for the English and Dutch accuracy score (considering all 5 classes) and second place for the Italian and Spanish accuracy. Probably the lower score for Italian and Spanish is due to the fewer language resources that we used and thus to the fewer modules of the system that were involved, as described in section 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Table 11",
"ref_id": null
},
{
"start": 471,
"end": 479,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Evaluation",
"sec_num": "5."
},
{
"text": "Looking at the 2-class measures, our system reached high precision and was on the first place for English and Dutch and on the second place for Italian and Spanish. Compared to other systems our recall was lower resulting in a F-measure that situated our system on second and third place with regard to this metric. Table 6 on the training set, we provide accuracies on the test set for the English language in Table 12 .",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 323,
"text": "Table 6",
"ref_id": null
},
{
"start": 411,
"end": 419,
"text": "Table 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "2-Class Precision",
"sec_num": null
},
{
"text": "As mentioned in section 4, these algorithms are not dependent on the training set, being statistical in nature, therefore we would expect seeing similar scores. However, a slightly lower score than the one on the training set could be attributed to a potential difference between the two sets. Tables 1 and 7 provide comparison between the training and test sets for the English language and one of the possible differences is the high number of nouns in the training set as compared to the more balanced number of nouns and verbs in the test set. Another difference is the reduced number of \"narrower\", \"broader\" and \"related\" definitions.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 327,
"text": "Tables 1 and 7 provide comparison",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similar to the individual algorithm evaluation provided in",
"sec_num": null
},
{
"text": "Accuracy The addition of a Random Forest classifier combining all the available features improved the overall accuracy from 0.744 (in the case of the Graph-based algorithm, which obtained the highest individual score) to 0.798, which was the final score achieved by our system on the English language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "This paper presented our system proposal 2 for the Monolingual Word Sense Alignment 2020 shared task. The system is composed of multiple modules which can be enabled or not depending on the linguistic resources available for a particular language. Finally, a random forest classifier is trained on the provided training dataset using the features produced by the different modules. The system was able to achieve state-of-the-art performance for the English language, by using all the implemented modules, as described in section 4 above. Furthermore, with a reduced set of modules, due to the resources available to us in the short amount of time for this competition, we were able to achieve first place in the Dutch language competition and second place in the Italian and Spanish competitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6."
},
{
"text": "The overall system contains both language independent modules (like some of the Lesk based approaches and purely statistical features) and modules requiring the presence of language resources. In the second case, these range from basic resources (synonyms, stemming algorithms) to more advanced resources (WordNet, lemmatization, part of speech tagging) and even the presence of a BERT model (either multilingual or language specific).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6."
},
{
"text": "Having a modular architecture means the system can be used on any language and it can adapt itself (also its results) to the available resources. As always, having more language resources available translates into a better system performance. Of course, integrating resources for additional languages requires manual intervention on the system to allow it to process the new resources in their respective formats. This also explains our limited participation in the task's languages since we had to integrate different resources (with different formats) available for the different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6."
},
{
"text": "Implemented modules can be used individually, even without a training set. This set was needed in the last stage when training the final classifier together with additional statistical features. Therefore, it is our hope that this implementation can be adapted for Romanian language as well. Currently a large annotated Reference Corpus of Contemporary Romanian Language (CoRoLa) (Mititelu et al., 2018) is available for our research together with the Romanian WordNet (Tufi\u0219 et al, 2008) . Currently, as far as we know, there is no monolingual BERT model available for Romanian language. However, multilingual models, similar to the one used for the purpose of the MWSA task, are available. Finally, we envisage to further include such a system in the RELATE platform (P\u0103i\u0219 et al., 2019) dedicated to processing Romanian language.",
"cite_spans": [
{
"start": 380,
"end": 403,
"text": "(Mititelu et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 469,
"end": 488,
"text": "(Tufi\u0219 et al, 2008)",
"ref_id": "BIBREF29"
},
{
"start": 769,
"end": 788,
"text": "(P\u0103i\u0219 et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6."
},
{
"text": "Part of this work was conducted in the context of the ReTeRom project. Part of this work was conducted in the context of the Marcell project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7."
},
{
"text": "https://competitions.codalab.org/competitions/22163",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/racai-ai/MWSA2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Multilingual Evaluation Dataset for Monolingual Word Sense Alignment",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ahmadi",
"suffix": ""
},
{
"first": "P",
"middle": [
"J"
],
"last": "Mccrae",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nimb",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Troelsg\u00e5rd",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Olsen",
"suffix": ""
},
{
"first": "S",
"middle": [
"B"
],
"last": "Pedersen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Declerck",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wissik",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Monachini",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bellandi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Pisani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Krek",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lipp",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "V\u00e1radi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gy\u0151rffy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tiberius",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schoonheim",
"suffix": ""
},
{
"first": "B",
"middle": [
"Y"
],
"last": "Moshe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rudich",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Ahmad",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lonke",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kovalenko",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Langemets",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kallas",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dereza",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fransen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cillessen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lindemann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Alonso",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Salgado",
"suffix": ""
},
{
"first": "L",
"middle": [
"J"
],
"last": "Sancho",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ure\u00f1a-Ruiz",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Simov",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Kancheva",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lazi\u0107",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Markovi\u0107",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Perdih",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gabrov\u0161ek",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resource and Evaluation Conference (LREC 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmadi, S., McCrae, P.J., Nimb, S., Troelsg\u00e5rd, T., Olsen, S., Pedersen, S.B., Declerck, T., Wissik, T., Monachini, M., Bellandi, A., Khan, F., Pisani, I., Krek, S., Lipp, V., V\u00e1radi, T., Simon, L., Gy\u0151rffy, A., Tiberius, C., Schoonheim, T., Moshe, B.Y., Rudich, M., Ahmad, A.R., Lonke, D., Kovalenko, K., Langemets, M., Kallas, J., Dereza, O., Fransen, T., Cillessen, D., Lindemann, D., Alonso, M., Salgado, A., Sancho, L.J., Ure\u00f1a-Ruiz, R., Simov, K., Osenova, P., Kancheva, Z., Radev, I., Stankovi\u0107, R., Krstev, C., Lazi\u0107, B., Markovi\u0107, A., Perdih, A. and Gabrov\u0161ek, D. (2020). A Multilingual Evaluation Dataset for Monolingual Word Sense Alignment. Proceedings of the 12th Language Resource and Evaluation Conference (LREC 2020).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"597-610.10.1162/tacl_a_00288"
]
},
"num": null,
"urls": [],
"raw_text": "Artetxe, M. and Schwenk, H. (2019). Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. Transactions of the Association for Computational Linguistics. 7. 597-610. 10.1162/tacl_a_00288.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vincent",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bengio, Y., Ducharme, R., Vincent, P. (2003). A neural probabilistic language model, Journal of Machine Learning Research, 3, pp.1137-1155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Bojanowski, P., Grave, E., Joulin, A., Mikolov, T. (2016). Enriching Word Vectors with Subword Information, arXiv:1607.04606.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning Word Vectors for 157 Languages",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.06893"
]
},
"num": null,
"urls": [],
"raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A. and Mikolov, T. (2018). Learning Word Vectors for 157 Languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018), arXiv:1802.06893.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin J., Chang, M.W., Lee, K. and Toutanova K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Categorial Variation Database for English",
"authors": [
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the North American Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "96--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Habash, N. and Dorr, B. (2003). A Categorial Variation Database for English. In Proceedings of the North American Association for Computational Linguistics, Edmonton, Canada, pp. 96 -102.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Contextual sentence decomposition with applications to semantic full-text search",
"authors": [
{
"first": "E",
"middle": [],
"last": "Haussmann",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haussmann, E. (2011). Contextual sentence decomposition with applications to semantic full-text search. Master's thesis, University of Freiburg.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Random Decision Forests",
"authors": [
{
"first": "T",
"middle": [
"K"
],
"last": "Ho",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 3rd International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "278--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ho, T. K. (1995). Random Decision Forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14-16 August 1995. pp. 278-282.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Syntactic Annotation for the Spoken Dutch Corpus Project (CGN)",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hoekstra",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moortgat",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Schuurman",
"suffix": ""
},
{
"first": "&",
"middle": [
"T"
],
"last": "Van Der Wouden",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics in the Netherlands",
"volume": "",
"issue": "",
"pages": "73--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoekstra, H., M. Moortgat, I. Schuurman & T. van der Wouden (2000). Syntactic Annotation for the Spoken Dutch Corpus Project (CGN). In W. Daelemans, K. Sima'an, J. Veenstra & J. Zavrel (Eds.), Computational Linguistics in the Netherlands 2000. 73-87. Amsterdam: Rodopi.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The American National Corpus: A Standardized Resource of American English",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Macleod",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ide, N., Macleod, C. (2001). The American National Corpus: A Standardized Resource of American English. Proceedings of Corpus Linguistics 2001, Lancaster UK.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic text similarity using corpus-based word similarity and string similarity",
"authors": [
{
"first": "A",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Trans. Knowl. Discov. Data",
"volume": "2",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Islam, A. and Inkpen, D. (2008). Semantic text similarity using corpus-based word similarity and string similarity. ACM Trans. Knowl. Discov. Data. 2, 2, Article 10 (July 2008), 25 pages.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Short text similarity with word embeddings",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kenter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rijke",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th ACM international on conference on information and knowledge management",
"volume": "",
"issue": "",
"pages": "1411--1420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenter, T. and Rijke, M. (2015). Short text similarity with word embeddings. In Proceedings of the 24th ACM international on conference on information and knowledge management, pp 1411-1420.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Senseval98: Evaluating Word Sense Disambiguation Systems",
"authors": [],
"year": 2000,
"venue": "",
"volume": "34",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilgarriff, A., Palmer, M. (eds., 2000): Senseval98: Evaluating Word Sense Disambiguation Systems, vol. 34 (1-2). Kluwer, Dordrecht, the Netherlands.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 5th SIGDOC",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesk, M. (1986). Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th SIGDOC (New York, NY). 24-26.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Proceedings of the 9th Web as Corpus Workshop (WaC-9",
"authors": [
{
"first": "V",
"middle": [],
"last": "Lyding",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Stemle",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Borghetti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brunello",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Castagnoli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dittmann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pirrelli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "36--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lyding, V., Stemle, E., Borghetti, C., Brunello, M., Castagnoli, S., Dell'Orletta, F., Dittmann, H., Lenci, A., Pirrelli, V. (2014): \"The PAIS\u00c0 Corpus of Italian Web Texts\" In Proceedings of the 9th Web as Corpus Workshop (WaC-9), Association for Computational Linguistics, Gothenburg, Sweden, April 2014. pp. 36- 43.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Thinking about foreign policy: Finding an appropriate role for artificial intelligence computers",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Mallery",
"suffix": ""
}
],
"year": 1988,
"venue": "Ph.D. dissertation. MIT Political Science Department",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mallery, J. C. (1988). Thinking about foreign policy: Finding an appropriate role for artificial intelligence computers. Ph.D. dissertation. MIT Political Science Department, Cambridge, MA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "WordNet: A Lexical Database for English",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G.A. (1995). WordNet: A Lexical Database for English. Communications of the ACM Vol. 38, No. 11: 39-4.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Reference Corpus of Contemporary Romanian Language (CoRoLa)",
"authors": [
{
"first": "B",
"middle": [
"V"
],
"last": "Mititelu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tufi\u0219",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Irimia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th Language Resources and Evaluation Conference -LREC'18",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mititelu, B.V., Tufi\u0219, D. and Irimia, E. (2018). The Reference Corpus of Contemporary Romanian Language (CoRoLa). In Proceedings of the 11th Language Resources and Evaluation Conference - LREC'18, Miyazaki, Japan, European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word Sense Disambiguation: A Survey",
"authors": [
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys",
"volume": "41",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navigli, R. (2009). Word Sense Disambiguation: A Survey. ACM Computing Surveys. Vol. 41, No. 2.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Computing distributed representations of words using the COROLA corpus",
"authors": [
{
"first": "V",
"middle": [],
"last": "P\u0103i\u0219",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tufi\u0219",
"suffix": ""
}
],
"year": 2018,
"venue": "In Proceedings of the Romanian Academy, Series A",
"volume": "19",
"issue": "2",
"pages": "403--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P\u0103i\u0219, V., Tufi\u0219, D. (2018). Computing distributed representations of words using the COROLA corpus. In Proceedings of the Romanian Academy, Series A, Volume 19, Number 2/2018, pp. 403-409.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Integration of Romanian NLP tools into the RELATE platform",
"authors": [
{
"first": "V",
"middle": [],
"last": "P\u0103i\u0219",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tufi\u0219",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ion",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Linguistic Resources and Tools for Processing Romanian Language -CONSILR 2019",
"volume": "",
"issue": "",
"pages": "181--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P\u0103i\u0219, V., Tufi\u0219, D. and Ion, R. (2019). Integration of Romanian NLP tools into the RELATE platform. In Proceedings of the International Conference on Linguistic Resources and Tools for Processing Romanian Language -CONSILR 2019, pages 181-192.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J., Socher, R. and Manning C.D. (2014). GloVe: Global Vectors for Word Representation. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pp 1532-1543.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "M",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K. and Zettlemoyer, L. (2018). Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pp. 2227-2237.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "How multilingual is Multilingual BERT",
"authors": [
{
"first": "T",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Garette",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01502"
]
},
"num": null,
"urls": [],
"raw_text": "Pires, T., Schlinger, E. and Garette, D. (2019). How multilingual is Multilingual BERT? arXiv:1906.01502.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "M",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "Program",
"volume": "14",
"issue": "3",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Porter, M.F. (1980). An algorithm for suffix stripping, Program, 14(3) pp 130\u2212137.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, pp 3982- 3992.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "New models in probabilistic information retrieval",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Rijsbergen",
"suffix": ""
},
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "London: British Library. British Library Research and Development Report",
"volume": "",
"issue": "5587",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rijsbergen, C.J., Robertson, S.E. and Porter, M.F. (1980). New models in probabilistic information retrieval. London: British Library. British Library Research and Development Report, no. 5587.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning decision lists",
"authors": [
{
"first": "R",
"middle": [
"L"
],
"last": "Rivest",
"suffix": ""
}
],
"year": 1987,
"venue": "Mach. Learn",
"volume": "2",
"issue": "",
"pages": "229--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rivest, R. L. (1987). Learning decision lists. Mach. Learn. 2, 3, 229-246.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Romanian Wordnet: Current State, New Applications and Prospects",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tufi\u0219",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ion",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bozianu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ceau\u0219u",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "\u0218tef\u0103nescu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 4th Global WordNet Conference",
"volume": "",
"issue": "",
"pages": "441--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tufi\u0219, D., Ion, R., Bozianu, L. Ceau\u0219u, A. and \u0218tef\u0103nescu, D. (2008). Romanian Wordnet: Current State, New Applications and Prospects. In Proceedings of the 4th Global WordNet Conference, GWC-2008, pp. 441-452.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Translation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Weaver",
"suffix": ""
}
],
"year": 1949,
"venue": "Machine Translation of Languages: Fourteen Essays",
"volume": "",
"issue": "",
"pages": "15--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weaver, W. (1949). Translation. In Machine Translation of Languages: Fourteen Essays (written in 1949, published in 1955), W. N. Locke and A. D. Booth, Eds. Technology Press of MIT, Cambridge, MA, and John Wiley & Sons, New York, NY, 15-23.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Hierarchical decision lists for word sense disambiguation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2000,
"venue": "Comput. Human",
"volume": "34",
"issue": "",
"pages": "179--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky, D. (2000). Hierarchical decision lists for word sense disambiguation. Comput. Human. 34, 1-2, 179- 186.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "System architecture An additional enhancement was realized by implementing a Lesk algorithm variant by incorporating the cluster information from the Categorial Variation Database (Catvar)",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"text": "Dataset structure for the Italian training set",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"6\">Exact Narr. Broad. Rel. None Total</td></tr><tr><td>Noun</td><td>409</td><td>143</td><td>11</td><td>16</td><td colspan=\"2\">2115 2694</td></tr><tr><td>Verb</td><td>230</td><td>100</td><td>19</td><td>25</td><td colspan=\"2\">4381 4755</td></tr><tr><td>Adj</td><td>149</td><td>58</td><td>7</td><td>8</td><td>588</td><td>810</td></tr><tr><td>Adv</td><td>12</td><td>9</td><td>2</td><td>2</td><td>53</td><td>78</td></tr><tr><td colspan=\"6\">Table 1. Dataset structure for the English training set</td><td/></tr><tr><td>POS</td><td colspan=\"6\">Exact Narr. Broad. Rel. None Total</td></tr><tr><td>Noun</td><td>264</td><td>14</td><td>40</td><td>24</td><td colspan=\"2\">8616 8958</td></tr><tr><td>Verb</td><td>77</td><td>9</td><td>7</td><td>7</td><td colspan=\"2\">4664 4766</td></tr><tr><td>Adj</td><td>93</td><td>5</td><td>4</td><td>3</td><td colspan=\"2\">4013 4118</td></tr><tr><td>Adv</td><td>10</td><td>1</td><td>0</td><td>4</td><td colspan=\"2\">1363 1378</td></tr><tr><td colspan=\"6\">Table 2. Dataset structure for the Dutch training set</td><td/></tr></table>"
},
"TABREF2": {
"num": null,
"text": ", above, already contains an analysis of the test dataset part-of-speech structure. Distribution of available gold annotations in the test dataset are presented in tables 7-10 for the English, Dutch, Italian and Spanish languages.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>POS</td><td colspan=\"6\">Exact Narr. Broad. Rel. None Total</td></tr><tr><td>Noun</td><td>39</td><td>18</td><td>0</td><td>2</td><td>118</td><td>177</td></tr><tr><td>Verb</td><td>31</td><td>11</td><td>1</td><td>10</td><td>209</td><td>262</td></tr><tr><td>Adj</td><td>14</td><td>0</td><td>2</td><td>4</td><td>80</td><td>100</td></tr><tr><td>Adv</td><td>1</td><td>0</td><td>0</td><td>0</td><td>4</td><td>5</td></tr><tr><td colspan=\"6\">Table 7. Dataset structure for the English test set</td><td/></tr><tr><td>POS</td><td colspan=\"6\">Exact Narr. Broad. Rel. None Total</td></tr><tr><td>Noun</td><td>40</td><td>1</td><td>10</td><td>1</td><td>782</td><td>834</td></tr><tr><td>Adj</td><td>3</td><td>0</td><td>3</td><td>0</td><td>84</td><td>90</td></tr><tr><td colspan=\"6\">Table 8. Dataset structure for the Dutch test set</td><td/></tr></table>"
}
}
}
}