{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:11:33.332330Z"
},
"title": "HSE at LSCDiscovery in Spanish: Clustering and Profiling for Lexical Semantic Change Discovery",
"authors": [
{
"first": "Kseniia",
"middle": [],
"last": "Kashleva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Alexander",
"middle": [],
"last": "Shein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Elizaveta",
"middle": [],
"last": "Tukhtina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Svetlana",
"middle": [],
"last": "Vydrina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HSE University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the methods used for lexical semantic change discovery in Spanish. We tried the method based on BERT embeddings with clustering, the method based on grammatical profiles and the grammatical profiles method enhanced with permutation tests. BERT embeddings with clustering turned out to show the best results for both graded and binary semantic change detection outperforming the baseline. Our best submission for graded discovery was the 3 rd best result, while for binary detection it was the 2 nd place (precision) and the 7 th place (both F1-score and recall). Our highest precision for binary detection was 0.75 and it was achieved due to improving grammatical profiling with permutation tests.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the methods used for lexical semantic change discovery in Spanish. We tried the method based on BERT embeddings with clustering, the method based on grammatical profiles and the grammatical profiles method enhanced with permutation tests. BERT embeddings with clustering turned out to show the best results for both graded and binary semantic change detection outperforming the baseline. Our best submission for graded discovery was the 3 rd best result, while for binary detection it was the 2 nd place (precision) and the 7 th place (both F1-score and recall). Our highest precision for binary detection was 0.75 and it was achieved due to improving grammatical profiling with permutation tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Lexical semantic change detection (LSCD) aims to identify which words and how change their meaning over time. LSCD is usually divided into two subtasks: graded change and binary change detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Graded LSCD is a subtask of ranking the intersection of (content-word) vocabularies according to their degree of change between a diachronic corpus pair C1 and C2 (Kurtyigit et al., 2021) . In this shared task, the participants were asked to rank the set of content words in the lemma vocabulary intersection of C1 and C2 according to their degree of semantic change between C1 to C2. Submissions were scored against 60 hidden words from the full target word list which were annotated for semantic change. The total number of target words were more than 4,000 (D. Zamora-Reina et al., 2022) , and, as it was a discovery task, the target words were not preselected, balanced or cleaned. Due to that, discovery is more problematic for models in comparison with semantic change detection, but it is an important task for lexicography.",
"cite_spans": [
{
"start": 163,
"end": 187,
"text": "(Kurtyigit et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 564,
"end": 590,
"text": "Zamora-Reina et al., 2022)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Binary LSCD is a subtask of identifying whether a target word lost or gained senses from the first set of its usage to the second, or not (Schlechtweg et al., 2020) .",
"cite_spans": [
{
"start": 138,
"end": 164,
"text": "(Schlechtweg et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous shared tasks on lexical semantic change detection (LSCD) were developed for English, German, Latin, and Swedish (Schlechtweg et al., 2020) , Italian (Basile et al., 2020) , and Russian . This one was in Spanish (D. Zamora-Reina et al., 2022) . Spanish is a fusional Romance language of the Indo-European language family with rich morphology and a lot of national varieties. So far, LSCD in shared tasks were developed for three Romance languages, three German languages, and one Slavic language. Only two of them are analytical (English and Swedish), while others are fusional.",
"cite_spans": [
{
"start": 121,
"end": 147,
"text": "(Schlechtweg et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 158,
"end": 179,
"text": "(Basile et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 212,
"end": 250,
"text": "Spanish (D. Zamora-Reina et al., 2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this shared task we tested several methods. For graded change discovery we used BERT embeddings with clustering (Montariol et al., 2021) . For binary change detection we used 3 methods. The first one was word embeddings again. Two others were grammatical profiling and grammatical profiling combined with permutation tests (Liu et al., 2021) .",
"cite_spans": [
{
"start": 115,
"end": 139,
"text": "(Montariol et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 326,
"end": 344,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though grammatical profiles by themselves yield worse performance than embedding-based method, they could be significantly improved by applying of additional significance tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this method 1 we used a base version of BERT with 12 attention layers and a hidden layer size of 768. The exact pre-trained model was the one for (Devlin et al., 2019) . All parameters were set to the default as in the Transformers library, version 4.14.1 (Wolf et al., 2020) . The method consisted of several steps. First, we split the corpora into train and test sets. The train/test ratio was 90/10. We used the lemmatized version of the corpora in this method. Then we took the pre-trained BERT model for Spanish and ran a fine-tuning process on the train set of the corpora using the test set for evaluation. The code we used for fine-tuning is provided as one of the examples in the Transformers library repository. 3 After fine-tuning the model we extracted the embeddings for the target words from the full corpora provided. The embeddings were extracted separately for two time periods. To generate a final embedding for each target word, the embeddings from all 12 attention layers of the BERT model were summarized. The embeddings for all entries of every target word were extracted this way.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 260,
"end": 279,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 726,
"end": 727,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
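{
"text": "A minimal sketch of this extraction step, assuming a fine-tuned Spanish BERT checkpoint saved locally at ./bert-spanish-finetuned (a hypothetical path) and averaging over a word's sub-token positions, which is one reasonable choice the paper does not spell out:

import torch
from transformers import AutoModel, AutoTokenizer

MODEL_PATH = './bert-spanish-finetuned'  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModel.from_pretrained(MODEL_PATH, output_hidden_states=True)
model.eval()

def target_word_embedding(sentence: str, target: str) -> torch.Tensor:
    # Encode the sentence and collect hidden states from all layers.
    enc = tokenizer(sentence, return_tensors='pt', truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # out.hidden_states holds 13 tensors (input embeddings + 12 layers),
    # each of shape (1, seq_len, 768); sum the 12 encoder layers.
    summed = torch.stack(out.hidden_states[1:]).sum(dim=0).squeeze(0)
    # Locate the target word's sub-token span and average over it.
    target_ids = tokenizer(target, add_special_tokens=False)['input_ids']
    ids = enc['input_ids'][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return summed[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(target + ' not found in sentence')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},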
{
"text": "As a result, we obtained two matrices for every target word. One matrix represented one time period. The dimension of the resulting matrix was Nx768, where N is the number of occurrences of the target word in the corpus of particular time period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
{
"text": "The final step was clustering. We ran a k-means clustering algorithm on the rows of the resulting matrices. It should be noted that we also attempted to use the affinity propagation algorithm, but it proved unfeasible at this point, as the number of target words and the number of their embeddings was too large for the affinity propagation approach. So, the final decision was to resort to the k-means algorithm which is much faster. The number of clusters was set as a hyperparameter which we tuned at the development phase. The development phase demonstrated that the results were the best when the number of clusters equaled to a multiple of 7 with the larger numbers showing better results. In order to find a balance between the clustering time and the results we decided that the number of clusters should be 28, as the larger numbers of clusters significantly increased the computational time during the prediction process. The development phase results for different numbers of clusters are shown on the Figure 1 The resulting clusters presumably represented some gradations of word meanings. In order to calculate the graded change between the sets of clusters from two time periods, we used the average of the cosine distances between all pairs of the cluster centroids. The binary change was calculated by clustering the resulting graded changes into two clusters: the words that fall into the cluster with higher centroid value were considered as changed. The other words were considered as unchanged.",
"cite_spans": [],
"ref_spans": [
{
"start": 1013,
"end": 1021,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
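{
"text": "The clustering and scoring steps can be sketched as follows, assuming emb1 and emb2 are the N\u00d7768 occurrence matrices of one target word in the two periods (the names are illustrative, not from the paper):

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

N_CLUSTERS = 28  # a multiple of 7, chosen during the development phase

def graded_change(emb1: np.ndarray, emb2: np.ndarray) -> float:
    # Cluster each period's occurrence matrix and take the average
    # cosine distance over all pairs of cluster centroids.
    c1 = KMeans(n_clusters=N_CLUSTERS, n_init=10).fit(emb1).cluster_centers_
    c2 = KMeans(n_clusters=N_CLUSTERS, n_init=10).fit(emb2).cluster_centers_
    return float(cdist(c1, c2, metric='cosine').mean())

def binary_labels(scores: np.ndarray) -> np.ndarray:
    # 2-means over the graded scores; words falling into the cluster with
    # the higher centroid are labeled changed (1), the rest stable (0).
    km = KMeans(n_clusters=2, n_init=10).fit(scores.reshape(-1, 1))
    changed = int(np.argmax(km.cluster_centers_))
    return (km.labels_ == changed).astype(int)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},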
{
"text": "To detect binary gain/loss we took the cluster centroids for the contextualised embeddings calculated on the previous step. Those centroids were clustered once again, but this time we used the affinity propagation method that determined the number of clusters automatically. The result clusters presumably represented the basic meanings of the target words. After that we compared the number of resulting clusters for both time periods. If the number of clusters in the first period was larger than that in the second period, we assumed that this word lost a sense. If not, we assumed the word gained a sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
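{
"text": "A sketch of this gain/loss heuristic, assuming centroids1 and centroids2 are the 28 k-means centroids of one target word in each period:

from sklearn.cluster import AffinityPropagation

def gain_or_loss(centroids1, centroids2) -> str:
    # Re-cluster the centroids; affinity propagation determines the
    # number of clusters (basic meanings) automatically.
    n1 = len(AffinityPropagation(random_state=0).fit(centroids1).cluster_centers_)
    n2 = len(AffinityPropagation(random_state=0).fit(centroids2).cluster_centers_)
    # Fewer clusters in the second period is read as a lost sense;
    # the paper treats every other case as a gained sense.
    return 'loss' if n1 > n2 else 'gain'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},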
{
"text": "As for the optional COMPARE task, our submission was identical to that for the main Graded task. We did not use any other method for that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
{
"text": "In terms of performance the large number of target words posed a challenge for this model during embeddings extraction and making predictions. We extracted the embeddings for all target word occurrences, so the resulting pickled file with embeddings had the size of over 40 GB. We used the HSE supercomputer cluster with 4 GPUs to parallelize our calculations (Kostenetskiy et al., 2021) . The process of extracting embeddings took about 13 hours.",
"cite_spans": [
{
"start": 360,
"end": 387,
"text": "(Kostenetskiy et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
{
"text": "The process of making predictions was also slowed down by the significant number of target words. As was already mentioned above, the first attempt to use affinity propagation failed for this reason. The k-means clustering was also performed on the supercomputer. For that were used 8 CPUs and the process took approximately an hour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
{
"text": "This approach has the work (Montariol et al., 2021) as a foundation. We made a few changes compared to it. The first change is that we used all embeddings of the target words, while (Montariol et al., 2021) limited the number of embeddings for each word to 200. The second change is about calculating the graded change. In (Montariol et al., 2021) were used the Wasserstein distance and the Jensen-Shannon divergence, while we used the average of all cosine distances between all cluster centroids.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "(Montariol et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 182,
"end": 206,
"text": "(Montariol et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 323,
"end": 347,
"text": "(Montariol et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT embeddings method",
"sec_num": "2.1"
},
{
"text": "All language aspects are strongly interconnected. It means that semantic changes may be tied with grammatical changes. Diachronically, it can be observed through lexicalization and grammaticalization in particular. In Spanish, the modern usage of the verb andar 'to go' can be a good example of grammaticalization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling",
"sec_num": "2.2"
},
{
"text": "De que Blasillo ande al escuela me e holgado mucho (16th c.). -'Since Blasillo has been going to school, I have been very happy.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling",
"sec_num": "2.2"
},
{
"text": "-\u00bfY eso es todo el problema? -\u00c1ndale, exactamente eso. (21th c.) -'And that's the whole problem? Yes, yes (lit. walk to it), that's exactly it. ' (Company Company, 2008) So here we can see that this verb changed its meaning while changing its form.",
"cite_spans": [
{
"start": 144,
"end": 169,
"text": "' (Company Company, 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling",
"sec_num": "2.2"
},
{
"text": "The idea of grammatical profiling is that semantic change can be discovered through significant changes in the distribution of morphosyntactic categories. This method is described in in detail, so here we explain only the main points. To get grammatical profiles, the frequency of morphological and syntactic categories for each target word were counted in both corpora, that were in advance tagged and parsed with UD-Pipe (Straka and Strakov\u00e1, 2017) 4 . We used raw counts for that. Then, for each target word and for both morphological and syntactic dictionaries, a list of features 5 was created by taking the union of keys in the corresponding dictionaries for the two time bins. Then, feature vectors \u20d7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling",
"sec_num": "2.2"
},
{
"text": "x 1 and \u20d7 x 2 were made. Each dimension of these vectors represented a grammatical category and the value it took was the frequency of that category in the corresponding time period . Then, the cosine distance cos( \u20d7 x 1 ; \u20d7 x 2 ) between the vectors were calculated to estimate the change in the grammatical profiles of the target word 6 . It was done separately for morphological and syntactic categories, resulting in two distance scores d morph and d synt . These distances can be used for graded change discovery. For binary detection, the top n target words were classified in the ranking as 'changed' (1) and others as 'stable' (0). The value of n was obtained from the ranking with the help off-the-shelf algorithms of change point detection (Truong et al., 2020) .",
"cite_spans": [
{
"start": 750,
"end": 771,
"text": "(Truong et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling",
"sec_num": "2.2"
},
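{
"text": "A minimal sketch of the profile distance and the choice of n, assuming counts1 and counts2 are dictionaries mapping a grammatical category (e.g. 'Mood=Ind') to its raw frequency in each time bin, and using the ruptures library for change point detection; the penalty value is an illustrative assumption:

import numpy as np
import ruptures as rpt  # off-the-shelf change point detection

def profile_distance(counts1: dict, counts2: dict) -> float:
    # Build feature vectors over the union of keys of the two time bins
    # and return the cosine distance between them.
    feats = sorted(set(counts1) | set(counts2))
    x1 = np.array([counts1.get(f, 0) for f in feats], dtype=float)
    x2 = np.array([counts2.get(f, 0) for f in feats], dtype=float)
    return 1.0 - x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2))

def top_n(sorted_distances: np.ndarray) -> int:
    # Find a change point in the descending ranking of distances;
    # words before the break are labeled 'changed'.
    algo = rpt.Pelt(model='rbf').fit(sorted_distances.reshape(-1, 1))
    return algo.predict(pen=3)[0]  # assumption: penalty picked ad hoc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling",
"sec_num": "2.2"
},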
{
"text": "Earlier statistical significance tests were applied to semantic change detection methods based on contextual word embeddings (Liu et al., 2021) . Permutation-based statistical testing can be applied when data is limited. We used permutation tests to improve the results obtained with grammatical profiling, as the aim of the permutation test is to discover whether the observed test statistic (i.e. the cosine distance) is significantly different from zero (Liu et al., 2021) . Permutation tests reassigned group labels (time periods) to all observations by sampling without replacement. For binary change detection we calculated the default distance between grammar profiles. Then, we took sentence indices from the first and the second corpus for every target word and permute them by randomly splitting them between two time periods. If the number of possible permutations were less than 1000 we used all permutations. Then we calculated cosine distance between grammar profiles generated after shuffling. So, we have 2 sets of distances: the original cosine distance between grammar profiles and the permutated cosine distances between grammar profiles.",
"cite_spans": [
{
"start": 125,
"end": 143,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 457,
"end": 475,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling enhanced with permutation-based statistical tests",
"sec_num": "2.3"
},
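{
"text": "The core of the test can be sketched as follows; profiles() is a hypothetical helper that builds the category-count dictionary for a set of sentence indices, and profile_distance() is as sketched above:

import random

def permutation_pvalue(idx1: list, idx2: list, n_perm: int = 1000) -> float:
    # Observed statistic: cosine distance between the two profiles.
    observed = profile_distance(profiles(idx1), profiles(idx2))
    pooled = list(idx1) + list(idx2)
    larger = 0
    for _ in range(n_perm):
        random.shuffle(pooled)  # reassign time-period labels
        p1, p2 = pooled[:len(idx1)], pooled[len(idx1):]
        if profile_distance(profiles(p1), profiles(p2)) > observed:
            larger += 1
    return larger / n_perm  # share of permuted distances above the observed one",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling enhanced with permutation-based statistical tests",
"sec_num": "2.3"
},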
{
"text": "Let us assume, there were 5 permutations, so we got 5 distances, e.g., 0.1, 0.7, 0.4, 0.15, and 0.2, and the original cosine distance was 0.3. We took only those permutated cosine distances that were larger than the default cosine distance. In this example, these are 0.7 and 0.4 (two values). We divided the number of these larger permutated distances by the number of permutations. In this example, this is 2/5 which is a p-value (Liu et al., 2021) .",
"cite_spans": [
{
"start": 432,
"end": 450,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling enhanced with permutation-based statistical tests",
"sec_num": "2.3"
},
{
"text": "If the number of permutations were greater than 1000, the procedure was the same, but we corrected the p-value for every digit capacity, i.e., we took the first significance threshold as 0.05 and step-bystep reduced it till 0.005 (Liu et al., 2021) . In other words, we first randomly selected 1000 permutations and computed p-value. If this was larger 0.05, we stopped the procedure, otherwise took more permutations for more precise estimations.",
"cite_spans": [
{
"start": 230,
"end": 248,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling enhanced with permutation-based statistical tests",
"sec_num": "2.3"
},
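{
"text": "One way to read this stopping rule, sketched under the assumption that permutation_pvalue() from the previous snippet is available and that the sample size and threshold are tightened by one digit per step:

def refined_pvalue(idx1, idx2):
    n_perm, threshold = 1000, 0.05
    while True:
        p = permutation_pvalue(idx1, idx2, n_perm=n_perm)
        # Stop once the estimate clears the current threshold, or once
        # the threshold has been tightened down to 0.005.
        if p > threshold or threshold <= 0.005:
            return p
        n_perm *= 10    # more permutations for a more precise estimate
        threshold /= 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling enhanced with permutation-based statistical tests",
"sec_num": "2.3"
},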
{
"text": "As a result, we had the cosine distance between grammar profiles and the p-value for every target word. For binary change detection we sorted these values both by the distance and the p-value and labeled top n target words as changed. The coefficient n was derived with a certain set of heuristics and is subject for a further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical profiling enhanced with permutation-based statistical tests",
"sec_num": "2.3"
},
{
"text": "The submission results are presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Clustering turned out to be the best one among all our methods. In graded change discovery it was proved to be better than both baselines and took the 3rd place in the leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "The clustering method was our only method that was applied to the optional Gain/Loss task, however, it did not show good results. While this method surpassed the baseline numbers, it proved to be significantly inferior to the other methods participating in the task. It probably happened because we approached the Gain/Loss task as a separate task. The better approach might have been to somehow use the results we received on the main Binary task in order to calculate the gain/loss values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "There is another problem with the method that we can think of. The method assigned a gain/loss label for the word if the number of clusters in two time epochs differs even by one. A better approach would probably have been to decrease the sensitivity of the method and to ignore the insignificant differences between the number of clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Grammatical profiling demonstrated the worst results among three methods we used (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "However, the results indicate that it was significantly improved by applying a permutation test. It should also be noted that grammatical profiling with a permutation test demonstrated the best precision among all participants and was only outperformed by the baseline 1. We also applied grammatical profiling for graded change discovery after the competition. The result was worse than baseline 1, but better than baseline 2 (see Table 1 ). Table 3 presents the top 10 words with the largest difference between BERT-based predictions and the gold standard. Closer inspection shows that there are two error types. According to the standard, some words (actitud, banco) changed a lot, while our prediction for these words appeared to be much lower. Meanwhile, there were words that did not change, however, our model labeled them as changed (propiamente, fallecimiento, viernes, distribuir, variedad, socialista) . Within the top 10 words, the model fell into errors on the side of changing more often. these words was much lower than the gold standard. Some incorrect predictions are the same with the incorrect predictions obtained with the BERTbased method (actitud, canal, banco). A likely explanation is that these words have a complicated semantic structure and more than one meaning.",
"cite_spans": [
{
"start": 840,
"end": 911,
"text": "(propiamente, fallecimiento, viernes, distribuir, variedad, socialista)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 442,
"end": 449,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Further studies need to be carried out in order to evaluate the combination of profiling with statistical significance testing for other languages. The great advantage of grammatical profiling is that computational resources required for that method are quite low. It is helpful when the number of target words is great, like in this shared task for graded discovery. Although the BERT-based method demonstrated the best results, more detailed error analysis is still required. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://huggingface.co/dccuchile/ bert-base-spanish-wwm-uncased 3 https://github.com/huggingface/ transformers/tree/main/examples/pytorch/ language-modeling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The model was spanish-gsd-ud-2.5-191206.udpipe 5 These features are Universal Dependencies features https://universaldependencies.org/u/ feat/index.html 6 https://github.com/glnmario/ semchange-profiling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part through computational resources of HPC facilities at HSE University (Kostenetskiy et al., 2021) .",
"cite_spans": [
{
"start": 104,
"end": 131,
"text": "(Kostenetskiy et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Diacrita @ evalita2020: Overview of the evalita2020 diachronic lexical semantics (diacr-ita) task",
"authors": [
{
"first": "Pierpaolo",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Annalina",
"middle": [],
"last": "Caputo",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Pierluigi",
"middle": [],
"last": "Cassotti",
"suffix": ""
},
{
"first": "Rossella",
"middle": [],
"last": "Varvara",
"suffix": ""
}
],
"year": 2020,
"venue": "EVALITA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierpaolo Basile, Annalina Caputo, Tommaso Caselli, Pierluigi Cassotti, and Rossella Varvara. 2020. Diacr- ita @ evalita2020: Overview of the evalita2020 diachronic lexical semantics (diacr-ita) task. In EVALITA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The directionality of grammaticalization in spanish",
"authors": [],
"year": 2008,
"venue": "Concepci\u00f3n Company Company",
"volume": "9",
"issue": "",
"pages": "200--224",
"other_ids": {
"DOI": [
"10.1075/jhp.9.2.03com"
]
},
"num": null,
"urls": [],
"raw_text": "Concepci\u00f3n Company Company. 2008. The direction- ality of grammaticalization in spanish. Journal of Historical Pragmatics, 9(2):200-224.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lscdiscovery: A shared task on semantic change discovery and detection in spanish",
"authors": [
{
"first": "Frank",
"middle": [
"D"
],
"last": "Zamora-Reina",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the 3rd International Workshop on Computational Approaches to Historical Language Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank D. Zamora-Reina, Felipe Bravo-Marquez, and Dominik Schlechtweg. 2022. Lscdiscovery: A shared task on semantic change discovery and de- tection in spanish. In Proceedings of the 3rd Inter- national Workshop on Computational Approaches to Historical Language Change, Dublin, Ireland. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "HPC resources of the higher school of economics",
"authors": [
{
"first": "P",
"middle": [
"S"
],
"last": "Kostenetskiy",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Chulkevich",
"suffix": ""
},
{
"first": "V",
"middle": [
"I"
],
"last": "Kozyrev",
"suffix": ""
}
],
"year": 2021,
"venue": "Journal of Physics: Conference Series",
"volume": "1740",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1088/1742-6596/1740/1/012050"
]
},
"num": null,
"urls": [],
"raw_text": "P. S. Kostenetskiy, R. A. Chulkevich, and V. I. Kozyrev. 2021. HPC resources of the higher school of eco- nomics. Journal of Physics: Conference Series, 1740(1):012050.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lexical semantic change discovery",
"authors": [
{
"first": "Sinan",
"middle": [],
"last": "Kurtyigit",
"suffix": ""
},
{
"first": "Maike",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "6985--6998",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.543"
]
},
"num": null,
"urls": [],
"raw_text": "Sinan Kurtyigit, Maike Park, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2021. Lexical semantic change discovery. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6985-6998, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Threepart diachronic semantic change dataset for Russian",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Lidia",
"middle": [],
"last": "Pivovarova",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd International Workshop on Computational Approaches to Historical Language Change 2021",
"volume": "",
"issue": "",
"pages": "7--13",
"other_ids": {
"DOI": [
"10.18653/v1/2021.lchange-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov and Lidia Pivovarova. 2021. Three- part diachronic semantic change dataset for Russian. In Proceedings of the 2nd International Workshop on Computational Approaches to Historical Language Change 2021, pages 7-13, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Grammatical profiling for semantic change detection",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Lidia",
"middle": [],
"last": "Pivovarova",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 25th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "423--434",
"other_ids": {
"DOI": [
"10.18653/v1/2021.conll-1.33"
]
},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov, Lidia Pivovarova, and Mario Giu- lianelli. 2021. Grammatical profiling for semantic change detection. In Proceedings of the 25th Confer- ence on Computational Natural Language Learning, pages 423-434, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Statistically significant detection of semantic shifts using contextual word embeddings",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Medlar",
"suffix": ""
},
{
"first": "Dorota",
"middle": [],
"last": "Glowacka",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eval4nlp-1.11"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Alan Medlar, and Dorota Glowacka. 2021. Statistically significant detection of semantic shifts using contextual word embeddings. In Proceedings of the 2nd Workshop on Evaluation and Compari- son of NLP Systems, pages 104-113, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scalable and interpretable semantic change detection",
"authors": [
{
"first": "Syrielle",
"middle": [],
"last": "Montariol",
"suffix": ""
},
{
"first": "Matej",
"middle": [],
"last": "Martinc",
"suffix": ""
},
{
"first": "Lidia",
"middle": [],
"last": "Pivovarova",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4642--4652",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.369"
]
},
"num": null,
"urls": [],
"raw_text": "Syrielle Montariol, Matej Martinc, and Lidia Pivovarova. 2021. Scalable and interpretable semantic change detection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4642-4652, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2020 task 1: Unsupervised lexical semantic change detection",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Mcgillivray",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Hengchen",
"suffix": ""
},
{
"first": "Haim",
"middle": [],
"last": "Dubossarsky",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--23",
"other_ids": {
"DOI": [
"10.18653/v1/2020.semeval-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. SemEval-2020 task 1: Unsupervised lexical semantic change detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1-23, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "88--99",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3009"
]
},
"num": null,
"urls": [],
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Selective review of offline change point detection methods",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Truong",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Oudre",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Vayatis",
"suffix": ""
}
],
"year": 2020,
"venue": "Signal Processing",
"volume": "167",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.sigpro.2019.107299"
]
},
"num": null,
"urls": [],
"raw_text": "Charles Truong, Laurent Oudre, and Nicolas Vayatis. 2020. Selective review of offline change point detec- tion methods. Signal Processing, 167:107299.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": ".",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "The figure illustrates the change in the F1-score and the Spearman rank correlation depending on the number of clusters used. The colored dots are the best results for graded change discovery. Three of them are achieved when the number of clusters is a multiple of 7. The green dot is the best number of clusters, equal to 28.",
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>presents the top 10 words with the largest</td></tr><tr><td>difference between grammatical profiling predic-</td></tr><tr><td>tions and the gold standard. Our prediction for</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF2": {
"content": "<table><tr><td/><td>Graded</td><td/></tr><tr><td colspan=\"3\">Method/Team COMPARE Spearman</td></tr><tr><td/><td>Baselines</td><td/></tr><tr><td>Baseline 1</td><td>0.561</td><td>0.543</td></tr><tr><td>Baseline 2</td><td>0.088</td><td>0.092</td></tr><tr><td colspan=\"3\">Our submissions: HSE team</td></tr><tr><td>Clusters</td><td>0.558</td><td>0.553</td></tr><tr><td>Grammar</td><td>-</td><td>0.390</td></tr><tr><td colspan=\"3\">Best submissions of other teams</td></tr><tr><td>GlossReader</td><td>0.842</td><td>0.735</td></tr><tr><td>DeepMistake</td><td>0.829</td><td>0.702</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Submission results for binary task: Clusters means embedding clustering method, Grammar means grammatical profiles and Stats means grammatical profiles combined with a permutation test. Grammatical profiling for graded discovery was made after the competition.",
"html": null
},
"TABREF3": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Submission results for graded task. Grammatical profiling was made after the competition.",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>word</td><td>change graded</td><td>change graded golden</td><td>change graded difference</td></tr><tr><td>marco</td><td>0.018</td><td>1</td><td>0.982</td></tr><tr><td>prima</td><td>0.118</td><td>1</td><td>0.882</td></tr><tr><td>actitud</td><td>0.115</td><td>0.925</td><td>0.810</td></tr><tr><td colspan=\"2\">indicativo 0.202</td><td>1</td><td>0.798</td></tr><tr><td>canal</td><td>0.240</td><td>1</td><td>0.760</td></tr><tr><td>disco</td><td>0.167</td><td>0.915</td><td>0.748</td></tr><tr><td colspan=\"2\">pendiente 0.096</td><td>0.781</td><td>0.685</td></tr><tr><td colspan=\"2\">corriente 0.072</td><td>0.753</td><td>0.681</td></tr><tr><td>banco</td><td>0.246</td><td>0.925</td><td>0.678</td></tr><tr><td>c\u00f3lera</td><td>0.098</td><td>0.741</td><td>0.643</td></tr></table>",
"num": null,
"type_str": "table",
"text": "BERT-based predictions compared with the gold standard.",
"html": null
},
"TABREF6": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Grammatical profiles predictions compared with the gold standard.",
"html": null
}
}
}
}