{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:19:50.011926Z"
},
"title": "SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical Semantic Change",
"authors": [
{
"first": "Maur\u00edcio",
"middle": [],
"last": "Gruppi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute Troy",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Sibel",
"middle": [],
"last": "Adal\u0131",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute Troy",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research Yorktown Heights",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes SChME (Semantic Change Detection with Model Ensemble), a method used in SemEval-2020 Task 1 on unsupervised detection of lexical semantic change. SChME uses a model ensemble combining signals of distributional models (word embeddings) and word frequency models where each model casts a vote indicating the probability that a word suffered semantic change according to that feature. More specifically, we combine cosine distance of word vectors combined with a neighborhood-based metric we named Mapped Neighborhood Distance (MAP), and a word frequency differential metric as input signals to our model. Additionally, we explore alignment-based methods to investigate the importance of the landmarks used in this process. Our results show evidence that the number of landmarks used for alignment has a direct impact on the predictive performance of the model. Moreover, we show that languages that suffer less semantic change tend to benefit from using a large number of landmarks, whereas languages with more semantic change benefit from a more careful choice of landmark number for alignment.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes SChME (Semantic Change Detection with Model Ensemble), a method used in SemEval-2020 Task 1 on unsupervised detection of lexical semantic change. SChME uses a model ensemble combining signals of distributional models (word embeddings) and word frequency models where each model casts a vote indicating the probability that a word suffered semantic change according to that feature. More specifically, we combine cosine distance of word vectors combined with a neighborhood-based metric we named Mapped Neighborhood Distance (MAP), and a word frequency differential metric as input signals to our model. Additionally, we explore alignment-based methods to investigate the importance of the landmarks used in this process. Our results show evidence that the number of landmarks used for alignment has a direct impact on the predictive performance of the model. Moreover, we show that languages that suffer less semantic change tend to benefit from using a large number of landmarks, whereas languages with more semantic change benefit from a more careful choice of landmark number for alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The problem of detecting Lexical Semantic Change (LSC) consists of measuring and identifying change in word sense across time, such as in the study of language evolution, or across domains, such as determining discrepancies in word usage over specific communities (Schlechtweg et al., 2019) . One of the greatest challenges of this problem is the difficulty of assessing and evaluating models and results, as well as the limited amount of annotated data (Schlechtweg and Walde, 2020) . For that reason, the vast majority of the related work in the literature pursue this problem from an unsupervised perspective, that is, detecting semantic change without having prior knowledge of \"truth\". The importance of such task is manifold: to humans, it can be a powerful tool for studying language change and its cultural implications; to machines, it can be used to improve language models in downstream tasks such as unsupervised word translation, and fine-tuning of word embeddings (Joulin et al., 2018; Bojanowski et al., 2019) . In this task, the goal is to develop a method for unsupervised detection of lexical semantic change over time by comparing across two corpora from different time periods in four languages: English, German, Latin, and Swedish . Particularly, we are required to solve two sub-tasks: binary classification of semantic change (Subtask 1), and semantic change ranking (Subtask 2).",
"cite_spans": [
{
"start": 264,
"end": 290,
"text": "(Schlechtweg et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 454,
"end": 483,
"text": "(Schlechtweg and Walde, 2020)",
"ref_id": "BIBREF14"
},
{
"start": 978,
"end": 999,
"text": "(Joulin et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 1000,
"end": 1024,
"text": "Bojanowski et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many ways in which a word may change. Specifically, a word w may change sense because it has been completely replaced by a synonym w s (lexical replacement), or because it gains a new meaning, in which case word w may keep or lose its previous meaning across time and domain (Kutuzov et al., 2018) . Each type of change has its unique characteristics and may require different approaches in order to be detected. In this paper we describe a novel model ensemble method based on different features (signals) that we can extract from the text using distribution models (skip-gram word embeddings) and word frequency. Our model is primarily based on features extracted from independently trained Word2Vec embeddings aligned with orthogonal procrustes (Sch\u00f6nemann, 1966) , such as cosine distance, but also introduces two novel measures based on second-order distances and word frequency. Based on the distribution of each feature, we predict the probability that a word has suffered change through an anomaly detection approach. The final decision is made by soft voting (averaging) all the probabilities. For binary classification (Subtask 1) a threshold is applied to the final vote, for ranking (Subtask 2), the output from the soft voting is used as the ranking prediction.",
"cite_spans": [
{
"start": 285,
"end": 307,
"text": "(Kutuzov et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 758,
"end": 776,
"text": "(Sch\u00f6nemann, 1966)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results show that second order methods and different combinations outperform the frequently used cosine distance in some subtasks and languages. Furthermore, we illustrate that the methods are sensitive to the degree of change in the language. It is possible to improve performance of these methods by aligning two embeddings of the same language from different time slices on a subset of words instead of all words. This opens a new avenue of research on finding optimal words for alignment. The code for the model can be obtained at https://github.com/mgruppi/schme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most methods for detecting semantic change are based on the distributional property of word semantics. The general idea is to compute contextual information of word w in each time or domain, and apply a measure of difference or distance between the observed contexts of w. Some of the first methods for detecting semantic change compute context information using a co-occurrence matrix within a pre-defined window of size L (Sagi et al., 2009; Cook and Stevenson, 2010) . This means that, for a vocabulary of size n, one computes a n \u00d7 n matrix M where M i,j is the frequency in which word i and j co-occur within a window of L words. This often yields a highly sparse matrix M , which is typically reduced in dimensionality by techniques such as Singular Value Decomposition (SVD). Once the matrices are computed, the contextual difference is computed by the cosine distance between the vectors.",
"cite_spans": [
{
"start": 424,
"end": 443,
"text": "(Sagi et al., 2009;",
"ref_id": "BIBREF13"
},
{
"start": 444,
"end": 469,
"text": "Cook and Stevenson, 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Distributed word vector representations such as the ones obtained by the skip-gram with negative sampling (SGNS) (Mikolov et al., 2013) are forms of learning distributional information without the need for computing sparse co-occurrence matrices. A work by Hamilton et al. (2016b) presents a method for detecting semantic change using SGNS word embeddings learned from each corpora and aligned with orthogonal procrustes. The semantic change is, again, computed by the cosine distance between vectors in each time/domain. In another study (Hamilton et al., 2016a), the authors introduce a measure of semantic change based on how the neighborhood of a word changes named Local Neighborhood Change based on the number of words in common.",
"cite_spans": [
{
"start": 113,
"end": 135,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To eliminate the need for alignment, several authors have proposed dynamic word embeddings techniques, which jointly learn distributional word representations using the assumption that words are connected across time (Bamler and Mandt, 2017; Rudolph and Blei, 2018; Yao et al., 2018) . The main assumption in such methods is that word changes are considerably small between adjacent time stamps t 1 and t 2 , i.e. words evolve smoothly, thus word representations should be close between these periods. We argue that the assumption that all words in t 1 and t 2 should be smoothly connected through time does not always hold. This is because the corpora are aggregated over several years/decades/centuries, thus the semantic change may be drastic, and more similar to a cross-domain scenario than a diachronic one. We illustrate this by the corpora in this task and the use of a subset of landmarks for alignment that has not been investigated in the literature.",
"cite_spans": [
{
"start": 217,
"end": 241,
"text": "(Bamler and Mandt, 2017;",
"ref_id": "BIBREF1"
},
{
"start": 242,
"end": 265,
"text": "Rudolph and Blei, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 266,
"end": 283,
"text": "Yao et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The data provided in this task consists of two corpora for each language, each corpus corresponding to different time periods t 1 and t 2 , as well as a list of target words for which we have to predict binary class and rank with respect to magnitude of the semantic change between t 1 and t 2 . The corpora used for each language are summarized in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 349,
"end": 356,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview and Data",
"sec_num": "3"
},
{
"text": "Most of our features are based on the alignment of word embeddings. Thus, the first step of our system is to train a Word2Vec model on corpora C 1 and C 2 for each language, let W 1 and W 2 denote the resulting word embeddings, respectively. Since W 1 and W 2 are learned independently, we cannot directly compare their vectors. Hence, similarly to Hamilton et al. (2016b), we apply orthogonal procrustes (OP)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "Language Corpora t 1 t 2 English",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "CCOHA (Alatrash et al., 2020) 1810-1860 1960-2010 German DTA + BZ + ND ) 1800 -1900 1946 -1990 Latin LatinISE (McGillivray and Kilgarriff, 2013 -200 -0 0-2000 Swedish KubHist (Borin et al., 2012) 1790-1830 1895-1903 Table 1 : Data provided for the task. In addition to the corpora, a set of target words is given, for which we need to generate outputs in substasks 1 and 2. (Sch\u00f6nemann, 1966) to align the word embeddings of the corpora. Given matrices A and B, the objective of OP is to learn an orthogonal transformation matrix Q that minimizes the sum of squared distances AQ \u2212 B 2 . Because Q is orthogonal, the transformation AQ is only subject to rotation and reflection, which preserves the relationships between the word vectors in A. We learn the transformation matrix Q from the alignment of W 1 and W 2 , updating W 1 \u2190 W 1 Q. Now the word vectors in W 1 can be directly compared to W 2 . In the following sections, we'll discuss the distance metrics used by the model to measure semantic change.",
"cite_spans": [
{
"start": 6,
"end": 29,
"text": "(Alatrash et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 71,
"end": 77,
"text": ") 1800",
"ref_id": null
},
{
"start": 78,
"end": 83,
"text": "-1900",
"ref_id": null
},
{
"start": 84,
"end": 88,
"text": "1946",
"ref_id": null
},
{
"start": 89,
"end": 94,
"text": "-1990",
"ref_id": null
},
{
"start": 95,
"end": 143,
"text": "Latin LatinISE (McGillivray and Kilgarriff, 2013",
"ref_id": null
},
{
"start": 175,
"end": 195,
"text": "(Borin et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 374,
"end": 392,
"text": "(Sch\u00f6nemann, 1966)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
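{
"text": "The following is an illustrative sketch of this alignment step (not the authors' released code), assuming W1 and W2 are numpy arrays of shape (n, d) whose rows correspond to the same landmark words in both spaces:\n\nimport numpy as np\n\ndef procrustes_align(W1, W2):\n    # Solve min_Q ||W1 Q - W2||^2 subject to Q being orthogonal.\n    # The optimum is Q = U V^T, where U S V^T is the SVD of W1^T W2.\n    U, _, Vt = np.linalg.svd(W1.T @ W2)\n    Q = U @ Vt\n    # Rotation/reflection only, so the geometry within W1 is preserved.\n    return W1 @ Q, Q\n\nAfter this step, each row of the returned matrix can be compared directly with the corresponding row of W2, e.g. via cosine distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},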
{
"text": "Cosine Distance (COS). One of the most used metric for comparing word vectors is the cosine distance. The cosine distance between two vectors in a single source indicates how closely distributed the words are. In the semantic change scenario, we compute the cosine distance for word w as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
{
"text": "d cos = 1 \u2212 cos(v 1 , v 2 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
{
"text": "where v 1 and v 2 are the word vectors of w in W 1 and W 2 , respectively. Ideally, a small value of d cos would imply that the contexts for w is similar in both corpora C 1 and C 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
{
"text": "Mapped Neighborhood Change (MAP). This measure looks at how a word moves away from its neighborhood across both corpora. To that end, we compute a second-order cosine distance vector s 1 (v 1 , N 1 ) between v 1 and its k nearest neighbors in W 1 , which we'll denote as the set N 1 . Then we compute another second-order vector s 2 (v 1 , N 1 ) using v 1 but looking for corresponding vectors of each word in N 1 in the space of the second corpus W 2 . The mapped neighborhood change is then computed as the cosine distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
{
"text": "d map (v 1 ) = d cos (s 1 (v 1 , N 1 ), s 2 (v 1 , N 1 )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
{
"text": "Although this method uses second-order distances like the Local Neighborhood Change (LNC) (Hamilton et al., 2016a), it differs from it by computing the distances between the aligned input embeddings, while LNC only computes such distances for vectors within a single embedding matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
{
"text": "Frequency Differential (FREQ). Let f 1 and f 2 be the relative frequencies of word w in C 1 and C 2 . We define the frequency differential for w as f (w) = f 1 \u2212f 2 f 1 +f 2 . Positive values indicate increase while negative values indicate decrease in frequency across the corpora. We argue that a steep increase in frequency may indicate indicate change more strongly than frequency decrease, which may happen due to a word becoming less popular or being replaced by another word without losing its original sense. This assumption is only viable because we know that C 1 always happens earlier in time than C 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},
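{
"text": "A compact sketch of the three signals follows (illustrative, not the released SChME code), assuming W1 and W2 are already aligned numpy matrices row-indexed by a shared vocabulary dictionary vocab, f1 and f2 are relative frequencies, and the neighborhood size k is a free parameter:\n\nimport numpy as np\n\ndef cos_dist(u, v):\n    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))\n\ndef cos_feature(w, W1, W2, vocab):\n    # COS: cosine distance between w's vectors in the two aligned spaces.\n    i = vocab[w]\n    return cos_dist(W1[i], W2[i])\n\ndef map_feature(w, W1, W2, vocab, k=25):\n    # MAP: compare the second-order distance vector from w to its k nearest\n    # neighbors in W1 with the distances from the same vector to those\n    # neighbors' vectors in the aligned space W2.\n    i = vocab[w]\n    v1 = W1[i]\n    sims = (W1 @ v1) / (np.linalg.norm(W1, axis=1) * np.linalg.norm(v1))\n    neighbors = np.argsort(-sims)[1:k + 1]  # skip w itself\n    s1 = np.array([cos_dist(v1, W1[j]) for j in neighbors])\n    s2 = np.array([cos_dist(v1, W2[j]) for j in neighbors])\n    return cos_dist(s1, s2)\n\ndef freq_feature(f1, f2):\n    # FREQ: frequency differential as defined above.\n    return (f1 - f2) / (f1 + f2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Measures",
"sec_num": "3.2"
},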
{
"text": "We compute the aforementioned features on all words in the intersection of the vocabularies of C 1 and C 2 , we use the observed feature distributions to determine potentially changed words. Let X i denote the random variable associated with the distribution of feature i. We work under the assumption that small values of X i denote little or no semantic change to a word. Moreover, unlikely high values of X i indicate a high chance that the word suffered change according to metric i. We define small and large values with respect to all the computed values in the distribution. For instance, if the cosine distance computed for a word is large when compared to the cosine distances of the other words, it is likely that the word has changed. Therefore, we define the probability of change for a word whose feature value is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.3"
},
{
"text": "x i as P i (x i ) = P r(X i \u2264 x i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.3"
},
{
"text": "Thus, P i is the cumulative distribution function (CDF) of X i , describing how unlikely high x i is according to the distribution of X i . We aggregate the probability output of each feature P i (x i ) by applying soft voting to each feature's prediction. The final prediction for a feature vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.3"
},
{
"text": "x = (x 1 , x 2 , ..., x k ) is P (x) = 1 k k 1 P i (x i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.3"
},
{
"text": "For classification, a threshold is a applied to P (x) in order to determine the class. For ranking, the score P (x) is used directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.3"
},
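{
"text": "As an illustrative sketch of the ensemble (with empirical CDFs standing in for the P_i; names are ours, not from the released code), assuming feature_values maps each feature name to a 1-D numpy array of values over the shared vocabulary:\n\nimport numpy as np\n\ndef empirical_cdf(values):\n    # Return P_i(x) = Pr(X_i <= x) estimated from the observed values.\n    sorted_vals = np.sort(values)\n    def cdf(x):\n        return np.searchsorted(sorted_vals, x, side='right') / len(sorted_vals)\n    return cdf\n\ndef schme_scores(feature_values):\n    # Soft vote: average each feature's CDF evaluated at every word's value.\n    votes = []\n    for vals in feature_values.values():\n        cdf = empirical_cdf(vals)\n        votes.append(np.array([cdf(x) for x in vals]))\n    return np.mean(votes, axis=0)\n\ndef classify(scores, t=0.75):\n    # Subtask 1: threshold the soft vote; Subtask 2 uses the scores directly.\n    return (scores > t).astype(int)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.3"
},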
{
"text": "We conduct all the experiments on the data provided for SemEval-2020 Task 1 for all four languages. Given that most of the corpora have been pre-processed with lemmatization and tokenization, our preprocessing consists of removing words whose count is less than 10, and tokenizing words at spaces. In this section we present the experiments and results for the model submitted to the task, as well as additional analysis of the model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We begin by learning the distributional representations of words in each corpora using Gensim's (\u0158eh\u016f\u0159ek and Sojka, 2010) implementation of Word2Vec . The parameters for Word2Vec are: vector size d = 300, window L = 10, negative samples ng = 5, and minimum word count min wc = 10. Next, we align the learned word vectors via OP using the intersecting vocabulary as landmarks. Then, we compute the distance metrics and their distributions so that we can get the vote P r(X i \u2264 x i ). Finally, we apply the model ensemble to different feature configurations to predict a final score. For classification, we apply a threshold t to the model output P (x), such that the predicted class is y = 1 if P (x) > t, and y = 0 otherwise. For ranking, the final score P (x) is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
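{
"text": "A minimal sketch of this training step using Gensim (assuming Gensim 4.x argument names; sentences is a list of token lists, and the alignment and feature functions are as in the earlier sketches):\n\nfrom gensim.models import Word2Vec\n\ndef train_embeddings(sentences, d=300, window=10, negative=5, min_count=10):\n    # Skip-gram with negative sampling (sg=1), matching the parameters above.\n    model = Word2Vec(sentences, vector_size=d, window=window, sg=1,\n                     negative=negative, min_count=min_count)\n    return model.wv\n\n# Example usage (numpy assumed imported as np):\n# kv1, kv2 = train_embeddings(corpus_t1), train_embeddings(corpus_t2)\n# shared = [w for w in kv1.key_to_index if w in kv2.key_to_index]\n# W1 = np.stack([kv1[w] for w in shared])\n# W2 = np.stack([kv2[w] for w in shared])\n# W1_aligned, Q = procrustes_align(W1, W2)  # sketch from Section 3.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},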
{
"text": "Since there was no validation data during the evaluation phase, our submissions included multiple feature and threshold settings. The feature configurations are combinations of the cosine distance (COS), mapped neighborhood distance (MAP), and frequency differential (FREQ). The applied threshold levels are {0.5, 0.75, 0.9}. Our team (RPI-Trust) ranked 4th place in Subtask 1 with a score of 0.660, and 6th place in Subtask 2 with a score of 0.427 in the evaluation phase. each feature model being able to capture different types of change. For example, many events in between t 1 and t 2 for the English corpora may have contributed to the evolution of the language, such as the Second Industrial Revolution, and the World Wars. Technological development introduced several new concepts such as (air) plane and (record) player which were unheard of in t 1 , the detection of such change relies on signals that can indicate a completely new use of a word while potentially keeping its previous senses. The results for the ranking task are shown in Table 3 . Notice that the best feature configurations for classification are not necessarily the best for ranking. MAP performs best for Latin which might be due to potential big semantic shift in this language which is better captured by incorporating neighborhood information. As seen in the decay column, COS and COS+MAP+FREQ (used in our submission) are the overall best performing methods across the two tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 1049,
"end": 1056,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "When executing procrustes alignment, one must choose which and how many words to align on. Since alignment seeks to enforce short distances between landmark words, we hypothesize that this method may mask some of the semantic shift involving the landmark words. To test this, we analyze the effect of the number of landmark words over the model predictions by executing procrustes alignment at using the top n most frequent landmark words with n \u2208 [300, N ] , where N is the size of the intersecting vocabulary, keeping a classifier threshold fixed at t = 0.75. Figure 1 shows the results for all four languages.",
"cite_spans": [
{
"start": 448,
"end": 457,
"text": "[300, N ]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 562,
"end": 570,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Landmarks Are Important",
"sec_num": "5.1"
},
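{
"text": "An illustrative sketch of this experiment (names are ours), assuming the rows of W1 and W2 are sorted by frequency of the shared vocabulary and eval_fn scores predictions against the gold labels:\n\nimport numpy as np\n\ndef align_on_top_n(W1, W2, n):\n    # Learn Q from only the n most frequent shared words (rows 0..n-1),\n    # then apply it to every vector in W1.\n    U, _, Vt = np.linalg.svd(W1[:n].T @ W2[:n])\n    Q = U @ Vt\n    return W1 @ Q\n\ndef sweep_landmarks(W1, W2, eval_fn, N, start=300, step=500):\n    # Re-align with an increasing number of landmarks and record the score,\n    # keeping the classification threshold fixed (t = 0.75 in our runs).\n    results = {}\n    for n in range(start, N + 1, step):\n        results[n] = eval_fn(align_on_top_n(W1, W2, n), W2)\n    return results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Landmarks Are Important",
"sec_num": "5.1"
},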
{
"text": "These results present evidence to our argument: using more landmark words in the alignment procedure favors German and Swedish that likely have less semantic shift compared to Latin and English. Notice that both corpora present class imbalance leaning towards unchanged words, and show increased accuracy as the number of landmarks increase. On the other hand, the same is not true for English, which has more balanced classes, nor for Latin which is unbalanced towards changed words. In both these languages, the classification accuracy peaks at some n < N and then decreases, thus showing that using all possible words as landmarks may decrease the accuracy. Figure 1 : (a) Accuracy in Subtask 1 using different numbers of landmark words for each language. Notice how German and Swedish do not show a decrease in accuracy despite the large number of landmarks used, whereas English and Latin have optimal performance at some point before the maximum; (b) Ranking performance according to number of landmarks shows a different trend from that of the binary classification with Swedish decreasing in performance as the number of landmarks grow.",
"cite_spans": [],
"ref_spans": [
{
"start": 661,
"end": 669,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Landmarks Are Important",
"sec_num": "5.1"
},
{
"text": "We presented a model for unsupervised detection of semantic change based on anomaly detection over a selection of features. SChME works directly on the input corpora, not requiring language-specific pre-trained models. The model ensemble is agnostic to the feature models, which means any measure of change could be easily incorporated to it, if desired. Our results show that the model parameters must be chosen carefully for each task and language. Particularly, we have shown that the choice of landmarks for alignment is strictly related to the degree of change of a language. In future work, we plan on addressing this issue by developed principled ways of choosing the words to align so that the semantic change is revealed more accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Post-EvaluationWe evaluate our model on the provided test data in the post-evaluation phase. First, we fix a threshold of t = 0.75, then we use different feature combinations to evaluate the performance on each language. Classification results, seen inTable 2, show that there is no single best feature configuration for all languages. This may happen because each language evolved differently between t 1 and t 2 , and having",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi. edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Clean corpus of historical american english",
"authors": [
{
"first": "Reem",
"middle": [],
"last": "Alatrash",
"suffix": ""
},
{
"first": "Doninik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC'20). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reem Alatrash, Doninik Schlechtweg, Jonas Kuhn, and Sabine Schulte. 2020. Clean corpus of historical amer- ican english. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC'20). European Language Resources Association (ELRA).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dynamic word embeddings",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Bamler",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Mandt",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "380--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 380-389. JMLR. org.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Updating pre-trained word vectors and text classifiers using monolingual alignment",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Onur",
"middle": [],
"last": "Celebi",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.06241"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Onur Celebi, Tomas Mikolov, Edouard Grave, and Armand Joulin. 2019. Updating pre-trained word vectors and text classifiers using monolingual alignment. arXiv preprint arXiv:1910.06241.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Korp -the corpus infrastructure of spr\u00e4kbanken",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Roxendal",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "474--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Borin, Markus Forsberg, and Johan Roxendal. 2012. Korp -the corpus infrastructure of spr\u00e4kbanken. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 474-478, Istanbul, Turkey, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatically identifying changes in the semantic orientation of words",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2010,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Cook and Suzanne Stevenson. 2010. Automatically identifying changes in the semantic orientation of words. In LREC.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cultural shift or linguistic drift? comparing two computational measures of semantic change",
"authors": [
{
"first": "William",
"middle": [
"L"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing",
"volume": "2016",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. Cultural shift or linguistic drift? comparing two computational measures of semantic change. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing, volume 2016, page 2116. NIH Public Access.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Diachronic word embeddings reveal statistical laws of semantic change",
"authors": [
{
"first": "William",
"middle": [
"L"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.09096"
]
},
"num": null,
"urls": [],
"raw_text": "William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statistical laws of semantic change. arXiv preprint arXiv:1605.09096.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07745"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. arXiv preprint arXiv:1804.07745.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Diachronic word embeddings and semantic shifts: a survey",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Terrence",
"middle": [],
"last": "Szymanski",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1384--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov, Lilja \u00d8vrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384-1397, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Tools for historical corpus research, and a corpus of latin",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Mcgillivray",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2013,
"venue": "New Methods in Historical Corpus Linguistics",
"volume": "1",
"issue": "3",
"pages": "247--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara McGillivray and Adam Kilgarriff. 2013. Tools for historical corpus research, and a corpus of latin. New Methods in Historical Corpus Linguistics, 1(3):247-257.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceed- ings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA. http://is.muni.cz/publication/884893/en.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dynamic embeddings for language evolution",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Rudolph",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Rudolph and David Blei. 2018. Dynamic embeddings for language evolution. In Proceedings of the 2018 World Wide Web Conference, pages 1003-1011.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantic density analysis: Comparing word meaning across time and phonetic space",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Sagi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kaufmann",
"suffix": ""
},
{
"first": "Brady",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Geometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2009. Semantic density analysis: Comparing word meaning across time and phonetic space. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 104-111. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simulating lexical semantic change from senseannotated data",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.03216"
]
},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg and Sabine Schulte im Walde. 2020. Simulating lexical semantic change from sense- annotated data. arXiv preprint arXiv:2001.03216.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A wind of change: Detecting and evaluating lexical semantic change across times and domains",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "H\u00e4tty",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Del"
],
"last": "Tredici",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "732--746",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg, Anna H\u00e4tty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. A wind of change: Detecting and evaluating lexical semantic change across times and domains. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 732-746, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semeval-2020 task 1: Unsupervised lexical semantic change detetion",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Mcgillivray",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Hengchen",
"suffix": ""
},
{
"first": "Haim",
"middle": [],
"last": "Dubossarsky",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. Semeval-2020 task 1: Unsupervised lexical semantic change detetion. In To appear in SemEval@COLING2020.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A generalized solution of the orthogonal procrustes problem",
"authors": [
{
"first": "Peter",
"middle": [
"H"
],
"last": "Sch\u00f6nemann",
"suffix": ""
}
],
"year": 1966,
"venue": "Psychometrika",
"volume": "31",
"issue": "1",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter H Sch\u00f6nemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dynamic word embeddings for evolving semantic discovery",
"authors": [
{
"first": "Zijun",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Weicong",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the eleventh acm international conference on web search and data mining",
"volume": "",
"issue": "",
"pages": "673--681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic word embeddings for evolving semantic discovery. In Proceedings of the eleventh acm international conference on web search and data mining, pages 673-681.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Classification accuracy for different feature configurations at a threshold t = 0.75. Majority class (Maj. Class) is a baseline classifier that outputs the most common class for each language (classes 0 or 1). Column decay indicates the accuracy deviation from the best performance for each feature model across languages. A smaller decay means the method performs close to optimal in all languages.",
"num": null,
"content": "<table><tr><td>Feature</td><td colspan=\"4\">English German Latin Swedish Decay (%)</td></tr><tr><td>COS</td><td>0.231</td><td>0.547 0.413</td><td>0.228</td><td>0.09</td></tr><tr><td>MAP</td><td>0.05</td><td>0.504 0.388</td><td>0.200</td><td>0.32</td></tr><tr><td>COS+FREQ</td><td>0.26</td><td>0.407 0.455</td><td>-0.009</td><td>0.32</td></tr><tr><td>COS+MAP+FREQ</td><td>0.203</td><td>0.433 0.424</td><td>0.268</td><td>0.12</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}