{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:39:40.357116Z"
},
"title": "Where New Words Are Born: Distributional Semantic Analysis of Neologisms and Their Semantic Neighborhoods",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Ryskina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {}
},
"email": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "San Diego"
}
},
"email": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We perform statistical analysis of the phenomenon of neology, the process by which new words emerge in a language, using large diachronic corpora of English. We investigate the importance of two factors, semantic sparsity and frequency growth rates of semantic neighbors, formalized in the distributional semantics paradigm. We show that both factors are predictive of word emergence although we find more support for the latter hypothesis. Besides presenting a new linguistic application of distributional semantics, this study tackles the linguistic question of the role of languageinternal factors (in our case, sparsity) in language change motivated by language-external factors (reflected in frequency growth). 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We perform statistical analysis of the phenomenon of neology, the process by which new words emerge in a language, using large diachronic corpora of English. We investigate the importance of two factors, semantic sparsity and frequency growth rates of semantic neighbors, formalized in the distributional semantics paradigm. We show that both factors are predictive of word emergence although we find more support for the latter hypothesis. Besides presenting a new linguistic application of distributional semantics, this study tackles the linguistic question of the role of languageinternal factors (in our case, sparsity) in language change motivated by language-external factors (reflected in frequency growth). 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural languages are constantly changing as the context of their users changes (Aitchison, 2001) . Perhaps the most obvious type of change is the introduction of new lexical items, or neologisms (a process called \"neology\"). Neologisms have various sources. They are occassionally coined out of whole cloth (grok). More frequently, they are loanwords from another language (tahini), derived words (unfriend), or existing words that have acquired new senses (as when web came to mean 'World Wide Web' and then 'the Internet'). While neology has long been of interest to linguists ( \u00a72), there have been relatively few attempts to study it as a global, systemic phenomenon. Computational modeling and analysis of neology is the focus of our work.",
"cite_spans": [
{
"start": 80,
"end": 97,
"text": "(Aitchison, 2001)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "What are the factors that predict neology? Certainly, social context plays a role. Close interaction between two cultures, for example, may result in increased borrowing (Appel and Muysken, 2006) . We hypothesize, though, that there are other factors involved-factors that can be modeled more directly. These factors can be understood in terms of supply and demand. Br\u00e9al (1904) introduced the idea that the distribution of words in semantic space tends towards uniformity. This framework predicts that new words would emerge where they would repair uniformity-where there was a space not occupied by a word. This could be viewed as supplydriven neology. Next, demand plays a role as well as supply (Campbell, 2013) : new words emerge in \"stylish\" neighborhoods, corresponding to domains of discourse that are increasing in importance (reflected by the increasing frequency of the words in those neighborhoods).",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Appel and Muysken, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 366,
"end": 378,
"text": "Br\u00e9al (1904)",
"ref_id": "BIBREF3"
},
{
"start": 699,
"end": 715,
"text": "(Campbell, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We operationalize these ideas using distributional semantics (Lenci, 2018) . To formalize the hypothesis of supply-driven neology for computational analysis, we measure sparsity of areas in the word embedding space where neologisms would later emerge. The demand-driven view of neology motivates our second hypothesis: neighborhoods in the embedding space containing words rapidly growing in frequency are more likely to produce neologisms. Both hypotheses are defined more formally in \u00a73.",
"cite_spans": [
{
"start": 61,
"end": 74,
"text": "(Lenci, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Having formalized our hypotheses in terms of word embeddings, we test them by comparing the distributions of the corresponding metrics for a set of automatically identified neologisms and a control set. Methodology of the word selection and hypothesis testing is detailed in \u00a74. We discuss the results in \u00a75, demonstrating evidence for both hypotheses, although the demand-driven hypothesis has more significant support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Neology Specific sources of neologisms have been studied: lexical borrowing (Taylor and Grant, 2014; Daulton, 2012) , morphological derivation (Lieber, 2017) , blends or portmanteaus (Cook, 2012; Renner et al., 2012) , clippings, acronyms, analogical coinages, and arbitrary coinages, but these studies have tended to look at neologisms atomistically, or to explicate the social conditions under which a new word entered a language rather than looking at neologisms in systemic context.",
"cite_spans": [
{
"start": 76,
"end": 100,
"text": "(Taylor and Grant, 2014;",
"ref_id": "BIBREF33"
},
{
"start": 101,
"end": 115,
"text": "Daulton, 2012)",
"ref_id": "BIBREF6"
},
{
"start": 143,
"end": 157,
"text": "(Lieber, 2017)",
"ref_id": "BIBREF23"
},
{
"start": 183,
"end": 195,
"text": "(Cook, 2012;",
"ref_id": "BIBREF5"
},
{
"start": 196,
"end": 216,
"text": "Renner et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To address this deficit, we look back to the seminal work of Michel Br\u00e9al, who introduced the idea that words exist in a semantic space. His work implies that, other things being equal, the semantic distribution of words tends towards uniformity (Br\u00e9al, 1904) . This is most explicit in his law of differentiation, which states that near synonyms move apart in semantic space, but has other implications as well. For example, this principle predicts that new words are more likely to emerge where they would increase uniformity. This could be viewed as supply-driven neology-new words appear to fill gaps in semantic space (to express concepts that are not currently lexicalized).",
"cite_spans": [
{
"start": 246,
"end": 259,
"text": "(Br\u00e9al, 1904)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In linguistic literature neology is often associated with new concepts or domains of increasing importance (Campbell, 2013) . Just as there are factors that predict where houses are built other than the availability of land, there are factors that predict where new words emerge other than the availability of semantic space. Demand, we hypothesize, plays a role as well as supply.",
"cite_spans": [
{
"start": 107,
"end": 123,
"text": "(Campbell, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Most existing computational research on the mechanisms of neology focuses on discovering sociolinguistic factors that predict acceptance of emerging words into the mainstream language and growth of their usage, typically in online social communities (Del Tredici and Fern\u00e1ndez, 2018) . The sociolinguistic factors can include geography (Eisenstein, 2017) , user demographics (Eisenstein et al., 2012 (Eisenstein et al., , 2014 , diversity of linguistic contexts (Stewart and Eisenstein, 2018) or word form (Kershaw et al., 2016) . To the best of our knowledge, there is no prior work focused on discovering factors predictive of the emergence of new words rather than modeling their lifecycle. We model language-external processes indirectly through their reflection in language, thereby capturing phenomena evident of our hypotheses through linguistic analysis.",
"cite_spans": [
{
"start": 250,
"end": 283,
"text": "(Del Tredici and Fern\u00e1ndez, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 336,
"end": 354,
"text": "(Eisenstein, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 375,
"end": 399,
"text": "(Eisenstein et al., 2012",
"ref_id": "BIBREF13"
},
{
"start": 400,
"end": 426,
"text": "(Eisenstein et al., , 2014",
"ref_id": "BIBREF14"
},
{
"start": 462,
"end": 492,
"text": "(Stewart and Eisenstein, 2018)",
"ref_id": "BIBREF31"
},
{
"start": 506,
"end": 528,
"text": "(Kershaw et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Word embeddings have been successfully used for different applications of the diachronic analysis of language (Tahmasebi et al., 2018) . The closest task to ours is analyzing meaning shift (tracking changes in word sense or emergence of new senses) by comparing word embedding spaces across time periods (Kulkarni et al., 2015; Xu and Kemp, 2015; Hamilton et al., 2016; Kutuzov et al., 2018) . Typically, embeddings are learned for discrete time periods and then aligned (but see Bamler and Mandt, 2017) . There has also been work on revising the existing methodology, specifically accounting for frequency effects in embeddings when modeling semantic shift (Dubossarsky et al., 2017) .",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "(Tahmasebi et al., 2018)",
"ref_id": "BIBREF32"
},
{
"start": 304,
"end": 327,
"text": "(Kulkarni et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 328,
"end": 346,
"text": "Xu and Kemp, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 347,
"end": 369,
"text": "Hamilton et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 370,
"end": 391,
"text": "Kutuzov et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 480,
"end": 503,
"text": "Bamler and Mandt, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 658,
"end": 684,
"text": "(Dubossarsky et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional semantics and language change",
"sec_num": null
},
{
"text": "Other related questions where distributional semantics proved useful were exploring the evolution of bias (Garg et al., 2018) and the degradation of age-and gender-predictive language models (Jaidka et al., 2018) .",
"cite_spans": [
{
"start": 106,
"end": 125,
"text": "(Garg et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 191,
"end": 212,
"text": "(Jaidka et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional semantics and language change",
"sec_num": null
},
{
"text": "This section outlines the two hypotheses we introduced earlier from the linguistic perspective, formalized in terms of distributional semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "Hypothesis 1 Neologisms are more likely to emerge in sparser areas of the semantic space. This corresponds to the supply-driven neology hypothesis: we assume that areas of the space that contain fewer semantically related words are likely to give birth to new ones so as to fill in the 'semantic gaps'. Word embeddings give us a natural way of formalizing this: since semantically related words have been shown to populate the same regions in embeddings spaces, we can approximate semantic sparsity (or density) of a word's neighborhood as the number of word vectors within a certain distance of its embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "Hypothesis 2 Neologisms are more likely to emerge in semantic neighborhoods of growing popularity. Here we formalize our demand-driven view of neology, which assumes that growing frequency of words in a semantic area is a reflection of its growing importance in discourse, and that the latter is in turn correlated with emergence of neologisms in that area. In terms of word embeddings, we again consider nearest word vectors as the word's semantic neighbors and quantify the rate at which their frequencies grow over decades (formally defined in \u00a74.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "Our analysis is based on comparing embedding space neighborhoods of neologism word vectors and neighborhoods of embeddings of words from an alternative set. Automatic selection of neologisms is described in \u00a74.2, and in \u00a74.4 we detail the factors we control for when selecting the alternative set. In \u00a74.1 we describe the datasets used in our experiments. Our data is split into two large corpora, HISTORICAL and MODERN; we additionally require the HISTORICAL corpus to be split into smaller time periods so that we can estimate word frequency change rate. Embedding models are trained on each of the two corpora, as described in \u00a74.3. We compare the neighborhoods in the HIS-TORICAL embedding space, but due to the nature of our neologism selection process, many neologisms might not exist in the HISTORICAL vocabulary. To locate their neighborhoods, we adapt an approach from prior work in diachronic analysis with word embeddings: we learn an orthogonal projection between HISTORICAL and MOD-ERN embeddings to align the two spaces in order to make them comparable (see Hamilton et al., 2016) , and use projected vectors to represent neologisms in the HISTORICAL space. Finally, \u00a74.5 describes the details of hypothesis testing: statistics we choose to quantify our two hypotheses and how their distributions are compared.",
"cite_spans": [
{
"start": 1072,
"end": 1094,
"text": "Hamilton et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "We use the Corpus of Historical American English (COHA, Davies, 2002) and the Corpus of Contemporary American English (COCA, Davies, 2008) , large diachronic corpora balanced by genre to reflect the variety in word usage. COHA data is split into decades; we group COHA documents from 18 decades (1800-1989) to represent the HISTOR-ICAL English collection and use full COCA 1990-2012 corpus as MODERN.",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "Davies, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 125,
"end": 138,
"text": "Davies, 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "The obtained HISTORICAL split contains 405M tokens of 2M types, and MODERN contains 547M tokens of 3M types. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We rely on a usage-based approach to extract the set of neologisms for our analysis, choosing the words based on their patterns of occurrence in our datasets. It can be seen as an approximation to selecting words based on their earliest recorded use dates, as these dates are also determined based on the words' usage in historical corpora. This analogy is supported by the qualitative analysis of the obtained set of neologisms, as discussed in \u00a76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neologism selection",
"sec_num": "4.2"
},
{
"text": "We limit our analysis to nouns, an open-class lexical category. We identify nouns in our corpora using a part-of-speech dictionary, collected from a POS-tagged corpus of English Wikipedia data (Wikicorpus, Reese et al., 2010) , and select words that are most frequently tagged as 'NN'.",
"cite_spans": [
{
"start": 206,
"end": 225,
"text": "Reese et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neologism selection",
"sec_num": "4.2"
},
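The noun-selection step above (keeping words whose most frequent tag is 'NN') can be sketched as follows. This is a minimal illustration, assuming only a list of (word, tag) pairs from some tagged corpus; the function name and toy tokens are ours, not the paper's code or the actual Wikicorpus data.

```python
from collections import Counter, defaultdict

def build_noun_set(tagged_tokens):
    """Keep words whose most frequent POS tag is 'NN', per the heuristic above."""
    tag_counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        tag_counts[word][tag] += 1
    # A word qualifies if 'NN' is its single most common tag.
    return {w for w, c in tag_counts.items() if c.most_common(1)[0][0] == "NN"}

# Toy tagged tokens: "run" is mostly a verb, "sauce" is always a noun.
tagged = [("run", "VB"), ("run", "NN"), ("run", "VB"),
          ("sauce", "NN"), ("sauce", "NN"), ("green", "JJ")]
print(build_noun_set(tagged))
```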
{
"text": "We additionally filter candidate neologisms to exclude words that occur more frequently in capitalized than lowercased form; this heuristic helps us remove proper nouns missed by the POS tagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neologism selection",
"sec_num": "4.2"
},
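The capitalization heuristic can be sketched like this (a toy illustration; the function name and the count dictionary are illustrative assumptions, not the paper's implementation):

```python
def looks_like_proper_noun(word, counts):
    """Flag candidates that occur more often capitalized than lowercased,
    per the proper-noun filtering heuristic described above.
    `counts` maps surface forms to raw corpus counts."""
    return counts.get(word.capitalize(), 0) > counts.get(word, 0)

# Toy counts: "Paris" is almost always capitalized, "pesto" is not.
counts = {"paris": 3, "Paris": 97, "pesto": 40, "Pesto": 2}
print([w for w in ("paris", "pesto") if not looks_like_proper_noun(w, counts)])
```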
{
"text": "We select a set of neologisms by picking words that are substantially more frequent in the MOD-ERN corpus than in the HISTORICAL one. It is important to note that while we use the term \"neologism,\" implying a word at the early stages of emergence, with this method we select words that have entered mainstream vocabulary in MODERN time but might have been coined prior to that. We consider a word w to be a neologism if its ratio f m (w)/f h (w) is greater than a certain threshold; here f m (\u2022) and f h (\u2022) denote word frequencies (normalized counts) in MODERN and HISTORI-CAL data respectively. Empirically we set the frequency ratio threshold equal to 20.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neologism selection",
"sec_num": "4.2"
},
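The selection criterion above (frequency ratio greater than 20, then the top 1000 words by MODERN frequency) can be sketched as follows. The threshold and cutoff follow the paper; the toy frequency dictionaries and the treatment of words absent from HISTORICAL are our illustrative assumptions.

```python
def select_neologisms(f_modern, f_historical, ratio_threshold=20, top_k=1000):
    """Pick words whose MODERN/HISTORICAL frequency ratio exceeds the
    threshold, then keep the top_k most frequent in MODERN."""
    candidates = []
    for w, fm in f_modern.items():
        fh = f_historical.get(w, 0.0)
        # Words absent from HISTORICAL are treated as having an
        # arbitrarily large ratio (an assumption to avoid division by zero).
        if fh == 0.0 or fm / fh > ratio_threshold:
            candidates.append(w)
    candidates.sort(key=lambda w: f_modern[w], reverse=True)
    return candidates[:top_k]

# Toy normalized frequencies (e.g. per million tokens).
f_modern = {"internet": 50.0, "pesto": 2.0, "house": 400.0, "zzyx": 0.01}
f_historical = {"internet": 0.5, "pesto": 0.05, "house": 390.0}
print(select_neologisms(f_modern, f_historical, top_k=3))
```

Ranking by MODERN frequency before truncating mirrors the motivation given in the next paragraph: it keeps words that actually became mainstream and pushes rare artifacts like misspellings below the cutoff.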
{
"text": "We rank words satisfying these criteria by their frequency in the MODERN corpus and select the first 1000 words to be our neologism set; this is to ensure that we only analyze words that subsequently become mainstream and not misspellings or other artifacts of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neologism selection",
"sec_num": "4.2"
},
{
"text": "Our hypothesis testing process involves inspecting semantic neighborhoods of neologisms in the HIS-TORICAL embedding space. However, many neologisms are very infrequent or nonexistent in the HISTORICAL data, so we approximate their vectors in the HISTORICAL space by projecting their MODERN embeddings into the same coordinate axes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.3"
},
{
"text": "We learn Word2Vec Skip-Gram embeddings 3 (Mikolov et al., 2013) of the two corpora and use orthogonal Procrustes to learn the aligning transformation:",
"cite_spans": [
{
"start": 41,
"end": 63,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.3"
},
{
"text": "R = arg min \u2326 k\u2326W (m) W (h) k,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.3"
},
{
"text": "where W (h) , W (m) 2 R |V |\u21e5d are the word embedding matrices learned on the HISTORICAL and MODERN corpora respectively, restricted to the intersection of the vocabularies of the two corpora (i.e. every word embedding present in both spaces is used as an anchor). To project MODERN word embeddings into the HISTORICAL space, we multiply them by the obtained rotation matrix R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.3"
},
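The alignment step can be sketched with the closed-form SVD solution to the orthogonal Procrustes problem. This is a minimal sketch, not the authors' code: here embeddings are rows, we solve for a right-multiplied rotation, and the anchor matrices are synthetic.

```python
import numpy as np

def orthogonal_procrustes_align(W_m, W_h):
    """Orthogonal R minimizing ||W_m @ R - W_h||_F, via the SVD of
    W_m^T W_h (the standard closed-form Procrustes solution)."""
    U, _, Vt = np.linalg.svd(W_m.T @ W_h)
    return U @ Vt

rng = np.random.default_rng(0)
W_h = rng.standard_normal((100, 8))            # "historical" anchor embeddings
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
W_m = W_h @ Q.T                                # "modern" = rotated historical
R = orthogonal_procrustes_align(W_m, W_h)
print(np.allclose(W_m @ R, W_h, atol=1e-8))    # the rotation is recovered
```

Projecting a MODERN vector into the HISTORICAL space is then just `v @ R`, matching the multiplication by the rotation matrix described above.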
{
"text": "To test our hypotheses, we collect an alternative set of words and analyze how certain statistical properties of their neighbors differ from those of neighbors of neologisms. At this stage it is important to control for non-semantic confounding factors that might affect the word distribution in the semantic space. One such factor is word frequency: it has been shown that embeddings of words of similar frequency tend to be closer in the embedding space (Schnabel et al., 2015; Faruqui et al., 2016) , which results in very dense clusters, or hubs, of words with high cosine similarity (Radovanovi\u0107 et al., 2010; Dinu et al., 2014) . We choose to also restrict our control set to only include words that did not substantially grow or decline in frequency over the HISTORICAL period in order to prevent selecting counterparts that only share similar frequency in the MODERN subcorpus (e.g., due to recent topical relevance), but exhibit significant fluctuation prior to that period. In particular, we refrain from selecting words that emerged in language right before our HISTORI-CAL-MODERN split.",
"cite_spans": [
{
"start": 456,
"end": 479,
"text": "(Schnabel et al., 2015;",
"ref_id": "BIBREF30"
},
{
"start": 480,
"end": 501,
"text": "Faruqui et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 588,
"end": 614,
"text": "(Radovanovi\u0107 et al., 2010;",
"ref_id": "BIBREF27"
},
{
"start": 615,
"end": 633,
"text": "Dinu et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "We create the alternative set by pairing each neologism with a non-neologism counterpart that exhibits a stable frequency pattern, while controlling for word frequency and word length in characters. Length is chosen as an easily accessible correlate to other factors for which one should control, such as morphological complexity, concreteness, and nativeness. We perform the pairing only to ensure that the distribution of those properties across the two sets is comparable, but once the selection process is complete we treat control words as a set rather than considering them in pairs with neologisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "Following Stewart and Eisenstein (2018) , we formalize frequency growth rate as the Spearman correlation coefficient between timesteps {1, . . . , T } and frequency series f (1:T ) (w) of word w. In our setup, timesteps {1, . . . , 18} enumerate decades from 1810s to 1980s, and f t (\u2022) denote word frequencies in the corresponding t-th decade of the HISTORICAL data.",
"cite_spans": [
{
"start": 10,
"end": 39,
"text": "Stewart and Eisenstein (2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
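The growth-rate statistic can be computed directly with SciPy's Spearman correlation; a minimal sketch with toy per-decade frequency series (the series values are illustrative, not COHA counts):

```python
from scipy.stats import spearmanr

def growth_rate(freq_series):
    """Frequency growth rate of a word: Spearman correlation between
    decade indices 1..T and the word's per-decade frequencies."""
    T = len(freq_series)
    rho, _ = spearmanr(range(1, T + 1), freq_series)
    return float(rho)

# Toy frequencies over the 18 HISTORICAL decades (1810s-1980s).
rising = [i * 0.1 for i in range(18)]           # monotonically growing word
stable = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95] * 3   # fluctuating but flat
print(growth_rate(rising))    # 1.0 for a perfectly monotone series
print(abs(growth_rate(stable)) <= 0.1)
```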
{
"text": "Formally, for each neologism w n we select a counterpart w c satisfying the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "\u2022 Frequencies of the two words in the corresponding corpora are comparable:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "f m (w n )/f h (w c ) 2 (1 , 1 + ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "where was set to 0.25;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "\u2022 The length of the two words is identical up to 2 characters;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "\u2022 The Spearman correlation coefficient r s between decades {1, . . . , 18} and the control word frequency series f (1:18) (w c ) is small:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "|r s {1 : 18}, f (1:18) (w c ) | \uf8ff 0.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
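The three pairing constraints above can be checked together as in the following sketch. The constants (epsilon = 0.25, length difference of at most 2, |r_s| <= 0.1) come from the paper; the function name, data structures, and toy words are our illustrative assumptions.

```python
from scipy.stats import spearmanr

def is_valid_counterpart(neo, ctrl, f_m, f_h, hist_series, eps=0.25):
    """Check the three constraints for pairing control word ctrl with
    neologism neo: comparable frequency, similar length, stable history."""
    # 1. Comparable frequency: f_m(neo)/f_h(ctrl) in (1 - eps, 1 + eps).
    if not (1 - eps < f_m[neo] / f_h[ctrl] < 1 + eps):
        return False
    # 2. Lengths differ by at most 2 characters.
    if abs(len(neo) - len(ctrl)) > 2:
        return False
    # 3. Stable frequency: |Spearman(decades, series)| <= 0.1.
    series = hist_series[ctrl]
    rho, _ = spearmanr(range(1, len(series) + 1), series)
    return bool(abs(rho) <= 0.1)

# Toy data: a flat, fluctuating 18-decade frequency series.
flat = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95] * 3
f_m = {"podcast": 3.0}
f_h = {"lantern": 2.9, "locomotive": 3.0}
hist_series = {"lantern": flat, "locomotive": flat}
print(is_valid_counterpart("podcast", "lantern", f_m, f_h, hist_series))
print(is_valid_counterpart("podcast", "locomotive", f_m, f_h, hist_series))
```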
{
"text": "These words, which we will refer to as stable, make up our default and most restricted control set. We will also compare neologisms to a relaxed control set, omitting the stability constraint on the frequency change rate but still controlling for length and overall frequency, to see how neologisms differ from non-neologisms in a broader perspective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control set selection",
"sec_num": "4.4"
},
{
"text": "We evaluate our hypotheses by inspecting neighborhoods of neologisms and their stable control counterparts in the HISTORICAL embedding space, viewing them as proxy for neighborhoods in the underlying semantic space. Since many neologisms are very infrequent or nonexistent in the HISTORICAL data, we approximate their vectors in the HISTORICAL space with their MODERN embeddings projected using the transformation described in \u00a74.3. The neighborhood of a word w is defined as the set of HISTORICAL words for which cosine similarity between their HISTORICAL embeddings and v w exceeds the given threshold \u2327 ; v w denotes a projected MODERN embedding if w is a neologism or a HISTORICAL embedding if it is a control word. 4 (a) Semantic neighborhood of the word renewables. Figure 1a shows an example of a neighborhood exhibiting frequency growth: words like synfuel or privatization have been used more towards the end of the HISTORICAL period. The neighborhood also includes natural-gas that can be seen as representing a concept to be replaced by renewables. The word pesto (Figure 1b ) is projected into a neighborhood of other food-related words, most of which are also loanwords, several from the same language; it also has its hypernym sauce as one of its neighbors.",
"cite_spans": [
{
"start": 720,
"end": 721,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 772,
"end": 781,
"text": "Figure 1a",
"ref_id": "FIGREF2"
},
{
"start": 1075,
"end": 1085,
"text": "(Figure 1b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "The two factors we need to formalize are semantic sparsity of the neighborhoods and increase of popularity of the topic that the neighborhood represents. We use sparsity in the embedding space as a proxy for semantic sparsity and approximate growth of interest in a topic with frequency growth of words belonging to it (i.e. embedded into the corresponding neighborhood). For the neighborhood of each word w, we compute the following statistics, corresponding to our two hypotheses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "1. Density of a neighborhood d(w, \u2327 ): number of words that fall into this neighborhood d(w, \u2327 ) = |{u :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "cosine(v w , v u ) \u2327 }| 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "Average frequency growth rate of a neighborhood r(w, \u2327 ): as defined in the previous subsection, we compute the Spearman correlation coefficient between timesteps and frequency series for each word in the neighborhood and take their mean:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "r(w, \u2327 ) = 1 d(w, \u2327 ) \u21e5 \u21e5 X u:cosine(vw,vu) \u2327 r s {1 : 18}, f (1:18) (u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
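The two neighborhood statistics can be sketched as follows: d(w, tau) counts vectors with cosine similarity at least tau, and r(w, tau) averages the neighbors' growth rates. The vectors and growth values below are toy data, and the word labels are purely illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighborhood_stats(v_w, hist_vecs, growth, tau):
    """Return (d, r): neighborhood density and mean frequency growth
    rate of the neighbors, per the definitions above."""
    nbrs = [u for u, vec in hist_vecs.items() if cosine(v_w, vec) >= tau]
    d = len(nbrs)
    r = sum(growth[u] for u in nbrs) / d if d else 0.0
    return d, r

v_w = np.array([1.0, 0.0])  # projected vector of the query word
hist_vecs = {"synfuel": np.array([1.0, 0.1]),
             "privatization": np.array([0.9, 0.2]),
             "sauce": np.array([0.0, 1.0])}
growth = {"synfuel": 0.5, "privatization": 0.3, "sauce": -0.2}
print(neighborhood_stats(v_w, hist_vecs, growth, tau=0.8))
```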
{
"text": "In our tests, we compare the values of those metrics for neighborhoods of neologisms and semantics (Lenci, 2018) . We have also observed the same results when repeating the experiments with the Euclidean distance metric. neighborhoods of control words and estimate the significance of each of the two factors for a range of neighborhood sizes defined by the threshold \u2327 . We test whether means of the distributions of those statistics for the neologism and the control set differ and whether each of the two is significant for classifying words into neologisms and controls.",
"cite_spans": [
{
"start": 99,
"end": 112,
"text": "(Lenci, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "As mentioned in \u00a74.2, our vocabulary is restricted to nouns, and we only consider vocabulary noun neighbors when evaluating the statistics. 5 Since we project all neologism word vectors from MODERN to HISTORICAL embedding space, for neologisms occurring in the HISTORICAL corpus we might find a HISTORICAL vector of the neologism itself among the neighbors of its projection; we exclude such neighbors from our analysis. We cap the number of nearest neighbors to consider at 5,000, to avoid estimating statistics on overly large sets of possibly less relevant neighbors.",
"cite_spans": [
{
"start": 140,
"end": 141,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.5"
},
{
"text": "Following the experimental setup described in \u00a74.5, we estimate the contribution of each of the hypothesized factors employing strictly constrained and relaxed control sets. We start by analyzing how the distributions of those statistics differ for neologisms and stable controls, both by word vectors within a certain cosine distance of a word and average growth rate of frequency (represented by Spearman correlation coefficient) of those HISTORICAL words, averaged across neologism (darker) and stable control word (lighter) sets. Projected neologism vectors appear in lower-density neighborhoods compared to control words, and neighbors of neologisms exhibit a stronger growth trend than those of the control words, especially in smaller neighborhoods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "comparing their sample means and by more rigorous statistical testing. We also evaluate the significance of the factors using generalized linear models for both stable and relaxed control sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "First, we test our hypotheses on 720 neologismstable control word pairs (not all words are paired in the stable control setting due to its restrictiveness). Figure 2 demonstrates the values of density and frequency growth rate for a range of neighborhood sizes, averaged over neologism and control sets. Both results conform with our hypotheses: Figure 2a shows that on average the projected neologism has fewer neighbors than its stable counterpart, especially for larger neighborhoods, and Figure 2b shows that, on average, frequencies of neighbors of a projected neologism grow at a faster rate than those of a counterpart. Interestingly, we find that neighbors of stable controls still tend to exhibit small positive growth rate. We attribute it to the general pattern that we observed: about 70% of words in our vocabulary have positive frequency growth rate. We believe this might be explained by the imbalance in the amount of data between decades (e.g. 1980s sub-corpus has 20 times more tokens than 1810s): some words might not occur until later in the corpus because of the relative sparsity of data in the early decades.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 165,
"text": "Figure 2",
"ref_id": "FIGREF4"
},
{
"start": 346,
"end": 355,
"text": "Figure 2a",
"ref_id": "FIGREF4"
},
{
"start": 492,
"end": 501,
"text": "Figure 2b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Comparison to stable control set",
"sec_num": "5.1"
},
{
"text": "As we can see from Figure 2a , neighborhoods of larger sizes (corresponding to lower values of the threshold) may contain thousands of words, so the statistics obtained from those neighborhoods might be less relevant; we might only want to consider the immediate neighborhoods, as those words are more likely to be semantically related to the central word. It is notable that the difference in the growth trends of the neighbors is substantially more prominent for smaller neighborhoods (Figure 2b) : average correlation coefficient of immediate neighbors of stable words also falls into stable range as we defined it, while immediate neighbors of neologisms exhibit rapid growth.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 28,
"text": "Figure 2a",
"ref_id": "FIGREF4"
},
{
"start": 487,
"end": 498,
"text": "(Figure 2b)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Comparison to stable control set",
"sec_num": "5.1"
},
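The two neighborhood statistics discussed above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: `hist_vecs` is assumed to hold the HISTORICAL embedding matrix, `decade_freqs` the per-decade frequency counts of each HISTORICAL word, and `tau` the cosine similarity threshold.

```python
import numpy as np
from scipy.stats import spearmanr

def neighborhood_stats(w_vec, hist_vecs, decade_freqs, tau):
    """Density d(w, tau): number of HISTORICAL vectors whose cosine similarity
    to the projected vector w is at least tau. Growth r(w, tau): Spearman
    correlation of each neighbor's per-decade frequency with time, averaged
    over the neighborhood."""
    sims = hist_vecs @ w_vec / (
        np.linalg.norm(hist_vecs, axis=1) * np.linalg.norm(w_vec))
    neighbors = np.where(sims >= tau)[0]
    if len(neighbors) == 0:
        return 0, 0.0
    decades = np.arange(decade_freqs.shape[1])
    growth = np.mean([spearmanr(decade_freqs[i], decades)[0]
                      for i in neighbors])
    return len(neighbors), growth
```

Sweeping `tau` over a range and averaging these two per-word statistics over the neologism and control sets yields the kind of curves Figure 2 reports.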
{
"text": "To estimate the significance and relative contribution of the two factors, we fit a generalized linear model (GLM) with logistic link function to the corresponding features of neologism and control word neighborhoods: 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
{
"text": "y(w) \u21e0 (1 + exp( (\u2327 ) 0 (\u2327 ) d \u2022 d(w, \u2327 ) (\u2327 ) r \u2022 r(w, \u2327 ))) 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
{
"text": "where y is a Bernoulli variable indicating whether the word w belongs to the neologism set (1) or the control set (0), and \u2327 is the cosine similarity threshold defining the neighborhood size. Table 1 shows how the coefficients and p-values for the two statistics change with the neighborhood size. We found that when comparing with denote the coefficients for density and average frequency growth respectively for neighborhoods defined by \u2327 . Comparing the results for the stable and relaxed control sets, we find that for the stable controls density is only significant in larger neighborhoods, but without the stability constraint both factors are significant for all neighborhood sizes. the stable control set, average frequency growth rate of the neighborhood was significant for all sizes, but neighborhood density was significant at level p < 0.01 only for the largest ones. 7 We attribute this to the effect discussed in the previous section: difference in average frequency growth rate between neighbors of neologisms and stable words shrinks as we include more remote neighbors (Figure 2b ), so for large neighborhoods frequency growth rate by itself is no longer predictive enough.",
"cite_spans": [
{
"start": 881,
"end": 882,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1087,
"end": 1097,
"text": "(Figure 2b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
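The paper fits this GLM with the MATLAB Statistics and Machine Learning Toolbox; as a self-contained stand-in, the same significance test can be sketched with a Newton-Raphson logistic fit and Wald p-values. The features below are simulated for illustration (neologisms placed in sparser, faster-growing neighborhoods), not the real density and growth values.

```python
import numpy as np
from scipy import stats

def fit_logistic_glm(X, y, n_iter=25):
    """Fit y ~ 1 / (1 + exp(-(b0 + X @ b))) by Newton-Raphson.
    Returns coefficients [b0, b_d, b_r, ...] and two-sided Wald p-values."""
    Xd = np.column_stack([np.ones(len(X)), X])   # intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = p * (1.0 - p)                        # IRLS weights
        H = Xd.T @ (Xd * W[:, None])             # observed information matrix
        beta = beta + np.linalg.solve(H, Xd.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))      # standard errors from H^-1
    pvals = 2.0 * stats.norm.sf(np.abs(beta / se))
    return beta, pvals

# Simulated features: neologisms (y=1) have lower density, higher growth.
rng = np.random.default_rng(0)
n = 720
density = np.concatenate([rng.normal(0, 1, n), rng.normal(1, 1, n)])
growth = np.concatenate([rng.normal(0.5, 1, n), rng.normal(0, 1, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])
beta, pvals = fit_logistic_glm(np.column_stack([density, growth]), y)
```

On such data the fitted density weight comes out negative and the growth weight positive, matching the sign pattern reported for the relaxed setting.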
{
"text": "We also evaluate the significance of features for the relaxed control set without the stability constraint on 1000 neologism-control pairs. We have repeated the experiment with 5 different randomly sampled relaxed control sets (results for one showed in Table 1 ). For medium-sized neighborhoods (0.4 \uf8ff \u2327 \uf8ff 0.5) density variable is always significant at p < 0.01, but densities of largest and smallest neighborhoods were rejected in several runs. With more variance in the control set, differences in neighborhood frequency growth rate between neologisms and controls are less prominent than in the stable setting, so density plays a more important role in prediction. 8 Growth feature weights",
"cite_spans": [
{
"start": 669,
"end": 670,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
{
"text": "(\u2327 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
{
"text": "r are always positive and density feature weights (\u2327 ) d are negative in the relaxed setting (where density is significant). This matches our intuition that neighborhood frequency growth and sparsity are predictive of neology.",
"cite_spans": [
{
"start": 50,
"end": 54,
"text": "(\u2327 )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
{
"text": "Comparing sample means of density and growth rates between neologisms and each of the 5 randomly selected relaxed control sets (as we did 7 Applying Wilcoxon signed-rank test to the series of neighborhood density and frequency growth values for neologism and stable control sets showed the same results.",
"cite_spans": [
{
"start": 138,
"end": 139,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
{
"text": "8 Detailed results of the regression analysis and collinearity tests can be found in the repository. No evidence of collinearity was found in any of the experiments. for stable controls in Figure 2 ) demonstrated that neologisms still appear in sparser neighborhoods than the controlled counterparts. The difference in frequency growth rate between the neologism and control word neighborhoods is also observed for all control sets (although it varies noticeably between sets), but it no longer exhibits an inverse correlation with neighborhood size.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Statistical significance",
"sec_num": "5.2"
},
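The Wilcoxon signed-rank comparison mentioned in the footnote takes only a few lines. This is a hedged sketch on simulated paired values (one statistic per neologism-control pair), not the authors' data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated paired growth statistics for 720 neologism-control pairs:
rng = np.random.default_rng(1)
neo_growth = rng.normal(0.30, 0.1, size=720)    # neighbors of neologisms
ctrl_growth = rng.normal(0.05, 0.1, size=720)   # neighbors of stable controls

# Paired, two-sided test on the per-pair differences:
stat, p = wilcoxon(neo_growth, ctrl_growth)
```

A small p-value here would reject the hypothesis that the paired values come from the same distribution, mirroring the parametric GLM result.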
{
"text": "We have demonstrated that our two hypotheses hold for the set of words we automatically selected to represent neologisms. To establish validity of our results, we qualitatively examine the obtained word list to see if the words are in fact recent additions to the language. We randomly sample 100 words out of the 1000 selected neologisms and look up their earliest recorded use in the Oxford English Dictionary Online (OED, 2018). Of those 100 words, eight are not defined in the dictionary: they only appear in quotations in other entries (bycatch (quotation from 1995), twentysomething (1997), cross-sex (1958), etc.) or do not occur at all (all-mountain, interobserver, off-task). Of the remaining 92 words, 78 have been first recorded after the year 1810 (i.e. since the beginning of the HISTORICAL timeframe), 44 have been first recorded in the twentieth century, and 21 words since 1950. However, some of the words dating back to before 19th century have only been recorded in their earlier, possibly obsolete sense: for example, while there is evidence of the word software being used in 18th century, this usage corresponds to its obsolete meaning of 'textiles, fabrics', while the first recorded use in its currently dominant sense of 'programs essential to the operation of a computer system' is dated 1958. To account for such semantic neologisms, we can count the first recorded use of the newest sense of the word; that gives us 82, 58 and 31 words appearing since 1810, 1900 and 1950 respectively. 9 This leads us to assume that most words selected for our analysis have indeed been neologisms sometime over the course of the HISTORICAL time.",
"cite_spans": [
{
"start": 1479,
"end": 1514,
"text": "1810, 1900 and 1950 respectively. 9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We would also like to note that the results of this examination may be skewed due to factors for which lexicography may not account: for example, many words identified as neologisms are compound nouns like countertop or soundtrack that have been written as two separate words or joined with a hyphen in earlier use. There is also considerable spelling variation in loanwords, e.g. cuscusu, cooscoosoos, kesksoo were used interchangeably before the form couscous was accepted as the standard spelling. Specific word forms might also have different life cycles: while the word music existed in Middle English, the plural form musics in a particular sense of 'genres, styles of music' is much more recent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Qualitative examination of the neologism set reveals that new words tend to appear in the same topics; for example, many words in our set were related to food, technology, or medicine. This indirectly supports our second hypothesis: rapid change in these spheres makes it likely for related terms to substantially grow in frequency over a short period of time. One example of such a neighborhood is shown in Figure 1a : the neologism renewables appeared in a cluster of words related to energy sources -a topic that has been more discussed recently. There is also some correlation between the topic and how new words are formed in it: most food neologisms are so-called cultural borrowings (Weinreich, 2010) , when the name gets loaned from another culture together with the concept itself (e.g. pesto, salsa, masala), while many technology neologisms are compounds of existing English morphemes (e.g. cyber+space, cell+phone, data+base).",
"cite_spans": [
{
"start": 690,
"end": 707,
"text": "(Weinreich, 2010)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 408,
"end": 417,
"text": "Figure 1a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We also consider nearest neighbors (HISTORICAL words with highest cosine similarity) of the neologisms to ensure that they are projected into the appropriate parts of the embedding space. Examples of nearest neighbors are shown in Table 2 . We saw different patterns of how the concept represented by the neologism We can see that words get projected into semantically relevant neighborhoods, and nearest neighbors can even be useful for observing the evolution of a concept (e.g. pager:beeper).",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "relates to concepts represented by its neighbors. For example, some terms for new concepts appear next to related concepts they succeeded and possibly made obsolete: e.g. email:letter, e-book:paperback, database:card-index. Other neologisms emerge in clusters of related concepts they still equally coexist with: hip-hop:jazz, hoodie:turtleneck; most cultural borrowings fall under this type (see the neighborhood of pesto in Figure 1b ). Both those patterns can be viewed as examples of a more general trend: one concept takes place of another related one, whether in terms of fully replacing it or just taking its place as the dominant form.",
"cite_spans": [],
"ref_spans": [
{
"start": 426,
"end": 435,
"text": "Figure 1b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Other interesting effects we observed include lexical replacement (a new word form replacing an old one without a change in meaning, e.g. vibe:ambience), tendency to abbreviate terms as they become mainstream (biotech:biotechnology, chemo:chemotherapy), and the previously mentioned changes in spellings of compounds (lifestyle:life-style, daycare:day-care).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We have shown that our two hypothesized factors, semantic neighborhood sparsity and its average frequency growth rate, play a role in determining in what semantic neighborhoods new words are likely to emerge. Our analyses provide more support for the latter, conforming with prior linguistic intuition of how language-external factors (which this factor implicitly represents) affect language change. We also found evidence for the former, although it was found less significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our contributions are manifold. From a computational perspective, we extend prior research on meaning change to a new task of analyzing word emergence, proposing another way to obtain linguistic insights from distributional semantics. From the point of view of linguistics, we approach an important question of whether language change is affected by not only languageexternal factors but language-internal factors as well. We show that internal factors-semantic sparsity, specifically-contribute to where in semantic space neologisms emerge. To the best of our knowledge, our work is the first to use word embeddings as a way of quantifying semantic sparsity. We have also been able to operationalize one kind of external factor, technological and cultural change, as something that can been measured in corpora and word embeddings, paving the way to similar work with other kinds of languageexternal factors in language change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "An admittable limitation of our analysis lies in its restricted ability to account for polysemy, which is a pervasive issue in distributional semantics studies (Faruqui et al., 2016) . As such, semantic neologisms (existing words taking on a novel sense) were not a subject of this study, but they introduce a potential future direction. Additional properties of word's neighbors can also be correlated with word emergence, both languageinternal (word abstractness or specificity) and external; these can also be promising directions for future work. Finally, our future plans include exploration of how features of semantic neighborhoods are correlated with word obsolescence (gradual decline in usage), using similar semantic observations.",
"cite_spans": [
{
"start": 160,
"end": 182,
"text": "(Faruqui et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The code and word lists are available at https:// github.com/ryskina/neology",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Statistics accompanying the corpora state that entire COHA dataset contains 385M words, and COCA contains 440M words; we assume the discrepancy is explained by tokenization differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Hyperparameters: vector dimension 300, window size 5, minimum count 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Cosine similarity is chosen as our distance metric since it is traditionally used for word similarity tasks in distributional",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Here we refer to the vocabulary of words participating in our analysis, not the embedding model vocabulary; embeddings are trained on the entire corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the implementation provided in the MATLAB Statistics and Machine Learning Toolbox.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For all words that have one or more senses marked as a noun, we only consider those senses. Out of the 92 listed words, only three do not have nominal senses, and for two more usage as a noun is marked to be rare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the BergLab members for helpful discussion, and the anonymous reviewers for their valuable feedback. This work was supported in part by NSF grant IIS-1812327.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language Change: Progress Or Decay?",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Aitchison",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Aitchison. 2001. Language Change: Progress Or Decay? Cambridge University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language contact and bilingualism",
"authors": [
{
"first": "Ren\u00e9",
"middle": [],
"last": "Appel",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Muysken",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ren\u00e9 Appel and Pieter Muysken. 2006. Language con- tact and bilingualism. Amsterdam University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dynamic word embeddings",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Bamler",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Mandt",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "380--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In International Conference on Machine Learning, pages 380-389.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Essai de s\u00e9mantique:(science des significations)",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Br\u00e9al",
"suffix": ""
}
],
"year": 1904,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Br\u00e9al. 1904. Essai de s\u00e9mantique:(science des significations). Hachette.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Historical Linguistics: an Introduction",
"authors": [
{
"first": "Lyle",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lyle Campbell. 2013. Historical Linguistics: an Intro- duction. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using social media to find English lexical blends",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 15th EU-RALEX International Congress (EURALEX 2012)",
"volume": "",
"issue": "",
"pages": "846--854",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Cook. 2012. Using social media to find English lexical blends. In Proceedings of the 15th EU- RALEX International Congress (EURALEX 2012), pages 846-854, Oslo, Norway.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Lexical borrowing",
"authors": [
{
"first": "Frank",
"middle": [
"E"
],
"last": "Daulton",
"suffix": ""
}
],
"year": 2012,
"venue": "The Encyclopedia of Applied Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1002/9781405198431.wbeal0687"
]
},
"num": null,
"urls": [],
"raw_text": "Frank E. Daulton. 2012. Lexical borrowing. In The Encyclopedia of Applied Linguistics. American Can- cer Society.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Corpus of Historical American English (COHA): 400 million words",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "1810--2009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Davies. 2002. The Corpus of Historical Amer- ican English (COHA): 400 million words, 1810- 2009. Brigham Young University.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The corpus of contemporary American English. BYE",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Davies. 2008. The corpus of contemporary American English. BYE, Brigham Young Univer- sity.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The road to success: Assessing the fate of linguistic innovations in online communities",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Del Tredici",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1591--1603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Del Tredici and Raquel Fern\u00e1ndez. 2018. The road to success: Assessing the fate of linguistic in- novations in online communities. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1591-1603.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving zero-shot learning by mitigating the hubness problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6568"
]
},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Outta control: Laws of semantic change and inherent biases in word representation models",
"authors": [
{
"first": "Haim",
"middle": [],
"last": "Dubossarsky",
"suffix": ""
},
{
"first": "Daphna",
"middle": [],
"last": "Weinshall",
"suffix": ""
},
{
"first": "Eitan",
"middle": [],
"last": "Grossman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1136--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haim Dubossarsky, Daphna Weinshall, and Eitan Grossman. 2017. Outta control: Laws of semantic change and inherent biases in word representation models. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 1136-1145.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Identifying regional dialects in on-line social media. The Handbook of Dialectology",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "368--383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein. 2017. Identifying regional dialects in on-line social media. The Handbook of Dialectol- ogy, pages 368-383.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mapping the geographical diffusion of new words",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2012,
"venue": "NIPS Workshop on Social Network and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric P Xing. 2012. Mapping the geographical diffusion of new words. In NIPS Workshop on So- cial Network and Social Media Analysis.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Diffusion of lexical change in social media",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2014,
"venue": "PloS one",
"volume": "9",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric P Xing. 2014. Diffusion of lexical change in social media. PloS one, 9(11):e113114.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Problems with evaluation of word embeddings using word similarity tasks",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 30- 35.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "Sciences",
"volume": "115",
"issue": "16",
"pages": "3635--3644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Diachronic word embeddings reveal statistical laws of semantic change",
"authors": [
{
"first": "Jure",
"middle": [],
"last": "William L Hamilton",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1489--1501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statisti- cal laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1489-1501.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Diachronic degradation of language models: Insights from social media",
"authors": [
{
"first": "Kokil",
"middle": [],
"last": "Jaidka",
"suffix": ""
},
{
"first": "Niyati",
"middle": [],
"last": "Chhaya",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "195--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kokil Jaidka, Niyati Chhaya, and Lyle Ungar. 2018. Diachronic degradation of language models: In- sights from social media. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), vol- ume 2, pages 195-200.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards modelling language innovation acceptance in online social networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Kershaw",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Rowe",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Stacey",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Ninth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "553--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Kershaw, Matthew Rowe, and Patrick Stacey. 2016. Towards modelling language innovation ac- ceptance in online social networks. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pages 553-562. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Statistically significant detection of linguistic change",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "625--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant de- tection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625-635. International World Wide Web Con- ferences Steering Committee.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Diachronic word embeddings and semantic shifts: a survey",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Terrence",
"middle": [],
"last": "Szymanski",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1384--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov, Lilja \u00d8vrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embed- dings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 1384-1397.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributional models of word meaning",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2018,
"venue": "Annual review of Linguistics",
"volume": "4",
"issue": "",
"pages": "151--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Lenci. 2018. Distributional models of word meaning. Annual review of Linguistics, 4:151-171.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Derivational morphology",
"authors": [
{
"first": "Rochelle",
"middle": [],
"last": "Lieber",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://oxfordre.com/linguistics/view/10.1093/acrefore/9780199384655.001.0001/acrefore-9780199384655-e-248"
]
},
"num": null,
"urls": [],
"raw_text": "Rochelle Lieber. 2017. Derivational morphology.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "van der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579-2605.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "OED Online",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Proffitt, editor. 2018. OED Online. Oxford University Press. http://www.oed.com/.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hubs in space: Popular nearest neighbors in high-dimensional data",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Radovanovi\u0107",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Nanopoulos",
"suffix": ""
},
{
"first": "Mirjana",
"middle": [],
"last": "Ivanovi\u0107",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "2487--2531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Radovanovi\u0107, Alexandros Nanopoulos, and Mir- jana Ivanovi\u0107. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Ma- chine Learning Research, 11(Sep):2487-2531.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Wikicorpus: A word-sense disambiguated multilingual Wikipedia corpus",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Reese",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Cuadros",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Reese, Gemma Boleda, Montse Cuadros, Llu\u00eds Padr\u00f3, and German Rigau. 2010. Wikicorpus: A word-sense disambiguated multilingual Wikipedia corpus. In Proceedings of the Seventh conference on International Language Resources and Evalua- tion (LREC'10).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Cross-disciplinary perspectives on lexical blending",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Renner",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Maniez",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Arnaud",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Renner, Fran\u00e7ois Maniez, and Pierre Arnaud, editors. 2012. Cross-disciplinary perspectives on lexical blending. De Gruyter Mouton, Berlin.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Evaluation methods for unsupervised word embeddings",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "298--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 298-307.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Making \"fetch\" happen: The influence of social and linguistic context on nonstandard word growth and decline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4360--4370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Stewart and Jacob Eisenstein. 2018. Making \"fetch\" happen: The influence of social and linguis- tic context on nonstandard word growth and decline. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 4360-4370.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Survey of computational approaches to diachronic conceptual change",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Jatowt",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.06278"
]
},
"num": null,
"urls": [],
"raw_text": "Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of computational approaches to diachronic conceptual change. arXiv preprint arXiv:1811.06278.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lexical Borrowing",
"authors": [
{
"first": "John",
"middle": [
"R"
],
"last": "Taylor",
"suffix": ""
},
{
"first": "Anthony",
"middle": [
"P"
],
"last": "Grant",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199641604.001.0001/oxfordhb-9780199641604-e-029"
]
},
"num": null,
"urls": [],
"raw_text": "John R Taylor and Anthony P. Grant. 2014. Lexical Borrowing. Oxford University Press, Oxford.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Languages in contact: Findings and problems",
"authors": [
{
"first": "Uriel",
"middle": [],
"last": "Weinreich",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uriel Weinreich. 2010. Languages in contact: Find- ings and problems. Walter de Gruyter, The Hague.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A computational evaluation of two laws of semantic change",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Kemp",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Xu and Charles Kemp. 2015. A computational evaluation of two laws of semantic change. In CogSci.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "2"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(b) Semantic neighborhood of the word pesto."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Neighborhoods of projected MODERN embeddings of two neologisms (shown in red), renewables and pesto, in the HISTORICAL embedding space, visualized using t-SNE(Maaten and Hinton, 2008)."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Average HISTORICAL word vector density in the neighborhoods of neologisms and stable control set words. Average frequency growth rate of HISTORICAL word vectors in the neighborhoods of neologisms and stable control set words."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Number of HISTORICAL"
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>(\u2327 ) d and</td><td>(\u2327 ) r</td></tr></table>",
"num": null,
"text": "Values of the GLM coefficients and their p-values for different neighborhood cosine similarity thresholds \u2327 ."
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Nearest HISTORICAL neighbors of projected MODERN embeddings for a sample of emerging words."
}
}
}
}