{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:40.705987Z"
},
"title": "Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection",
"authors": [
{
"first": "Yixiao",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Zied",
"middle": [],
"last": "Bouraoui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 d'Artois",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Luis",
"middle": [
"Espinosa"
],
"last": "Anke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "One of the long-standing challenges in lexical semantics consists in learning representations of words which reflect their semantic properties. The remarkable success of word embeddings for this purpose suggests that highquality representations can be obtained by summarizing the sentence contexts of word mentions. In this paper, we propose a method for learning word representations that follows this basic strategy, but differs from standard word embeddings in two important ways. First, we take advantage of contextualized language models (CLMs) rather than bags of word vectors to encode contexts. Second, rather than learning a word vector directly, we use a topic model to partition the contexts in which words appear, and then learn different topic-specific vectors for each word. Finally, we use a taskspecific supervision signal to make a soft selection of the resulting vectors. We show that this simple strategy leads to high-quality word vectors, which are more predictive of semantic properties than word embeddings and existing CLM-based strategies.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "One of the long-standing challenges in lexical semantics consists in learning representations of words which reflect their semantic properties. The remarkable success of word embeddings for this purpose suggests that highquality representations can be obtained by summarizing the sentence contexts of word mentions. In this paper, we propose a method for learning word representations that follows this basic strategy, but differs from standard word embeddings in two important ways. First, we take advantage of contextualized language models (CLMs) rather than bags of word vectors to encode contexts. Second, rather than learning a word vector directly, we use a topic model to partition the contexts in which words appear, and then learn different topic-specific vectors for each word. Finally, we use a taskspecific supervision signal to make a soft selection of the resulting vectors. We show that this simple strategy leads to high-quality word vectors, which are more predictive of semantic properties than word embeddings and existing CLM-based strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last few years, contextualized language models (CLMs) such as BERT (Devlin et al., 2019) have largely replaced the use of static (i.e. noncontextualized) word vectors in many Natural Language Processing (NLP) tasks. However, static word vectors remain important in applications where word meaning has to be modelled in the absence of (sentence) context. For instance, static word vectors are needed for zero-shot image classification (Socher et al., 2013) and zero-shot entity typing (Ma et al., 2016) , for ontology alignment (Kolyvakis et al., 2018) and completion (Li et al., 2019) , taxonomy learning (Bordea et al., 2015 (Bordea et al., , 2016 , or for representing query terms in information retrieval systems (Nikolaev and Kotov, 2020) . Moreover, Liu et al. (2020) recently found that static word vectors can complement CLMs, by serving as anchors for contextualized vectors, while Alghanmi et al. (2020) found that incorporating static word vectors could improve the performance of BERT for social media classification.",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 441,
"end": 462,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 491,
"end": 508,
"text": "(Ma et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 534,
"end": 558,
"text": "(Kolyvakis et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 574,
"end": 591,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 612,
"end": 632,
"text": "(Bordea et al., 2015",
"ref_id": "BIBREF5"
},
{
"start": 633,
"end": 655,
"text": "(Bordea et al., , 2016",
"ref_id": "BIBREF6"
},
{
"start": 723,
"end": 749,
"text": "(Nikolaev and Kotov, 2020)",
"ref_id": "BIBREF22"
},
{
"start": 762,
"end": 779,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 897,
"end": 919,
"text": "Alghanmi et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the impressive performance of CLMs across many NLP tasks, a natural question is whether such models can be used to learn highquality static word vectors, and whether the resulting vectors have any advantages compared to those from standard word embedding models (Mikolov et al., 2013; Pennington et al., 2014) . A number of recent works have begun to explore this question (Ethayarajh, 2019; Bommasani et al., 2020; Vulic et al., 2020) . Broadly speaking, the idea is to construct a static word vector for a word w by randomly selecting sentences in which this word occurs, and then averaging the contextualized representations of w across these sentences.",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 291,
"end": 315,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 379,
"end": 397,
"text": "(Ethayarajh, 2019;",
"ref_id": "BIBREF10"
},
{
"start": 398,
"end": 421,
"text": "Bommasani et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 422,
"end": 441,
"text": "Vulic et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
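A minimal sketch of this general recipe (not taken from the paper's released code), assuming BERT-base via the HuggingFace transformers library; `word_vector_from_sentence`, `static_vector` and the `sentences` argument are illustrative names.

```python
# Sketch: derive a static vector for a word by averaging its contextualized
# last-layer representations over a sample of sentences mentioning it.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_vector_from_sentence(word, sentence):
    """Last-layer vector of `word` in `sentence` (word-pieces averaged)."""
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, 768)
    target = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target) + 1):               # locate the word's pieces
        if ids[i:i + len(target)] == target:
            return hidden[i:i + len(target)].mean(dim=0)
    return None                                               # word not found after tokenization

def static_vector(word, sentences):
    """Average the sentence-level vectors over a sample of mentions."""
    vecs = [v for v in (word_vector_from_sentence(word, s) for s in sentences) if v is not None]
    return torch.stack(vecs).mean(dim=0) if vecs else None
```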
{
"text": "Since it is not usually computationally feasible to run the CLM on all sentences mentioning w, a sample of such sentences has to be selected. This begs the question: how should these sentences be chosen? In the aforementioned works, sentences are selected at random, but this may not be optimal. If we want to use the resulting word vectors in downstream tasks such as zero-shot learning or ontology completion, we need vectors that capture the salient semantic properties of words. Intuitively, we should thus favor sentences that best reflect these properties. For instance, many of the mentions of the word banana on Wikipedia are about the cultivation and export of bananas, and about the specifics of particular banana cultivars. By learning a static word vector from such sentences, we may end up with a vector that does not reflect our commonsense understanding of bananas, e.g. the fact that they are curved, yellow and sweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main aim of this paper is to analyze to what extent topic models such as Latent Dirichlet Allocation (Blei et al., 2003) can be used to address this issue. Continuing the previous example, we may find that the word banana occurs in Wikipedia articles on the following topics: economics, biology, food or popular culture. While most mentions might be in articles on economics and biology, it is the latter two topics that are most relevant for modelling the commonsense properties of bananas. Note that the optimal selection of topics is taskdependent, e.g. in an NLP system for analyzing financial news, the economics topic would clearly be more relevant. For this reason, we propose to learn a word vector for each topic separately. Since the optimal choice of topics is task-dependent, we then rely on a task-specific supervision signal to make a soft selection of these topic-specific vectors.",
"cite_spans": [
{
"start": 105,
"end": 124,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another important question is how CLMs should be used to obtain contextualized word vectors. Given a sentence mentioning w, a model such as BERT-base constructs 12 vector representations of w, i.e. one for each layer of the transformer stack. Previous work has suggested to use the average of particular subsets of these vectors. In particular, Vulic et al. (2020) found that lexical semantics is most prevalent in the representations from the early layers, and that averaging vectors from the first few layers seems to give good results on many benchmarks. On the other hand, these early layers are least affected by the sentence context (Ethayarajh, 2019) , hence such strategies might not be suitable for learning topic-specific vectors. We therefore also explore a different strategy, which is to mask the target word in the given sentence, i.e. to replace the entire word by a single [MASK] token, and to use the vector representation of this token at the final layer. The resulting vector representations thus specifically encode what the given sentence reveals about the target word, making this a natural strategy for learning topic-specific vectors.",
"cite_spans": [
{
"start": 345,
"end": 364,
"text": "Vulic et al. (2020)",
"ref_id": "BIBREF30"
},
{
"start": 639,
"end": 657,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 889,
"end": 895,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note that there is a clear relationship between this latter strategy and CBOW (Mikolov et al., 2013) : where in CBOW the vector representation of w is obtained by averaging the vector representations of the context words that co-occur with w, we similarly represent words by averaging context representations. The main advantage compared to CBOW thus comes from the higher-quality context encodings that can be obtained using CLMs. The main challenge, as already mentioned, is that we cannot consider all the mentions of w, whereas this is typically feasible for CBOW (and other standard word embedding models). Our contributions can be summarized as follows 1 :",
"cite_spans": [
{
"start": 78,
"end": 100,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We analyze different strategies for deriving word vectors from CLMs, which rely on sampling mentions of the target word from a text collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose the use of topic models to improve how these mentions are sampled. In particular, rather than learning a single vector representation for the target word, we learn one vector for each sufficiently relevant topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to construct the final representation of a word w as a weighted average of different vectors. This allows us to combine multiple vectors without increasing the dimensionality of the final representations. We use this approach for combining different topicspecific vectors and for combining vectors from different transformer layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A few recent works have already proposed strategies for computing static word vectors from CLMs. While Ethayarajh (2019) relied on principal components of individual transformer layers for this purpose, most approaches rely on averaging the contextualised representations of randomly selected mentions of the target word (Bommasani et al., 2020; Vulic et al., 2020) . Several authors have pointed out that the representations obtained from early layers tend to perform better in lexical semantics probing tasks. However, Bommasani et al. (2020) found that the optimal layer depends on the number of sampled mentions, with later layers performing better when a large number of mentions is used. Rather than fixing a single layer, Vulic et al. (2020) advocated averaging representations from several layers. Note that none of the aforementioned methods uses masking when computing contextualized vectors. This means that the final representations may have to be obtained by pooling different word-piece vectors, usually by averaging them.",
"cite_spans": [
{
"start": 321,
"end": 345,
"text": "(Bommasani et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 346,
"end": 365,
"text": "Vulic et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 521,
"end": 544,
"text": "Bommasani et al. (2020)",
"ref_id": "BIBREF4"
},
{
"start": 729,
"end": 748,
"text": "Vulic et al. (2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As an alternative to using topic models, Chronis and Erk (2020) cluster the contextual word vectors, obtained from mentions of the same word. The resulting multi-prototype representation is then used to compute word similarity in an adaptive way. Along similar lines, Amrami and Goldberg (2019) cluster contextual word vectors for word sense induction. Thompson and Mimno (2020) showed that clustering the contextual representations of a given set of words can produce clusters of semantically related words, which were found to be similar in spirit to LDA topics. The idea of learning topic-specific representations of words has been extensively studied in the context of standard word embeddings (Liu et al., 2015; Li et al., 2016; Shi et al., 2017; Zhu et al., 2020) . To the best of our knowledge, learning topic-specific word representations using CLMs has not yet been studied. More broadly, however, some recent methods have combined CLMs with topic models. For instance, Peinelt et al. (2020) use such a combination for predicting semantic similarity. In particular they use the LDA or GSDMM topic distribution of two sentences to supplement their BERT encoding. Finally, Bianchi et al. (2020) suggested using sentence embeddings from SBERT (Reimers and Gurevych, 2019) as input to a neural topic model, with the aim of learning more coherent topics.",
"cite_spans": [
{
"start": 353,
"end": 378,
"text": "Thompson and Mimno (2020)",
"ref_id": "BIBREF29"
},
{
"start": 698,
"end": 716,
"text": "(Liu et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 717,
"end": 733,
"text": "Li et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 734,
"end": 751,
"text": "Shi et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 752,
"end": 769,
"text": "Zhu et al., 2020)",
"ref_id": "BIBREF31"
},
{
"start": 979,
"end": 1000,
"text": "Peinelt et al. (2020)",
"ref_id": null
},
{
"start": 1180,
"end": 1201,
"text": "Bianchi et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 1249,
"end": 1277,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In Section 3.1, we first describe different strategies for deriving static word vectors from CLMs. Section 3.2 subsequently describes how we choose the most relevant topics for each word, and how we sample topic-specific word mentions. Finally, in Section 3.3 we explain how the resulting topicspecific representations are combined to obtain task-specific word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Word Vectors",
"sec_num": "3"
},
{
"text": "We first briefly recall the basics of the BERT contextualised language model. BERT represents a sentence s as a sequence of word-pieces w 1 ..., w n . Frequent words will typically be represented as a single word-piece, but in general, word-pieces may correspond to sub-word tokens. Each of these word-pieces w is represented as an input vector, which is constructed from a static word-piece embedding w 0 (together with vectors that encode at which position in the sentence the word appears, and in which sentence). The resulting sequence of word-piece vectors is then fed to a stack of 12 (for BERT-base) or 24 (for BERT-large) transformer layers. Let us write w s i for the representation of word-piece w in the i th transformer layer. We will refer to the representation in the last layer, i.e. w s 12 for BERT-base and w s 24 for BERT-large, as the output vector. When BERT is trained, some of the word-pieces are replaced by a special [MASK] token. The corresponding output vector then encodes a prediction of the masked word-piece.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining Contextualized Word Vectors",
"sec_num": "3.1"
},
{
"text": "Given a sentence s in which the word w is mentioned, there are several ways in which BERT and related models can be used to obtain a vector representation of w. If w consists of a single word-piece, a natural strategy is to feed the sentence s as input and use the output vector as the representation of w. However, several authors have found that it can be beneficial to also take into account some or all of the earlier transformer layers, where finegrained word senses are mostly captured in the later layers (Reif et al., 2019) but word-level lexical semantic features are primarily found in the earlier layers (Vulic et al., 2020) . For this reason, we will also experiment with models in which the vectors w s 1 , ..., w s 12 (or w s 1 , ..., w s 24 in the case of BERT-large) are all used. In particular, our model will construct a weighted average of these vectors, where the weights will be learned from training data (see Section 3.3). For words that consist of multiple word-pieces, following common practice, we compute the representation of w as the average of its word-piece vectors. For instance, this strategy was found to outperform other aggregation strategies in Bommasani et al. (2020) .",
"cite_spans": [
{
"start": 512,
"end": 531,
"text": "(Reif et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 615,
"end": 635,
"text": "(Vulic et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 1182,
"end": 1205,
"text": "Bommasani et al. (2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining Contextualized Word Vectors",
"sec_num": "3.1"
},
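As an illustration of how the layer-wise vectors discussed above (input embedding plus the 12 or 24 transformer layers) can be collected, here is a hedged sketch assuming the HuggingFace transformers API with hidden-state output enabled; `layer_vectors` is an illustrative helper, not the authors' code.

```python
# Sketch: collect the input embedding plus all 12 layer vectors of a word in a
# sentence, averaging word-piece vectors when the word is split (BERT-base assumed).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def layer_vectors(word, sentence):
    """Return a (13, 768) tensor: embedding layer followed by transformer layers 1..12."""
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    target = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    span = next((range(i, i + len(target))
                 for i in range(len(ids) - len(target) + 1)
                 if ids[i:i + len(target)] == target), None)
    if span is None:
        return None
    with torch.no_grad():
        hidden_states = model(**enc).hidden_states            # tuple of 13 x (1, seq_len, 768)
    # average over the word-pieces of the target word at every layer
    return torch.stack([h[0, list(span)].mean(dim=0) for h in hidden_states])
```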
{
"text": "We will also experiment with a strategy that relies on masking. In this case, the word w is replaced by a single [MASK] token (even if w would normally be tokenized into more than one wordpiece). Let us write m s w for the output vector corresponding to this [MASK] token. Since this vector corresponds to BERT's prediction of what word is missing, this vector should intuitively capture the properties of w that are asserted in the given sentence. We can thus expect that these vectors m s w will be more sensitive to how the sentences mentioning w are chosen. Note that in this case, we only use the output layer, as the earlier layers are less likely to be informative.",
"cite_spans": [
{
"start": 113,
"end": 119,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining Contextualized Word Vectors",
"sec_num": "3.1"
},
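A sketch of this masking-based variant under the same transformers assumption; the naive `str.replace` used to place the [MASK] token, and the function name, are illustrative simplifications (a real pipeline would have to handle casing and multiple occurrences of the word).

```python
# Sketch: replace the whole target word by a single [MASK] token and take the
# final-layer vector at the mask position.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def masked_word_vector(word, sentence):
    """Final-layer vector of the [MASK] token standing in for `word`."""
    masked = sentence.replace(word, tokenizer.mask_token, 1)   # naive replacement
    enc = tokenizer(masked, return_tensors="pt", truncation=True)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()
    if len(mask_pos) == 0:
        return None                                            # word was not found in the sentence
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    return hidden[mask_pos[0, 0]]
```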
{
"text": "To obtain a static representation of w, we first select a set of sentences s 1 , ..., s n in which w is mentioned. Then we compute vector representations w s 1 , ..., w sn of w from each of these sentences, using any of the aforementioned strategies. Our final representation w is then obtained by averaging these sentence-specific representations, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining Contextualized Word Vectors",
"sec_num": "3.1"
},
{
"text": "w = n i=1 w s i n i=1 w s i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining Contextualized Word Vectors",
"sec_num": "3.1"
},
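Read literally, the pooling step above sums the sentence-level vectors and length-normalizes the result; a tiny sketch of that reading (the normalization is an assumption inferred from the repeated sum in the formula, PyTorch assumed):

```python
# Sketch of the final pooling step: sum the sentence-level vectors, then L2-normalize.
import torch

def pool(sentence_vectors):
    """sentence_vectors: list of equal-dimension torch tensors for one word."""
    total = torch.stack(sentence_vectors).sum(dim=0)
    return total / total.norm(p=2)
```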
{
"text": "To construct a vector representation of w, we need to select some sentences s 1 , ..., s n mentioning w. While these sentences are normally selected randomly, our hypothesis in this paper is that purely random strategies may not be optimal. Intuitively, this is because the contexts in which a given word w is most frequently mentioned might not be the most informative ones, i.e. they may not be the contexts which best characterize the properties of w that matter for a given task. To test this hypothesis, we experiment with a strategy based on topic models. Our strategy relies on the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
{
"text": "1. Identify the topics which are most relevant for the target word w;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
{
"text": "2. For each of the selected topics t, select sentences s t 1 , ..., s t n mentioning w from documents that are closely related to this topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
{
"text": "For each of the selected topics t, we can then use the sentences s t 1 , ..., s t n to construct a topic-specific vector w t , using any of the strategies from Section 3.1. The final representation of w will be computed as a weighted average of these topic-specific vectors, as will be explained in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
{
"text": "We now explain these two steps in more detail. First, we use Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to obtain a representation of each document d in the considered corpus as a multinomial distribution over m topics. Let us write \u03c4 i (d) for the weight of topic i in the representation of document d, where m i=1 \u03c4 i (d) = 1. Suppose that the word w is mentioned N w times in the corpus, and let d w j be the document in which the j th mention of w occurs. Then we define the importance of topic i for word w as follows:",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c4 i (w) = 1 N w Nw j=1 \u03c4 i (d w j )",
"eq_num": "(1)"
}
],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
{
"text": "In other words, the importance of topic i for word w is defined as the average importance of topic i for the documents in which w occurs. To select the set of topics T w that are relevant to w, we rank the topics from most to least important and then select the smallest set of topics whose cumulative importance is at least 60%, i.e. T w is the smallest set of topics such that t i \u2208Tw \u03c4 i (w) \u2265 0.6. For each of the topics t i in T w we select the corresponding sentences s t 1 , ..., s t n as follows. We rank all the documents in which w is mentioned according to \u03c4 i (d). Then, starting with the document with the highest score (i.e. the document for which topic i is most important), we iterate over the ranked list of documents, selecting all sentences from these documents in which w is mentioned, until we have obtained a total of n sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Topic-Specific Mentions",
"sec_num": "3.2"
},
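A sketch of the topic-importance score of Eq. (1) and the cumulative-importance selection described above; `doc_topics`, `mention_docs` and the helper names are assumptions, and the 6-topic cap mentioned later in the experimental setup is included as a default.

```python
# Sketch: compute tau_i(w) as the average topic weight over the documents mentioning w,
# then select the smallest set of topics reaching the cumulative-importance threshold.
import numpy as np

def topic_importance(word, mention_docs, doc_topics, num_topics):
    """mention_docs[word]: documents (with repetition) containing w; doc_topics[d]: LDA distribution of d."""
    docs = mention_docs[word]
    tau = np.zeros(num_topics)
    for d in docs:
        tau += doc_topics[d]                  # accumulate tau_i(d^w_j)
    return tau / len(docs)                    # average over the N_w mentions

def select_topics(tau, threshold=0.6, max_topics=6):
    """Smallest set of topics whose cumulative importance reaches `threshold`."""
    order = np.argsort(-tau)                  # topics ranked from most to least important
    selected, total = [], 0.0
    for i in order:
        if total >= threshold or len(selected) >= max_topics:
            break
        selected.append(int(i))
        total += tau[i]
    return selected
```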
{
"text": "Section 3.1 highlighted a number of strategies that could be used to construct a vector representation of a target word w. As mentioned before, it can be beneficial to combine vector representations from different transformer layers. To this end, we propose to learn a weighted average of the different input vectors, using a task specific supervision signal. In particular, let w 1 , ..., w k be the different vector representations we have available for word w (e.g. the vectors from different transformer layers). To combine these vectors, we compute a weighted average as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb i = exp(a i ) k j=1 exp(a i )",
"eq_num": "(2)"
}
],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w = i \u03bb i w i i \u03bb i w i",
"eq_num": "(3)"
}
],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
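One way to realize the learned soft selection of Eqs. (2)-(3) is as a small PyTorch module with one scalar parameter per input vector; this is a sketch under that assumption, not the authors' implementation.

```python
# Sketch: learned softmax weights over k candidate vectors, followed by L2 normalization.
import torch
import torch.nn as nn

class WeightedAverage(nn.Module):
    def __init__(self, k):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(k))      # a_1, ..., a_k in Eq. (2)

    def forward(self, vectors):                    # vectors: (k, dim)
        lam = torch.softmax(self.a, dim=0)         # lambda_i in Eq. (2)
        combined = (lam.unsqueeze(1) * vectors).sum(dim=0)
        return combined / combined.norm(p=2)       # normalization in Eq. (3)
```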
{
"text": "where the scalar parameters a 1 , ...a k \u2208 R are jointly learned with the model in which w is used. Another possibility would be to concatenate the input vectors w 1 , ..., w k . However, this significantly increases the dimensionality of the word representations, which can be challenging in downstream applications. In initial experiments, we also confirmed that this concatenation strategy indeed under-performs the use of weighted averages. If topic-specific vectors are used, we also want to compute a weighted average of the available vectors. However, (2)-(3) cannot be used in this case, because the set of topics for which topicspecific vectors are available differs from word to word. Let us write w i topic for the representation of word w that was obtained for topic t i , where we assume w i topic = 0 if t i / \u2208 T w . We then define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00b5 w i = exp(b i ) \u2022 1[t i \u2208 T w ] k j=1 exp(b i ) \u2022 1[t j \u2208 T w ]",
"eq_num": "(4)"
}
],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
{
"text": "w topic = i \u00b5 w i w i topic i \u00b5 w i w i topic (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
{
"text": "where 1[t i \u2208 T w ] = 1 if topic t i is considered to be relevant for word w (i.e. t i \u2208 T w ), and 1[t i \u2208 T w ] = 0 otherwise. Note that the softmax function in (4) relies on the scalar parameters b 1 , ..., b k \u2208 R, which are independent of w. However, the softmax is selectively applied to those topics that are relevant to w, which is why the resulting weight \u00b5 w i is dependent on w, or more precisely, on the set of topics T w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Word Representations",
"sec_num": "3.3"
},
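The topic-specific combination of Eqs. (4)-(5) differs from the layer combination only in the per-word masking of unavailable topics; a sketch under the same assumptions as above, with `topic_mask` a 0/1 vector indicating membership in T_w.

```python
# Sketch: masked softmax over topic parameters b_1..b_k, then a normalized weighted sum.
import torch
import torch.nn as nn

class TopicWeightedAverage(nn.Module):
    def __init__(self, num_topics):
        super().__init__()
        self.b = nn.Parameter(torch.zeros(num_topics))   # b_1, ..., b_k in Eq. (4)

    def forward(self, topic_vectors, topic_mask):        # (k, dim), (k,) with 1 for topics in T_w
        weights = torch.exp(self.b) * topic_mask
        mu = weights / weights.sum()                     # masked softmax of Eq. (4); assumes T_w is non-empty
        combined = (mu.unsqueeze(1) * topic_vectors).sum(dim=0)
        return combined / combined.norm(p=2)             # normalization in Eq. (5)
```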
{
"text": "We compare the proposed strategy with standard word embeddings and existing CLM-based strategies. In Section 4.1 we first describe our experimental setup. Section 4.2 then provides an overview of the datasets we used for the experiments, where we focus on lexical classification benchmarks. These benchmarks in particular allow us to assess how well various semantic properties can be predicted from the word vectors. The experimental results are discussed in Section 4.3 and a qualitative analysis is presented in Section 4.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We experiment with a number of different strategies for obtaining word vectors: C last We take the vector representation of w from the last transformer layer (i.e. w s 12 or w s 24 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We take the input embedding of w (i.e. w 0 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C input",
"sec_num": null
},
{
"text": "We take the average of w 0 , w s 1 , ..., w s 12 for the base models and w 0 , w s 1 , ..., w s 24 for the large models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C avg",
"sec_num": null
},
{
"text": "We use all of w 0 , w s 1 , ..., w s 12 as input for the base models, and all of w 0 , w s 1 , ..., w s 24 for the large models. These vectors are then aggregated using (2)-(3), i.e. we use a learned soft selection of the transformer layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C all",
"sec_num": null
},
{
"text": "We replace the target word by [MASK] and use the corresponding output vector.",
"cite_spans": [
{
"start": 30,
"end": 36,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "For words consisting of more than one word-piece, we average the corresponding vectors in all cases, except for C mask where we always end up with a single vector (i.e. we replace the entire word by a single [MASK] token). We also consider three variants that rely on topic-specific vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "T last We learn topic-specific vectors using the last transformer layers. These vectors are then used as input to (4)-(5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "T avg Similar to the previous case but using the average of all transformer layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "T mask Similar to the previous cases but using the output vector of the masked word mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "Furthermore, we consider variants of T last , T avg and T mask in which a standard (i.e. unweighted) average of the available topic-specific vectors is computed, instead of relying on (4)-(5). We will refer to these averaging-based variants as A last , A avg and A mask . As baselines, we also consider the two Word2vec models (Mikolov et al., 2013) :",
"cite_spans": [
{
"start": 327,
"end": 349,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "SG 300-dimensional Skip-gram vectors trained on a May 2016 dump of the English Wikipedia, using a window size of 5 tokens, and minimum frequency threshold of 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "CBOW 300-dimensional Continuous Bag-of-Words vectors trained on the same corpus and with the same hyperparameters as SG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "We show results for four pre-trained CLMs (Devlin et al., 2019; Liu et al., 2019) : BERT-baseuncased, BERT-large-uncased, RoBERTa-baseuncased, RoBERTa-large-uncased 2 . As the corpus for sampling word mentions, we used the same Wikipedia dump as for training the word embeddings models. For C mask , C last , C avg and C all we selected 500 mentions. For the topic-specific strategies (T last , T avg and T mask ) we selected 100 mentions per topic. To obtain the topic assignments, we used Latent Dirichlet Allocation (Blei et al., 2003) with 25 topics. We set \u03b1 = 0.0001 to restrict the total number of topics attributed to a document, and use default values for the other hyper-parameters 3 .",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 64,
"end": 81,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 519,
"end": 538,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
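Footnote 3 points to the gensim implementation; the following sketch shows how a 25-topic LDA model with alpha = 0.0001 could be fitted with gensim's LdaModel. The `tokenized_docs` variable (tokenized Wikipedia articles) is a placeholder, and the exact preprocessing pipeline used by the authors is not specified beyond the footnote.

```python
# Sketch: fit a 25-topic LDA model and read off per-document topic distributions tau_i(d).
from gensim.corpora import Dictionary
from gensim.models import LdaModel

dictionary = Dictionary(tokenized_docs)                       # tokenized_docs: list of token lists
corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=25, alpha=0.0001)                   # sparse document-topic prior

doc_topics = [dict(lda.get_document_topics(bow, minimum_probability=0.0))
              for bow in corpus]                              # topic distribution per document
```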
{
"text": "To select the relevant topics for a given word w, we find the smallest set of topics whose cumulative importance score \u03c4 i (w) is at least 60%, with a maximum of 6 topics. In the experiments, we restrict the vocabulary to those words with at least 100 occurrences in Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C mask",
"sec_num": null
},
{
"text": "For the experiments, we focus on a number of lexical classification tasks, where categories of individual words need to be predicted. In particular, we used two datasets which are focused on commonsense properties (e.g. dangerous): the extension of the McRae feature norms dataset (McRae et al., 2005 ) that was introduced by Forbes et al. (2019) 4 and the CSLB Concept Property Norms 5 . We furthermore used the WordNet supersenses dataset 6 , which groups nouns into broad categories (e.g. human). Finally, we also used the BabelNet domains dataset 7 (Camacho-Collados and Navigli, 2017) , which assigns lexical entities to thematic domains (e.g. music).",
"cite_spans": [
{
"start": 281,
"end": 300,
"text": "(McRae et al., 2005",
"ref_id": "BIBREF19"
},
{
"start": 553,
"end": 589,
"text": "(Camacho-Collados and Navigli, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "In our experiments, we have only considered properties/classes for which sufficient positive examples are available, i.e. at least 10 for McRae, 30 for CSLB, and 100 for WordNet supersenses and BabelNet domains. For the McRae dataset, we used the standard training-validation-test split. For the other datasets, we used random splits of 60% for training, 20% for tuning and 20% for testing. An overview of the datasets is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "For all datasets, we consider a separate binary classification problem for each property and we report the (unweighted) average of the F1 scores for the different properties. To classify words, we feed their word vector directly to a sigmoid classification layer. We optimise the network using AdamW with a cross-entropy loss. The batch size and learning rate were tuned, with possible values chosen from 4,8,16 and 0.01, 0.005, 0.001, 0.0001 respectively. Note that for C all and the topic-specific variants, the classification network jointly learns the parameters of the classification layer and the attention weights in (2) and (4) for combining the input vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
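A sketch of the per-property probe described above (one sigmoid layer on top of the fixed word vector, trained with AdamW and a binary cross-entropy loss); the class and variable names are illustrative, and the hyper-parameter grid follows the text.

```python
# Sketch: a per-property linear probe, word vector -> sigmoid, trained with AdamW.
import torch
import torch.nn as nn

class PropertyProbe(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, 1)

    def forward(self, x):                      # x: (batch, dim) static word vectors
        return torch.sigmoid(self.linear(x)).squeeze(-1)

probe = PropertyProbe(dim=768)
optimizer = torch.optim.AdamW(probe.parameters(), lr=0.001)   # lr tuned in {0.01, 0.005, 0.001, 0.0001}
loss_fn = nn.BCELoss()                                        # binary cross-entropy per property

# One training step on a batch (batch sizes tuned in {4, 8, 16}):
# loss = loss_fn(probe(batch_vectors), batch_labels.float()); loss.backward(); optimizer.step()
```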
{
"text": "The results are shown in Table 1 . We consistently see that the topic-specific variants outperform the 4 https://github.com/mbforbes/ physical-commonsense 5 https://cslb.psychol.cam.ac.uk/ propnorms 6 https://wordnet.princeton.edu/ download 7 http://lcl.uniroma1.it/babeldomains/ different C-variants, often by a substantial margin. This confirms our main hypothesis, namely that using topic models to determine how context sentences are selected has a material effect on the quality of the resulting word representations. Among the C-variants, the best results are obtained by C mask and C last . None of the three T-variants consistently outperforms the others. Surprisingly, the A-variants outperform the corresponding Tvariants in several cases. This suggests that the outperformance of the topic-specific vectors primarily comes from the fact that the context sentences for each word were sampled in a more balanced way (i.e. from documents covering a broader range of topics), rather than from the ability to adapt the topic weights based on the task. This is a clear benefit for applications, as the A-variants allow us to simply represent each word as a static word vector. The performance of SG and CBOW is also surprisingly strong. In particular, these traditional word embedding models outperform all of the Cvariants, as well as the T and A variants in some cases, especially for BERT-base and RoBERTabase. This seems to be related, at least in part, to the lower dimensionality of these vectors. The classification network has to be learned from a rather small number of examples, especially for McRae and CSLB. Having 768 or 1024 dimensional input vectors can be problematic in such cases. To analyse this effect, we used Principal Component Analysis (PCA) to reduce the dimensionality of the CLM-derived vectors to 300. For this experiment, we focused in particular on C mask and T mask . The results are also shown in Table 1 as C mask -PCA and T mask -PCA. As can be seen, this dimensionality reduction step has a clearly beneficial effect, with T mask -PCA outperforming all baselines, except for the BabelNet domains benchmark. The latter benchmark is focused on thematic similarity rather than semantic properties, which the CLMbased representations seem to struggle with.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1934,
"end": 1941,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
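The PCA step mentioned above is a standard 300-component reduction; a one-step sketch with scikit-learn, where the `vectors` array (the stacked CLM-derived word vectors) is a placeholder.

```python
# Sketch: reduce 768- or 1024-dimensional CLM-derived vectors to 300 dimensions with PCA.
from sklearn.decomposition import PCA

pca = PCA(n_components=300)
reduced = pca.fit_transform(vectors)          # vectors: (n_words, 768) -> reduced: (n_words, 300)
```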
{
"text": "Topic-specific vectors can be expected to focus on different properties, depending on the chosen topic. In this section, we present a qualitative analysis in support of this view. In Table 3 we list, for a sample of words from the WordNet supersenses dataset, the top 5 nearest neighbours per topic in terms of cosine similarity. For this analysis, we used the BERT-base masked embeddings. We can see that for the word 'partner', its topic-specific embeddings correspond to its usage in the context of 'finance', 'stock market' and 'fiction'. These three embeddings roughly correspond to three different senses of the word 8 . This de-conflation or implicit wise embeddings do not represent different senses of these words, but rather indicate different types of usage (possibly related to cultural or commonsense properties). Specifically, we see that the same sense of 'sky' is used in mythological, landscaping and geological contexts. Likewise, 'strength' is clustered into different mentions, but while this word also preserves the same sense, it is clearly used in different contexts: physical, as a human feature, and in military contexts. Finally, 'noon' and 'galaxy' (which only occur in two topics), also show this topicality. In both cases, we have representations that reflect their physics and everyday usages, for the same senses of these words. As a final analysis, In Figure 1 we plot a twodimensional PCA-reduced visualization of selected words from the McRae dataset, using two versions of the topic-specific vectors: T mask and T last . In both cases, BERT-base was used to obtain the vectors. We select four pairs of concepts which are topically related, which we plot with the same datapoint marker (animals, plants, weapons and musical instruments). For T last , we can see that the different topic-specific representations of the same word are clustered together, which is in accordance with the findings from Ethayarajh (2019). For T mask , we can see that the representations of words with similar properties (e.g. cheetah and hyena) become more similar, suggesting that T mask is more tailored towards modelling the semantic properties of words, perhaps at the expense of a reduced ability to differentiate between closely related words. The case of turnip and peach is particularly striking, as the vectors are clearly separated in the T last plot, while being clustered together in the T mask plot.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 190,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1384,
"end": 1392,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "4.4"
},
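A sketch of the nearest-neighbour computation behind Table 3, assuming the topic-specific vectors are held in a dictionary keyed by (word, topic) pairs; all names here are illustrative.

```python
# Sketch: rank entries by cosine similarity to a topic-specific query vector.
import torch
import torch.nn.functional as F

def nearest_neighbours(query, vectors, k=5):
    """Return the k entries of `vectors` most similar to `query` by cosine similarity."""
    keys = list(vectors.keys())
    stacked = torch.stack([vectors[key] for key in keys])          # (N, dim)
    sims = F.cosine_similarity(stacked, query.unsqueeze(0), dim=1) # (N,)
    top = torch.topk(sims, k=min(k, len(keys)))
    return [(keys[int(i)], float(sims[int(i)])) for i in top.indices]
```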
{
"text": "We have proposed a strategy for learning static word vectors, in which topic models are used to help select diverse mentions of a given target word and a contextualized language model is subsequently used to infer vector representations from the selected mentions. We found that selecting an equal number of mentions per topic substantially outperforms purely random selection strategies. We also considered the possibility of learning a weighted average of topic-specific vector representations, which in principle should allow us to \"tune\" word representations to different tasks, by learning task-specific topic importance weights. However, in practice we found that a standard average of the topic specific vectors leads to a comparable performance, suggesting that the outperformance of our vectors comes from the fact that they are obtained from a more diverse set of contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "All code and data to replicate our experiments is available at https://github.com/Activeyixiao/ topic-specific-vector/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the implementations from https://github. com/huggingface/transformers.3 We used the implementation from https: //radimrehurek.com/gensim/wiki.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, we can directly pinpoint these vectors to the following WordNet(Miller, 1995) senses: partner.n.03, collaborator.n.03 and spouse.n.01. disambiguation is also found for words such as 'cell', 'port', 'bulb' or 'mail', which shows a striking relevance of the role of mail in the election topic, being semantically similar in the corresponding vector space to words such as 'telemarketing', 'spam' or 'wiretap'. In the case of 'fingerprint', we can also see some implicit disambiguation (distinguishing between fingerprinting in computer science, as a form of hashing, and the more traditional sense). However, we also see a more topical distinction, revealing differences between the role played by fingerprints in fictional works and forensic research. This tendency of capturing different contexts is more evidently shown in the last four examples. First, for 'sky' and 'strength', the topic-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was performed using the computational facilities of the Advanced Research Computing @ Cardiff (ARCCA) Division, Cardiff University and using HPC resources from GENCI-IDRIS (Grant 2021-[AD011012273]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Combining bert with static word embeddings for categorizing social media",
"authors": [
{
"first": "Israa",
"middle": [],
"last": "Alghanmi",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy Usergenerated Text",
"volume": "",
"issue": "",
"pages": "28--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Israa Alghanmi, Luis Espinosa Anke, and Steven Schockaert. 2020. Combining bert with static word embeddings for categorizing social media. In Pro- ceedings of the Sixth Workshop on Noisy User- generated Text, pages 28-33.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards better substitution-based word sense induction",
"authors": [
{
"first": "Asaf",
"middle": [],
"last": "Amrami",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.12598"
]
},
"num": null,
"urls": [],
"raw_text": "Asaf Amrami and Yoav Goldberg. 2019. To- wards better substitution-based word sense induc- tion. arXiv:1905.12598.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pre-training is a hot topic: Contextualized document embeddings improve topic coherence",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Terragni",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Silvia Terragni, and Dirk Hovy. 2020. Pre-training is a hot topic: Contextual- ized document embeddings improve topic coher- ence. CoRR, abs/2004.03974.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Interpreting pretrained contextualized representations via reductions to static embeddings",
"authors": [
{
"first": "Rishi",
"middle": [],
"last": "Bommasani",
"suffix": ""
},
{
"first": "Kelly",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings ACL",
"volume": "",
"issue": "",
"pages": "4758--4781",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting pretrained contextualized repre- sentations via reductions to static embeddings. In Proceedings ACL, pages 4758-4781.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2015 task 17: Taxonomy extraction evaluation (TExEval). In Proceedings SemEval",
"authors": [
{
"first": "Georgeta",
"middle": [],
"last": "Bordea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Faralli",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "902--910",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgeta Bordea, Paul Buitelaar, Stefano Faralli, and Roberto Navigli. 2015. SemEval-2015 task 17: Tax- onomy extraction evaluation (TExEval). In Proceed- ings SemEval, pages 902-910.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2016 task 13: Taxonomy extraction evaluation (texeval-2)",
"authors": [
{
"first": "Georgeta",
"middle": [],
"last": "Bordea",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings SemEval",
"volume": "",
"issue": "",
"pages": "1081--1091",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016. Semeval-2016 task 13: Taxonomy extraction evalu- ation (texeval-2). In Proceedings SemEval, pages 1081-1091.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BabelDomains: Large-scale domain labeling of lexical resources",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings EACL",
"volume": "",
"issue": "",
"pages": "223--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Camacho-Collados and Roberto Navigli. 2017. BabelDomains: Large-scale domain labeling of lex- ical resources. In Proceedings EACL, pages 223- 228.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "When is a bishop not like a rook? when it's like a rabbi! multiprototype BERT embeddings for estimating semantic relationships",
"authors": [
{
"first": "Gabriella",
"middle": [],
"last": "Chronis",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings CoNLL",
"volume": "",
"issue": "",
"pages": "227--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriella Chronis and Katrin Erk. 2020. When is a bishop not like a rook? when it's like a rabbi! multi- prototype BERT embeddings for estimating seman- tic relationships. In Proceedings CoNLL, pages 227- 244.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings NAACL-HLT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "55--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contex- tualized word representations? comparing the geom- etry of BERT, ELMo, and GPT-2 embeddings. In Proceedings EMNLP, pages 55-65.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Do neural language representations learn physical commonsense? Proceedings CogSci",
"authors": [
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do neural language representations learn physical commonsense? Proceedings CogSci.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deepalignment: Unsupervised ontology matching with refined word vectors",
"authors": [
{
"first": "Prodromos",
"middle": [],
"last": "Kolyvakis",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings NAACL-HLT",
"volume": "",
"issue": "",
"pages": "787--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prodromos Kolyvakis, Alexandros Kalousis, and Dim- itris Kiritsis. 2018. Deepalignment: Unsupervised ontology matching with refined word vectors. In Proceedings NAACL-HLT, pages 787-798.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ontology completion using graph convolutional networks",
"authors": [
{
"first": "Na",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zied",
"middle": [],
"last": "Bouraoui",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings ISWC",
"volume": "",
"issue": "",
"pages": "435--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Na Li, Zied Bouraoui, and Steven Schockaert. 2019. Ontology completion using graph convolutional net- works. In Proceedings ISWC, pages 435-452.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generative topic embedding: a continuous representation of documents",
"authors": [
{
"first": "Shaohua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. 2016. Generative topic embedding: a contin- uous representation of documents. In Proceedings ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards better context-aware lexical semantics: Adjusting contextualized representations through static anchors",
"authors": [
{
"first": "Qianchu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "4066--4075",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qianchu Liu, Diana McCarthy, and Anna Korhonen. 2020. Towards better context-aware lexical se- mantics: Adjusting contextualized representations through static anchors. In Proceedings EMNLP, pages 4066-4075.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tat-Seng Chua, and Maosong Sun",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings AAAI",
"volume": "",
"issue": "",
"pages": "2418--2424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In Proceed- ings AAAI, pages 2418-2424.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Label embedding for zero-shot fine-grained named entity typing",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Sa",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings COLING",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In Proceedings COLING, pages 171-180.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic feature production norms for a large set of living and nonliving things",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
}
],
"year": 2005,
"venue": "Behavior research methods",
"volume": "37",
"issue": "",
"pages": "547--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken McRae et al. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior research methods, 37:547-559.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Proceedings ICLR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Wordnet: a lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for English. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Joint word and entity embeddings for entity retrieval from a knowledge graph",
"authors": [
{
"first": "Fedor",
"middle": [],
"last": "Nikolaev",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Kotov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings ECIR",
"volume": "",
"issue": "",
"pages": "141--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fedor Nikolaev and Alexander Kotov. 2020. Joint word and entity embeddings for entity retrieval from a knowledge graph. In Proceedings ECIR, pages 141-155.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "2020. tBERT: Topic models and BERT joining forces for semantic similarity detection",
"authors": [
{
"first": "Nicole",
"middle": [],
"last": "Peinelt",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings ACL",
"volume": "",
"issue": "",
"pages": "7047--7055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicole Peinelt, Dong Nguyen, and Maria Liakata. 2020. tBERT: Topic models and BERT joining forces for semantic similarity detection. In Proceedings ACL, pages 7047-7055.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings EMNLP, pages 1532- 1543.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Visualizing and measuring the geometry of BERT",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Reif",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Coenen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Pearce",
"suffix": ""
},
{
"first": "Been",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings NeurIPS",
"volume": "",
"issue": "",
"pages": "8592--8600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Vi\u00e9gas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Proceedings NeurIPS, pages 8592-8600.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings EMNLP, pages 3982- 3992.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Jointly learning word embeddings and latent topics",
"authors": [
{
"first": "Bei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Shoaib",
"middle": [],
"last": "Jameel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
},
{
"first": "Kwun Ping",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings SIGIR",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, and Kwun Ping Lai. 2017. Jointly learning word em- beddings and latent topics. In Proceedings SIGIR, pages 375-384.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Zero-shot learning through cross-modal transfer",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Milind",
"middle": [],
"last": "Ganjoo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings NIPS",
"volume": "",
"issue": "",
"pages": "935--943",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Milind Ganjoo, Christopher D Man- ning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Proceedings NIPS, pages 935-943.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Topic modeling with contextualized word representation clusters",
"authors": [
{
"first": "Laure",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laure Thompson and David Mimno. 2020. Topic mod- eling with contextualized word representation clus- ters. CoRR, abs/2010.12626.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Probing pretrained language models for lexical semantics",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Edoardo",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Glavas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "7222--7240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vulic, Edoardo Maria Ponti, Robert Litschko, Goran Glavas, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings EMNLP, pages 7222-7240.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A neural generative model for joint learning topics and topicspecific word embeddings",
"authors": [
{
"first": "Lixing",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Deyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2020,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "8",
"issue": "",
"pages": "471--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lixing Zhu, Deyu Zhou, and Yulan He. 2020. A neural generative model for joint learning topics and topic- specific word embeddings. Trans. Assoc. Comput. Linguistics, 8:471-485.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "49.1 59.6 54.5 55.6 49.1 59.6 54.5 55.6 49.1 59.6 54.5 55.6 49.1 CBOW 61.1 50.6 48.4 45.0 61.1 50.6 48.4 45.0 61.1 50.6 48.4 45.0 61.1 50.6 48.4 45.0 Cmask 54.6 44.0 48.8 38.9 52.0 43.0 48.7 38.7 56.0 43.4 47.1 42.1 55.8 42.3 47.0 38.1 Clast 52.9 45.1 46.7 38.4 54.3 46.2 48.2 39.6 56.5 42.2 46.1 37.3 56.3 43.8 46.5 37.8",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>BERT-base</td><td>BERT-large</td><td>RoBERTa-base</td><td>RoBERTa-large</td></tr><tr><td/><td>MC CS SS BD</td><td>MC CS SS BD</td><td>MC CS SS BD</td><td>MC CS SS BD</td></tr><tr><td colspan=\"5\">SG 59.6 54.5 55.6 C input 48.9 32.2 41.1 34.8 53.1 33.0 39.0 34.5 42.1 25.6 35.2 31.8 51.3 31.4 28.6 36.0</td></tr><tr><td>Cavg</td><td colspan=\"4\">45.9 32.8 44.1 36.4 50.0 37.1 42.7 36.7 39.4 21.6 30.8 28.7 43.7 22.9 30.1 28.2</td></tr><tr><td>Call</td><td colspan=\"4\">45.9 31.0 41.3 35.4 45.0 33.7 43.4 24.6 32.8 19.0 25.9 24.7 37.5 21.2 30.4 28.6</td></tr><tr><td>Tmask</td><td colspan=\"4\">58.6 54.1 60.1 45.8 62.8 54.6 61.4 46.2 56.4 49.4 56.7 42.1 59.6 50.4 57.2 42.1</td></tr><tr><td>Tlast</td><td colspan=\"4\">63.6 51.8 59.5 47.3 60.5 54.8 61.2 49.2 52.8 40.1 54.6 41.2 60.2 48.5 59.5 45.2</td></tr><tr><td>Tavg</td><td colspan=\"4\">61.0 52.7 59.6 42.3 65.2 52.4 60.7 48.4 54.2 39.9 55.9 41.5 59.5 46.8 60.0 45.2</td></tr><tr><td>Amask</td><td colspan=\"4\">61.6 53.5 59.6 41.5 63.0 56.4 60.6 41.5 61.2 55.3 59.6 40.6 63.4 57.1 61.2 42.3</td></tr><tr><td>Alast</td><td colspan=\"4\">60.8 49.6 57.9 44.4 61.4 55.5 60.3 46.7 50.3 36.8 56.5 39. 7 59.5 47.3 58.0 41.2</td></tr><tr><td>Aavg</td><td colspan=\"4\">60.7 49.7 57.9 44.4 63.9 52.0 59.4 44.0 55.6 40.6 56.4 39.8 59.4 47.3 58.0 41.2</td></tr><tr><td colspan=\"5\">Cmask-PCA 56.8 46.4 49.2 38.8 56.6 43.5 48.4 39.2 58.8 51.6 50.4 39.2 58.3 49.8 49.3 39.3</td></tr><tr><td colspan=\"5\">Tmask-PCA 63.3 56.2 62.6 46.9 64.4 57.3 60.6 48.0 61.6 55.8 62.5 46.0 65.4 56.3 64.1 46.4</td></tr></table>"
},
"TABREF1": {
"text": "Results of lexical feature classification experiments for the extended McRae feature norms (MC), CSLB norms (CS), WordNet Supersenses (SS) and BabelNet domains (BD). Results are reported in terms of F1 (%).Figure 1: BERT-base topic-specific vectors when using the output vectors without using masking (left) and with masking (right). Words have been selected from the McRae dataset.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td>Type</td><td colspan=\"2\">Words Properties</td></tr><tr><td>McRae</td><td>Commonsense</td><td>475</td><td>49</td></tr><tr><td>CSLB</td><td>Commonsense</td><td>570</td><td>54</td></tr><tr><td colspan=\"2\">WN supersenses Taxonomic</td><td>24,324</td><td>24</td></tr><tr><td>BN domains</td><td>Topical</td><td>43,319</td><td>34</td></tr></table>"
},
"TABREF2": {
"text": "Overview of the considered datasets.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "Nearest neighbours of topic-specific embeddings for a sample of words from the WordNet SuperSenses dataset, using BERT-base embeddings. The top 6 selected samples illustrate clear topic distributions per word sense, and the bottom 4 also show topical properties within the same sense. The most relevant words for each topic are shown under the TOPIC column.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}