{
"paper_id": "K16-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:22.166535Z"
},
"title": "context2vec: Learning Generic Context Embedding with Bidirectional LSTM",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Context representations are central to various NLP tasks, such as word sense disambiguation, named entity recognition, coreference resolution, and many more. In this work we present a neural model for efficiently learning a generic context embedding function from large corpora, using bidirectional LSTM. With a very simple application of our context representations, we manage to surpass or nearly reach state-of-the-art results on sentence completion, lexical substitution and word sense disambiguation tasks, while substantially outperforming the popular context representation of averaged word embeddings. We release our code and pretrained models, suggesting they could be useful in a wide variety of NLP tasks.",
"pdf_parse": {
"paper_id": "K16-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "Context representations are central to various NLP tasks, such as word sense disambiguation, named entity recognition, coreference resolution, and many more. In this work we present a neural model for efficiently learning a generic context embedding function from large corpora, using bidirectional LSTM. With a very simple application of our context representations, we manage to surpass or nearly reach state-of-the-art results on sentence completion, lexical substitution and word sense disambiguation tasks, while substantially outperforming the popular context representation of averaged word embeddings. We release our code and pretrained models, suggesting they could be useful in a wide variety of NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Generic word embeddings capture semantic and syntactic information about individual words in a compact low-dimensional representation. While they are trained to optimize a generic taskindependent objective function, word embeddings were found useful in a broad range of NLP tasks, making an overall huge impact in recent years. A major advancement in this field was the introduction of highly efficient models, such as word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) , for learning generic word embeddings from very large corpora. Capturing information from such corpora substantially increased the value of word embeddings to both unsupervised and semi-supervised NLP tasks.",
"cite_spans": [
{
"start": 428,
"end": 451,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF23"
},
{
"start": 462,
"end": 487,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To make inferences regarding a concrete target word instance, good representations of both the target word type and the given context are helpful. For example, in the sentence \"I can't find [April]\", we need to consider both the target word April and its context \"I can't find [ ]\" to infer that April probably refers to a person. This principle applies to various tasks, including word sense disambiguation, co-reference resolution and named entity recognition (NER).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Like target words, contexts are commonly represented via word embeddings. In an unsupervised setting, such representations were found useful for measuring context-sensitive similarity (Huang et al., 2012) , word sense disambiguation (Chen et al., 2014) , word sense induction (K\u00e5geb\u00e4ck et al., 2015) , lexical substitution (Melamud et al., 2015b) , sentence completion (Liu et al., 2015) and more. The context representations used in such tasks are commonly just a simple collection of the individual embeddings of the neighboring words in a window around the target word, or a (sometimes weighted) average of these embeddings. We note that such approaches do not include any mechanism for optimizing the representation of the entire sentential context as a whole.",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 233,
"end": 252,
"text": "(Chen et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 276,
"end": 299,
"text": "(K\u00e5geb\u00e4ck et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 323,
"end": 346,
"text": "(Melamud et al., 2015b)",
"ref_id": "BIBREF20"
},
{
"start": 369,
"end": 387,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In supervised settings, various NLP systems use labeled data to learn how to consider context word representations in a more optimized task-specific way. This was done in tasks, such as chunking, NER, semantic role labeling, and co-reference resolution (Turian et al., 2010; Collobert et al., 2011; Melamud et al., 2016) , mostly by considering the embeddings of words in a window around the target of interest. More recently, bidirectional recurrent neural networks, and specifically bidirectional LSTMs, were used in such tasks to learn internal representations of wider sentential contexts (Zhou and Xu, 2015; Lample et al., 2016) . Since supervised data is usually limited in size, it has been shown that training such systems, using word embeddings that were pre-trained on large corpora, improves performance significantly. Yet, pre-trained word embeddings carry limited information regarding the inter-dependencies between target words and their sentential context as a whole. To model this (and more), the supervised systems still need to rely heavily on their albeit limited supervised data.",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF31"
},
{
"start": 275,
"end": 298,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF3"
},
{
"start": 299,
"end": 320,
"text": "Melamud et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 593,
"end": 612,
"text": "(Zhou and Xu, 2015;",
"ref_id": "BIBREF34"
},
{
"start": 613,
"end": 633,
"text": "Lample et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we present context2vec, an unsupervised model and toolkit 1 for efficiently learning generic context embedding of wide sentential contexts, using bidirectional LSTM. Essentially, we use large plain text corpora to learn a neural model that embeds entire sentential contexts and target words in the same low-dimensional space, which is optimized to reflect inter-dependencies between targets and their entire sentential context as a whole. To demonstrate their high quality, we show that with a very simple application of our context representations, we are able to surpass or nearly reach state-of-the-art results on sentence completion, lexical substitution and word sense disambiguation tasks, while substantially outperforming the common average-of-word-embeddings representation (denoted AWE). We further hypothesize that both unsupervised and semi-supervised systems may benefit from using our pre-trained models, instead or in addition to individual pre-trained word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Context2vec's Neural Model 2.1 Model Overview",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main goal of our model is to learn a generic task-independent embedding function for variable-length sentential contexts around target words. To do this, we propose a neural network architecture, which is based on word2vec's CBOW architecture (Mikolov et al., 2013a) , but replaces its naive context modeling of averaged word embeddings in a fixed window, with a much more powerful neural model, using bidirectional LSTM. Our proposed architecture is illustrated in Figure 1, together with the analogical word2vec architecture. Both models learn context and target 1 Source code and pre-trained models are available at: http://www.cs.biu.ac.il/nlp/resources/ downloads/context2vec/ word representations at the same time, by embedding them into the same low-dimensional space, with the objective of having the context predict the target word via a log linear model. However, we utilize a much more powerful parametric model to capture the essence of sentential context.",
"cite_spans": [
{
"start": 247,
"end": 270,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 470,
"end": 476,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The left-hand side of Figure 1b illustrates how context2vec represents sentential context. We use a bidirectional LSTM recurrent neural network, feeding one LSTM network with the sentence words from left to right, and another from right to left. The parameters of these two networks are completely separate, including two separate sets of left-to-right and right-to-left context word embeddings. To represent the context of a target word in a sentence (e.g. for \"John [submitted] a paper\"), we first concatenate the LSTM output vector representing its left-to-right context (\"John\") with the one representing its right-to-left context (\"a paper\"). With this, we aim to capture the relevant information in the sentential context, even when it is remote from the target word. Next, we feed this concatenated vector into a multi-layer perceptron to be capable of representing non-trivial dependencies between the two sides of the context. We consider the output of this layer as the embedding of the entire joint sentential context around the target word. At the same time, the target word itself (right-hand side of Figure 1b) is represented with its own embedding, equal in dimensionality to that of the sentential context. We note that the only (yet crucial) difference between our model and word2vec's CBOW (Figure 1a) is that CBOW represents the context around a target word as a simple average of the embeddings of the context words in a window around it, while con-text2vec utilizes a full-sentence neural representation of context. Finally, to learn the parameters of our network, we use word2vec's negative sampling objective function, with a positive pair being a target word and its entire sentential context, and respective k negative pairs as random target words, sampled from a (smoothed) unigram distribution over the vocabulary, paired with the same context. With this, we learn both the context embedding network parameters and the target word embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 31,
"text": "Figure 1b",
"ref_id": "FIGREF1"
},
{
"start": 1114,
"end": 1124,
"text": "Figure 1b)",
"ref_id": "FIGREF1"
},
{
"start": 1308,
"end": 1319,
"text": "(Figure 1a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast to word2vec and similar word embedding models that use context modeling mostly internally and consider the target word embeddings as their main output, our primary focus is the context representation. Our model achieves its objective by assigning similar embeddings to sentential contexts and their associated target words. Further, similar to the case in word2vec models, this indirectly results in assigning similar embeddings to target words that are associated with similar sentential contexts, and conversely to sentential contexts that are associated with similar target words. We will show in the following sections how these properties make our model useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use a bidirectional LSTM recurrent neural network to obtain a sentence-level context representation. Let lLS be an LSTM reading the words of a given sentence from left to right, and let rLS be a reverse one reading the words from right to left. Given a sentence w 1:n , our 'shallow' bidirectional LSTM context representation for the target w i is defined as the following vector concatenation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "biLS(w 1:n , i) = lLS(l 1:i\u22121 ) \u2295 rLS(r n:i+1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "where l/r represent distinct left-to-right/right-toleft word embeddings of the sentence words. 2 This definition is a bit different than standard bidirectional LSTM, as we do not feed the LSTMs with the target word itself (i.e. the word in position i). Next, we apply the following non-linear function on the concatenation of the left and right context representations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "MLP(x) = L 2 (ReLU(L 1 (x)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "where MLP stands for Multi Layer Perceptron, ReLU is the Rectified Linear Unit activation function, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "L i (x) = W i x + b i is a fully connected linear operation. Let c = (w 1 , ..., w i\u22121 , \u2212, w i+1 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": ".., w n ) be the sentential context of the word in position i. We define con-text2vec's representation of c as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "c = MLP(biLS(w 1:n , i)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
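The formal definition above maps directly to code. Below is a minimal PyTorch sketch of such a context encoder (the paper's released implementation uses Chainer, so the module names, single-layer LSTMs, and class structure here are illustrative assumptions; the default sizes follow the hyperparameters listed in Table 5): two separate LSTMs read the left context left-to-right and the right context right-to-left, excluding the target word, and their final states are concatenated and passed through MLP(x) = L_2(ReLU(L_1(x))).

```python
# Minimal PyTorch sketch of a context2vec-style context encoder.
# Illustrative only: the original implementation uses Chainer, and the module
# names here are placeholders; sizes mirror Table 5 of the paper.
import torch
import torch.nn as nn

class Context2VecEncoder(nn.Module):
    def __init__(self, vocab_size, word_dim=300, lstm_dim=600, ctx_dim=600):
        super().__init__()
        # Separate embedding tables for the two reading directions.
        self.l2r_emb = nn.Embedding(vocab_size, word_dim)
        self.r2l_emb = nn.Embedding(vocab_size, word_dim)
        self.l2r_lstm = nn.LSTM(word_dim, lstm_dim, batch_first=True)
        self.r2l_lstm = nn.LSTM(word_dim, lstm_dim, batch_first=True)
        # Two-layer MLP applied to the concatenated LSTM states.
        self.mlp = nn.Sequential(
            nn.Linear(2 * lstm_dim, 2 * lstm_dim),
            nn.ReLU(),
            nn.Linear(2 * lstm_dim, ctx_dim),
        )
        # Target word embeddings live in the same ctx_dim space.
        self.target_emb = nn.Embedding(vocab_size, ctx_dim)

    def forward(self, sent_ids, i):
        """Embed the context of position i in one sentence (1 x n id tensor)."""
        left = self.l2r_emb(sent_ids[:, :i])                    # w_1 .. w_{i-1}
        right = self.r2l_emb(sent_ids[:, i + 1:].flip([1]))     # w_n .. w_{i+1}
        _, (h_left, _) = self.l2r_lstm(left)
        _, (h_right, _) = self.r2l_lstm(right)
        bi = torch.cat([h_left[-1], h_right[-1]], dim=-1)       # biLS(w_{1:n}, i)
        return self.mlp(bi)                                     # context vector c

# Toy usage: a 6-token "sentence" (ids include BOS/EOS padding), target at i=3.
enc = Context2VecEncoder(vocab_size=1000)
sent = torch.randint(0, 1000, (1, 6))
print(enc(sent, i=3).shape)  # torch.Size([1, 600])
```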
{
"text": "Next, we denote the embedding of a target word t as t. We use the same embedding dimensionality for target and sentential context representations. To learn target word and context representations, we use the word2vec negative sampling objective function (Mikolov et al., 2013b) :",
"cite_spans": [
{
"start": 254,
"end": 277,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "S = t,c log \u03c3( t \u2022 c) + k i=1 log \u03c3(\u2212 t i \u2022 c) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "where the summation goes over each word token t in the training corpus and its corresponding (single) sentential context c, and \u03c3 is the sigmoid function. t 1 , ..., t k are the negative samples, independently sampled from a smoothed version of the target words unigram distribution: p \u03b1 (t) \u221d (#t) \u03b1 , such that 0 \u2264 \u03b1 < 1 is a smoothing factor, which increases the probability of rare words. Levy and Goldberg (2014b) proved that when the objective function in Equation (1) is applied to single-word contexts, it is optimized when:",
"cite_spans": [
{
"start": 393,
"end": 418,
"text": "Levy and Goldberg (2014b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
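A small numpy sketch of the per-pair term of Equation (1) together with the smoothed negative-sampling distribution p_\u03b1(t) \u221d (#t)^\u03b1 (all embeddings, counts and sizes below are toy values introduced only for illustration):

```python
# Numpy sketch of the negative-sampling objective (Eq. 1) for one
# (target, context) pair, with negatives drawn from the alpha-smoothed
# unigram distribution p_alpha(t) ~ count(t)^alpha. Toy values throughout.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_negatives(counts, alpha, k):
    """Sample k target-word ids from the alpha-smoothed unigram distribution."""
    p = counts.astype(float) ** alpha
    p /= p.sum()
    return rng.choice(len(counts), size=k, p=p)

def pair_objective(t_vec, c_vec, target_matrix, counts, alpha=0.75, k=10):
    """log sigma(t.c) + sum_i log sigma(-t_i.c) for one positive pair."""
    neg_ids = sample_negatives(counts, alpha, k)
    pos = np.log(sigmoid(t_vec @ c_vec))
    neg = np.log(sigmoid(-(target_matrix[neg_ids] @ c_vec))).sum()
    return pos + neg

# Toy setup: 50-word vocabulary, 8-dimensional embeddings.
vocab, dim = 50, 8
targets = rng.normal(size=(vocab, dim))    # target-word embeddings
counts = rng.integers(1, 1000, size=vocab) # corpus counts per word
c = rng.normal(size=dim)                   # context vector from the encoder
print(pair_objective(targets[3], c, targets, counts))
```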
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t \u2022 c = PMI \u03b1 (t, c) \u2212 log(k)",
"eq_num": "(2)"
}
],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "where PMI(t, c) = log p(t,c) p\u03b1(t)p(c) is the pointwise mutual information between the target word t and the context word c. The analysis presented in Levy and Goldberg (2014b) is valid for every cooccurrence matrix that describes the joint distribution of two random variables. Specifically, it can be applied to our case, where the context is not just a single word but an entire sentential context of a target word. Accordingly, we can view the targetcontext embedding obtained by our algorithm as a factorization of the PMI matrix between all possible target words and all possible different sentential contexts. Unlike the case of single-word contexts, it is not feasible to explicitly compute here this PMI matrix due to the exponential number of possible sentential contexts. However, the objective function that we optimize still aims to best approximate it. Based on the above analysis, we can expect the inner-product of our target and context embeddings to approximate PMI \u03b1 (c, t). We note that accordingly, with larger values of \u03b1, there will be more bias towards placing rare words closer to their associated contexts in this space.",
"cite_spans": [
{
"start": 151,
"end": 176,
"text": "Levy and Goldberg (2014b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Specification and Analysis",
"sec_num": "2.2"
},
{
"text": "To demonstrate the qualities of the embedded space learned by context2vec, we illustrate three types of similarity metrics in that space: target-tocontext (t2c), context-to-context (c2c) and targetto-target (t2t). All these are measured by the vector cosine value between the respective embedding representations. Only the latter target-to-target metric is the one typically used when illustrating and evaluating word embedding models, such as word2vec. Figure 2 provides a 2D illustration of such a space and respective metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 454,
"end": 462,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model Illustration",
"sec_num": "2.3"
},
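All three metrics reduce to cosine similarity between vectors in the shared embedding space; a tiny Python helper, with random vectors standing in for learned target and context embeddings:

```python
# The three similarity metrics (t2c, c2c, t2t) are all cosine similarities
# in the shared embedding space. Vectors below are random stand-ins.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
t1, t2 = rng.normal(size=600), rng.normal(size=600)  # target word embeddings
c1, c2 = rng.normal(size=600), rng.normal(size=600)  # sentential context embeddings

print("t2c:", cosine(t1, c1))  # target-to-context
print("c2c:", cosine(c1, c2))  # context-to-context
print("t2t:", cosine(t1, t2))  # target-to-target
```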
{
"text": "In Table 1 we show sentential contexts and the target words that are closest to them, using the target-to-context similarity metric with con-text2vec embeddings. As can be seen, the bidirectional LSTM modeling of context2vec is indeed capable in this case to capture long range dependencies, as well as to take both sides of the context into account. In Table 2 we show the closest target words to given contexts, using different context2vec models, each learned with a different negative sampling smoothing parameter \u03b1. This illustrates the bias that high \u03b1 values introduce towards rare words, as predicted with the analysis in section 2.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
},
{
"start": 354,
"end": 361,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model Illustration",
"sec_num": "2.3"
},
{
"text": "Next, to illustrate the context-to-context similarity metric, we took the set of contexts for the target lemma add from the training set of Senseval-3 (Mihalcea et al., 2004) . In Table 3 we show an example for a 'query' context from that set and the other two most similar contexts to it, based on context2vec and AWE (average of Skip-gram word embeddings) context representations. Melamud et al. (2015a) argues that since contexts induce meanings (or senses) for target words, a good context similarity measure should assign high similarity values to contexts that induce similar senses for the same target word. As can be seen in this example, AWE's similarity measure seems to be influenced by the presence of the location names in the contexts, even though they have little effect on the perceived meaning of add in the sentences. Indeed, the sense of add in the closest contexts retrieved by AWE is different than that in the 'query' context. In this case, context2vec's similarity measure was robust to this problem. Table 4 , we show the closest target words to a few given target words, based on the target-to-target similarity metric. We compare context2vec's target word embeddings to Skipgram word2vec embeddings, trained with 2-word and 10-word windows. As can be seen, our model seems to better preserve the function of the given target words including part-of-speech and even tense, in comparison to the 2-word window model, and even more so compared to the 10-word window one. The intuition for this behavior is that Skip-gram literally skips words in the context around the target word and therefore may find, for instance, the contexts of san and francisco to be very similar. In contrast, our model considers only entire sentential contexts, taking context word order and position into consideration. Melamud et al. (2016) showed that target word embeddings, learned from context representations that are generated using n-gram language models, also exhibit function-preserving similarities, which is consistent with our observations.",
"cite_spans": [
{
"start": 151,
"end": 174,
"text": "(Mihalcea et al., 2004)",
"ref_id": "BIBREF22"
},
{
"start": 383,
"end": 405,
"text": "Melamud et al. (2015a)",
"ref_id": "BIBREF19"
},
{
"start": 1820,
"end": 1841,
"text": "Melamud et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1024,
"end": 1031,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Model Illustration",
"sec_num": "2.3"
},
{
"text": "Our model is closely related to language models, as can be seen in section 2.2 and tables 1 and 2. In particular, it has a lot in common with LSTMbased language models, as both train LSTM neural networks with the objective to predict target words based on their (short and long range) context, and both use techniques, such as negative sampling, to address large vocabulary computational challenges during training (Jozefowicz et al., 2016) . The main difference is that LSTM language models are mainly concerned with optimizing predictions of conditional probabilities for target words given their history, while our model is focused on deriving generally useful representations to whole history-and-future contexts of target words. We follow word2vec's learning framework as it is known to produce high-quality representations for single words. It does so by having t \u2022 c approximate PMI(t, c) rather than log p(t|c).",
"cite_spans": [
{
"start": 415,
"end": 440,
"text": "(Jozefowicz et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Language Models",
"sec_num": "2.4"
},
{
"text": "We intend context2vec's generic context embedding function to be integrated into various more optimized task-specific systems. However, to demonstrate its qualities independently, we address three different types of tasks by the simple means of measuring cosine distances between its embedded representations. Yet, we compare our performance against the state-of-the-art results of highly competitive task-optimized systems on each task. In addition we use AWE as a baseline representing a commonly used generic context representation, which like ours, can represent variable-length contexts with a fixed-size vector. Our evaluation includes the following tasks: sentence completion, lexical substitution and supervised word sense disambiguation (WSD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings",
"sec_num": "3"
},
{
"text": "With the exception of the sentence completion task (MSCC), which comes with its own learning corpus, we used the two billion word ukWaC (Ferraresi et al., 2008) as our learning corpus. To speed-up the training of context2vec, we discarded all sentences that are longer than 64 words, reducing the size of the corpus by \u223c10%. However, we train the embeddings used in the AWE baseline on the full corpus to not penalize it on account of our model. We lower-cased all text and considered any token with fewer than 100 occurrences as an unknown word. This yielded a vocabulary of a little over 180K words for the full corpus, and 160K words for the trimmed version.",
"cite_spans": [
{
"start": 136,
"end": 160,
"text": "(Ferraresi et al., 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning corpus",
"sec_num": "3.1"
},
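A short sketch of this preprocessing (length filtering, lower-casing, and mapping rare tokens to an unknown symbol); the 64-word and 100-count thresholds match the ukWaC setting described above, while the whitespace tokenization and the "<unk>" symbol are simplifications for illustration:

```python
# Sketch of the corpus preprocessing: drop overly long sentences, lower-case,
# and replace tokens below a frequency threshold with <unk>. Whitespace
# tokenization is a simplification.
from collections import Counter

def preprocess(sentences, max_len=64, min_count=100):
    kept = [s.lower().split() for s in sentences]
    kept = [toks for toks in kept if len(toks) <= max_len]
    freq = Counter(tok for toks in kept for tok in toks)
    vocab = {tok for tok, n in freq.items() if n >= min_count}
    sents = [[tok if tok in vocab else "<unk>" for tok in toks] for toks in kept]
    return sents, vocab

# Toy run with a low threshold so the tiny corpus is not all <unk>.
corpus = ["This is a short example sentence .", "Another one ."]
sents, vocab = preprocess(corpus, max_len=64, min_count=1)
print(sents)
```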
{
"text": "context2vec We implemented our model using the Chainer toolkit (Tokui et al., 2015) , and Adam (Kingma and Ba, 2014) for optimization. To speed-up the learning time we used mini-batch training, where only sentences of equal length are assigned to the same batch. We discuss the hyperparameters tuning of our model in section 4.1.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Tokui et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 95,
"end": 116,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Methods",
"sec_num": "3.2"
},
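A sketch of the equal-length bucketing used for mini-batch training, so that no padding is needed inside the LSTMs (batch size and sentences are illustrative):

```python
# Sketch of mini-batching where only sentences of equal length share a batch.
# Batch size and the toy sentences are illustrative.
from collections import defaultdict

def equal_length_batches(sentences, batch_size):
    buckets = defaultdict(list)
    for toks in sentences:
        buckets[len(toks)].append(toks)
    for group in buckets.values():
        for i in range(0, len(group), batch_size):
            yield group[i:i + batch_size]

sents = [["a", "b", "c"], ["d", "e", "f"], ["g", "h"], ["i", "j"]]
for batch in equal_length_batches(sents, batch_size=2):
    print(batch)
```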
{
"text": "AWE We learned word embeddings with the popular word2vec Skip-gram model using standard hyperparameters: 600 dimensions, 10 negative samples, window-size 10 and 3/5 iterations for the ukWaC/MSCC learning corpora, respectively. Then we used a simple average of these embeddings as our AWE context representation. 3 In addition, we experimented with the following variations: (1) ignoring stopwords (2) performing a weighted average of the words in the context using tf-idf weights (3) considering just the 5-word window around the target word instead of the whole sentence. Specifically, in the WSD experiment the context provided for the target words is a full paragraph. Though it could be extended, context2vec is currently not designed to take advantage of such large context and therefore ignores all context out-side of the sentence of the target word. However, for AWE we also experimented with the option of generating the context representation based on the entire paragraph. In all cases, the size (dimensionality) of the AWE context representation was equal to that of context2vec, and the context-to-target and context-to-context similarities were computed using vector cosine between the respective embedding representations, as with context2vec.",
"cite_spans": [
{
"start": 312,
"end": 313,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Methods",
"sec_num": "3.2"
},
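A numpy sketch of the AWE baseline and its variants (plain average, optional stop-word filtering, optional tf-idf weighting, optional window); the embeddings, tf-idf weights and stop-word list below are random placeholders, not trained values:

```python
# Numpy sketch of the AWE baseline: average the (optionally tf-idf weighted)
# embeddings of the context words, optionally restricted to a window around
# the target. Embeddings, weights and the stop-word list are placeholders.
import numpy as np

rng = np.random.default_rng(2)
dim = 600
emb = {w: rng.normal(size=dim) for w in "john submitted a paper yesterday".split()}
tfidf = {w: rng.uniform(0.5, 2.0) for w in emb}
STOP = {"a", "the", "of"}

def awe(tokens, i, window=None, drop_stop=False, weights=None):
    """Average embedding of the context words around position i."""
    idxs = [j for j in range(len(tokens)) if j != i]
    if window is not None:
        idxs = [j for j in idxs if abs(j - i) <= window]
    if drop_stop:
        idxs = [j for j in idxs if tokens[j] not in STOP]
    vecs = [(weights[tokens[j]] if weights else 1.0) * emb[tokens[j]] for j in idxs]
    return np.mean(vecs, axis=0)

toks = "john submitted a paper yesterday".split()
print(awe(toks, i=1, window=5, drop_stop=True, weights=tfidf).shape)  # (600,)
```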
{
"text": "The Microsoft Sentence Completion Challenge (MSCC) (Zweig and Burges, 2011) includes 1,040 items. Each item is a sentence with one word replaced by a gap, and the challenge is to identify the word, out of five choices, that is most meaningful and coherent as the gap-filler. While there is no official dev/test split for this dataset, we followed previous work (Mirowski and Vlachos, 2015) and used the first 520 sentences for parameter tuning and the rest as the test set. 4",
"cite_spans": [
{
"start": 51,
"end": 75,
"text": "(Zweig and Burges, 2011)",
"ref_id": "BIBREF35"
},
{
"start": 361,
"end": 389,
"text": "(Mirowski and Vlachos, 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Completion Challenge",
"sec_num": "3.3"
},
{
"text": "The MSCC includes a learning corpus of 50 million words. To use this corpus for training our models, we first discarded all sentences longer than 128 words, which resulted in a negligible reduction of \u223c 1% in the size of the corpus. Then, we converted all text to lowercase and considered all words with frequency less than 3 as unknown, yielding a vocabulary of about 100K word types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Completion Challenge",
"sec_num": "3.3"
},
{
"text": "Finally, as the gap-filler, we simply choose the word whose target word embedding is the most similar to the embedding of the given context using the target-to-context similarity metric. We report the accuracy achieved in this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Completion Challenge",
"sec_num": "3.3"
},
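The gap-filler selection thus reduces to an argmax over the five candidates' target embeddings, scored by cosine against the context embedding (the t2c metric); a sketch with placeholder vectors and a made-up candidate list:

```python
# Sketch of gap-filler selection: pick the candidate whose target embedding
# has the highest cosine similarity to the context embedding (t2c metric).
# Candidate words and embeddings are placeholders.
import numpy as np

rng = np.random.default_rng(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

candidates = ["dog", "violin", "letter", "storm", "promise"]
target_emb = {w: rng.normal(size=600) for w in candidates}
context_vec = rng.normal(size=600)  # embedding of the gapped sentence context

best = max(candidates, key=lambda w: cosine(target_emb[w], context_vec))
print(best)
```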
{
"text": "The lexical substitution task requires finding a substitute word for a given target word in sentential context. The difference between this and the sentence completion task is that the substitute word needs not only to be coherent with the sentential context, but also preserve the meaning of the original word in that context. Most recent works evaluated their performance on a ranking variant of the lexical substitution task, which uses predefined candidate lists provided with the gold standard, and requires to rank them considering the sentential context. Performance in this task is reported with generalized average precision (GAP). 5 As in MSCC, in this evaluation we rank lexical substitutes according to the measured similarity between their target word embeddings and the embedding of the given sentential context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Substitution Task",
"sec_num": "3.4"
},
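A sketch of this ranking evaluation: the predefined substitute list is ordered by target-to-context cosine and then scored against the gold substitute weights. The GAP computation below follows one common formulation of generalized average precision (not taken from the paper), and all vectors and gold weights are toy values:

```python
# Sketch of lexical substitution ranking: order the predefined candidate list
# by target-to-context cosine, then score with generalized average precision
# (GAP) against gold substitute weights. The GAP formula follows one common
# formulation; vectors and gold annotator counts are toy values.
import numpy as np

rng = np.random.default_rng(4)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gap(ranked_weights):
    """Generalized average precision of a ranking, given gold weights per rank."""
    x = np.asarray(ranked_weights, dtype=float)
    ideal = np.sort(x)[::-1]
    def score(v):
        cum_avg = np.cumsum(v) / np.arange(1, len(v) + 1)
        return float(((v > 0) * cum_avg).sum())
    return score(x) / score(ideal)

substitutes = ["bright", "clever", "vivid", "shiny"]
gold = {"bright": 3, "vivid": 1, "clever": 0, "shiny": 0}  # annotator counts
target_emb = {w: rng.normal(size=600) for w in substitutes}
context_vec = rng.normal(size=600)

ranking = sorted(substitutes, key=lambda w: cosine(target_emb[w], context_vec),
                 reverse=True)
print(ranking, gap([gold[w] for w in ranking]))
```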
{
"text": "We used two lexical substitution datasets in our experiments. The first is the dataset introduced in the lexical substitution task of SemEval 2007 (Mc-Carthy and Navigli, 2007) , denoted LST-07, split into 300 dev sentences and 1,710 test sentences. The second is a more recent 'all-words ' dataset (Kremer et al., 2014) , denoted LST-14, with over 15K target word instances. It comes with a predefined 35%/65% split. We used the smaller set as the dev set for parameter tuning and the larger one as our test set.",
"cite_spans": [
{
"start": 134,
"end": 161,
"text": "SemEval 2007 (Mc-Carthy and",
"ref_id": null
},
{
"start": 162,
"end": 176,
"text": "Navigli, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 299,
"end": 320,
"text": "(Kremer et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Substitution Task",
"sec_num": "3.4"
},
{
"text": "In supervised WSD tasks, the goal is to determine the correct sense of words in context, based on a manually tagged training set. To classify a test word instance in context, we consider all of the context word units 300 LSTM hidden/output units 600 MLP input units 1200 MLP hidden units 1200 sentential context units 600 target word units 600 negative samples 10 Table 5 : context2vec hyperparameters tagged instances of the same word lemma in the training set, and find the instance whose context embedding is the most similar to the context embedding of the test instance using the context-tocontext similarity metric. Then, we use the tagged senses 6 of that instance. We note that this is essentially the simplest form of a k-nearest-neighbor algorithm, with k = 1. As our supervised WSD dataset we used the Senseval-3 lexical sample dataset (Mihalcea et al., 2004) , denoted SE-3, which includes 7,860 train and 3,944 test instances. We used the training set for parameter tuning and report accuracy results on the test set.",
"cite_spans": [
{
"start": 847,
"end": 870,
"text": "(Mihalcea et al., 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supervised WSD",
"sec_num": "3.5"
},
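The supervised WSD procedure above is a 1-nearest-neighbor classifier over context embeddings (the c2c metric); a sketch with placeholder vectors and made-up sense labels:

```python
# Sketch of the 1-nearest-neighbor WSD scheme: assign the test instance the
# tagged sense(s) of the training instance (same lemma) whose context
# embedding is most cosine-similar. Embeddings and sense labels are toy values.
import numpy as np

rng = np.random.default_rng(5)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# (context embedding, tagged senses) pairs for training instances of one lemma.
train = [(rng.normal(size=600), {"add%1"}),
         (rng.normal(size=600), {"add%2"}),
         (rng.normal(size=600), {"add%1", "add%3"})]
test_ctx = rng.normal(size=600)

nearest = max(train, key=lambda item: cosine(item[0], test_ctx))
print(nearest[1])  # predicted sense set for the test instance
```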
{
"text": "The hyperparameters used in our reported experiments with context2vec are summarized in Table 5. In preliminary development experiments, we used only 200 units for representing sentential contexts, and then saw significant improvement in results, when moving to 600 units. Increasing the representation size to 1,000 units did not seem to further improve results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Experiments",
"sec_num": "4.1"
},
{
"text": "With mini-batches of 1,000 sentences at a time, we started by training our models with a single iteration over the 2-billion-word ukWaC corpus. This took \u223c30 hours, using a single Tesla K80 GPU. For the smaller 50-million-word MSCC learning corpus, a full iteration with a batch size of 100 took only about 3 hours. For this corpus, we started with 5 training iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Experiments",
"sec_num": "4.1"
},
{
"text": "To explore the rare-word bias effect of the vocabulary smoothing factor \u03b1, we varied its value in our development experiments. The results appear in Table 6 on the left hand side. Since we preferred to keep our model as simple as possible, based on these results, we chose the single Table 6 : Development set results. iters+ denotes the best model found when running more training iterations with \u03b1 = 0.75. AWE config: W5/sent denotes using a 5-word-window/full-sentence, and stop/tf-idf denotes ignoring stop words or using tf-idf weights, respectively. value \u03b1 = 0.75 for all of our test sets experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 6",
"ref_id": null
},
{
"start": 284,
"end": 291,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Development Experiments",
"sec_num": "4.1"
},
{
"text": "With this choice, we also tried training our models with more iterations and found that with 3 iterations over the ukWaC corpus and 10 iterations over the MSCC corpus we can obtain some further improvement in results, see iters+ in Table 6 . The results of our experiments with all of the AWE variants, described in section 3.2, appear on the right hand side of Table 6 . For brevity, we report only the best and worst configuration for each benchmark. As can be seen, in two out of four benchmarks, a window of 5 words yields better performance than a full sentential context, suggesting that the AWE representation is not very successful in leveraging effectively long range information. Removing stop words or using tf-idf weights improves performance significantly. However, the results are still much lower than the ones achieved with context2vec. To raise the bar, in each test-set experiment we used the best AWE configuration found for the corresponding development-set experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 6",
"ref_id": null
},
{
"start": 362,
"end": 369,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Development Experiments",
"sec_num": "4.1"
},
{
"text": "The test set results are summarized in Table 7 . First, we see that context2vec substantially outperforms AWE across all benchmarks. This suggests that our context representations are much better optimized for capturing sentential context information than AWE, at least for these tasks. Further, we see that with context2vec we either surpass or almost reach the state-of-the-art on all benchmarks. This is quite impressive, considering that all we did was measure cosine distances between context2vec's representations to compete with more complex and task-optimized systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Test Sets Results",
"sec_num": "4.2"
},
{
"text": "More specifically, in the sentence completion task (MSCC) the prior state-of-the-art result is due to Mikolov et al. (2013a) and iters+ denotes the model that was trained with more iterations. S-1/S-2 stand for the best/secondbest prior result reported for the benchmark.",
"cite_spans": [
{
"start": 102,
"end": 124,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Sets Results",
"sec_num": "4.2"
},
{
"text": "weighted combination of scores from two different models: a recurrent neural network language model, and a Skip-gram model. The second-best result is due to Liu et al. (2015) and is based on word embeddings that are learned based on both corpora and structured knowledge resources, such as WordNet. context2vec outperforms both of them. In the lexical substitution tasks, the best prior results are due to Melamud et al. (2015a) . 7 They employ an exemplar-based approach that requires keeping thousands of exemplar contexts for every target word type. The second-best is due to Melamud et al. (2015b) . They propose a simple approach, but it requires dependency-parsed text as input. context2vec achieves comparable results with these works, using the same learning corpus. In the Senseval-3 supervised WSD task, the best result is due to Ando (2006) and the second-best to Rothe and Sch\u00fctze (2015) . context2vec is almost on par with these results, which were achieved with dedicated feature engineering and supervised machine learning models.",
"cite_spans": [
{
"start": 157,
"end": 174,
"text": "Liu et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 406,
"end": 428,
"text": "Melamud et al. (2015a)",
"ref_id": "BIBREF19"
},
{
"start": 431,
"end": 432,
"text": "7",
"ref_id": null
},
{
"start": 579,
"end": 601,
"text": "Melamud et al. (2015b)",
"ref_id": "BIBREF20"
},
{
"start": 875,
"end": 899,
"text": "Rothe and Sch\u00fctze (2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Sets Results",
"sec_num": "4.2"
},
{
"text": "Substitute vectors (Yuret, 2012) represent contexts as a probabilistic distribution over the potential gap-filler words for the target slot, pruned to its top-k most probable words. While using this representation showed interesting potential (Yatbaz et al., 2012; Melamud et al., 2015a) , it can currently be generated efficiently only with n-gram language models and hence is limited to fixed-size context windows. It is also high dimensional and sparse, in contrast to our proposed representations. Syntactic dependency context embeddings have been proposed recently (Levy and Goldberg, 2014a; Bansal et al., 2014) . They depend on the availability of a high-quality dependency parser, and can be viewed as a 'bag-of-dependencies' rather than a single representation for the entire sentential context. However, we believe that incorporating such dependency-based information in our model is an interesting future direction.",
"cite_spans": [
{
"start": 19,
"end": 32,
"text": "(Yuret, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 243,
"end": 264,
"text": "(Yatbaz et al., 2012;",
"ref_id": "BIBREF32"
},
{
"start": 265,
"end": 287,
"text": "Melamud et al., 2015a)",
"ref_id": "BIBREF19"
},
{
"start": 570,
"end": 596,
"text": "(Levy and Goldberg, 2014a;",
"ref_id": "BIBREF13"
},
{
"start": 597,
"end": 617,
"text": "Bansal et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "A couple of recent works extended word2vec's CBOW by replacing its internal context representation. Ling et al. (2015b) proposed a continuous window, which is a simple linear projection of the context window embeddings into a low dimensional vector. Ling et al. (2015a) proposed 'CBOW with attention', which is used for finding the relevant features in a context window. In contrast to our model, both approaches confine the context to a fixed-size window. Furthermore, they limit their scope to using these context representations only internally to improve the learning of target words embeddings, rather than evaluate the benefit of using them directly in NLP tasks, as we do. represent words in context using bidirectional LSTMs and multilingual supervision. In contrast, our model is focused on representing the context alone. Yet, as shown in our lexical substitution and word sense disambiguation evaluations, it can easily be used for modeling the meaning of words in context as well.",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "Ling et al. (2015b)",
"ref_id": "BIBREF16"
},
{
"start": 250,
"end": 269,
"text": "Ling et al. (2015a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Finally, there is considerable work on using recurrent neural networks to represent word sequences, such as phrases or sentences (Socher et al., 2011; Kiros et al., 2015) . We note that the techniques used for learning sentence representations have much in common with those we use for sentential context representations. Yet, sentential context representations aim to reflect the information in the sentence only inasmuch as it is relevant to the target slot. Specifically, different target positions in the same sentence can yield completely different context representations. In contrast, sentence representations aim to reflect the entire contents of the sentence.",
"cite_spans": [
{
"start": 129,
"end": 150,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF28"
},
{
"start": 151,
"end": 170,
"text": "Kiros et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We presented context2vec, a neural model that learns a generic embedding function for variablelength contexts of target words. We demonstrated that it can be trained in a reasonable time over billions of words and generate high quality context representations, which substantially outperform the traditional average-of-word-embeddings approach on three different tasks. As such, we hypothesize that it could contribute to various NLP systems that model context. Specifically, semisupervised systems may benefit from using our model, as it may carry more useful information learned from large corpora, than individual pretrained word embeddings do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Potential",
"sec_num": "6"
},
{
"text": "We pad every input sentence with special BOS and EOS words in positions 0 and n + 1, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We made some preliminary experiments using word embeddings learned with word2vec's CBOW model, instead of Skip-gram, but this yielded worse results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Mikolov et al. (2013a) did not specify their dev/test split and all other works reported results only on the entire dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Melamud et al. (2015a) for more of their setting details, which we followed here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There's one or more senses assigned to a each instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Szarvas et al. (2013) achieved almost the same result, but with a supervised model, not directly compared to ours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank our anonymous reviewers for their useful comments. This work was partially supported by the Israel Science Foundation grant 880/12 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Applying alternating structure optimization to word sense disambiguation",
"authors": [
{
"first": "Ando",
"middle": [],
"last": "Rie Kubota",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "77--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando. 2006. Applying alternating struc- ture optimization to word sense disambiguation. In Proceedings of the Tenth Conference on Compu- tational Natural Language Learning, pages 77-84. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A unified model for word sense representation and disambiguation",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Introducing and evaluating ukwac, a very large web-derived corpus of English",
"authors": [
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 4th Web as Corpus Workshop (WAC-4)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukwac, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exploring the limits of language modeling",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.02410"
]
},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural context embeddings for automatic discovery of word senses",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "K\u00e5geb\u00e4ck",
"suffix": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "Johansson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael K\u00e5geb\u00e4ck, Fredrik Johansson, Richard Johans- son, and Devdatt Dubhashi. 2015. Neural context embeddings for automatic discovery of word senses. In Proceedings of NAACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning to represent words in context with multilingual supervision",
"authors": [
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Workshop in ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuya Kawakami and Chris Dyer. 2016. Learning to represent words in context with multilingual super- vision. In Workshop in ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Skip-thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ruslan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Proceedings of NIPS.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "What substitutes tell us-analysis of an all-words lexical substitution corpus",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "Kremer",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard Kremer, Katrin Erk, Sebastian Pad\u00f3, and Ste- fan Thater. 2014. What substitutes tell us-analysis of an all-words lexical substitution corpus. In Pro- ceedings of EACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01360"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recog- nition. arXiv preprint arXiv:1603.01360.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural word embeddings as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embeddings as implicit matrix factorization. In Pro- ceedings of NIPS.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Not all contexts are created equal: Better word representations with variable attention",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Chu-Cheng",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Lin Chu-Cheng, Yulia Tsvetkov, and Silvio Amir. 2015a. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Two/too simple adaptations of word2vec for syntax problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan Black, and Isabel Tran- coso. 2015b. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of NAACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning semantic word embeddings based on ordinal knowledge constraints",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. Proceed- ings of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2007 task 10: English lexical substitution task",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy and Roberto Navigli. 2007. Semeval- 2007 task 10: English lexical substitution task. In Proceedings of SemEval.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Modeling word meaning in context with substitute vectors",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Ido Dagan, and Jacob Goldberger. 2015a. Modeling word meaning in context with sub- stitute vectors. In Proceedings of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A simple word embedding model for lexical substitution",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Workshop on Vector Space Modeling for NLP (VSM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Omer Levy, and Ido Dagan. 2015b. A simple word embedding model for lexical substitu- tion. In Proceedings of Workshop on Vector Space Modeling for NLP (VSM).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The role of context types and dimensionality in learning word embeddings",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, David McClosky, Siddharth Patward- han, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embed- dings. In Proceedings of NAACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The senseval-3 english lexical sample task",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"Anatolievich"
],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea, Timothy Anatolievich Chklovski, and Adam Kilgarriff. 2004. The senseval-3 english lex- ical sample task. ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Proceedings of NIPS.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dependency recurrent neural language models for sentence completion",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Mirowski",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1507.01193"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Mirowski and Andreas Vlachos. 2015. Depen- dency recurrent neural language models for sentence completion. arXiv preprint arXiv:1507.01193.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. Autoex- tend: Extending word embeddings to embeddings for synsets and lexemes. Proceedings of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Semi-supervised recursive autoencoders for predicting sentiment distributions",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predict- ing sentiment distributions. In Proceedings of EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning to rank lexical substitutions",
"authors": [
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "R\u00f3bert",
"middle": [],
"last": "Busa-Fekete",
"suffix": ""
},
{
"first": "Eyke",
"middle": [],
"last": "H\u00fcllermeier",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gy\u00f6rgy Szarvas, R\u00f3bert Busa-Fekete, and Eyke H\u00fcllermeier. 2013. Learning to rank lexical sub- stitutions. In Proceedings of EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Chainer: a next-generation open source framework for deep learning",
"authors": [
{
"first": "Seiya",
"middle": [],
"last": "Tokui",
"suffix": ""
},
{
"first": "Kenta",
"middle": [],
"last": "Oono",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Hido",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Clayton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Workshop on Machine Learning Systems (Learn-ingSys) in NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (Learn- ingSys) in NIPS.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Word representations: A simple and general method for semisupervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semisupervised learning. In Proceedings of ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning syntactic categories using paradigmatic representations of word context",
"authors": [
{
"first": "Mehmet",
"middle": [
"Ali"
],
"last": "Yatbaz",
"suffix": ""
},
{
"first": "Enis",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehmet Ali Yatbaz, Enis Sert, and Deniz Yuret. 2012. Learning syntactic categories using paradigmatic representations of word context. In Proceedings EMNLP.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "FASTSUBS: An efficient and exact procedure for finding the most likely lexical substitutes based on an n-gram language model",
"authors": [
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2012,
"venue": "Signal Processing Letters",
"volume": "19",
"issue": "11",
"pages": "725--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deniz Yuret. 2012. FASTSUBS: An efficient and ex- act procedure for finding the most likely lexical sub- stitutes based on an n-gram language model. Signal Processing Letters, IEEE, 19(11):725-728.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "End-to-end learning of semantic role labeling using recurrent neural networks",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural net- works. In Proceedings of ACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The microsoft research sentence completion challenge",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J",
"C"
],
"last": "Burges",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Zweig and Christopher JC Burges. 2011. The microsoft research sentence completion challenge. Technical report, Technical Report MSR-TR-2011- 129, Microsoft.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "word2vec and context2vec architectures.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "A 2D illustration of context2vec's embedded space and similarity metrics. Triangles and circles denote sentential context embeddings and target word embeddings, respectively.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"text": "Sentential Context Closest target words This [ ] is due item, fact-sheet, offer, pack, card This [ ] is due not just to mere luck offer, suggestion, announcement, item, prize This [ ] is due not just to mere luck, award, prize, turnabout, offer, gift but to outstanding work and dedication [ ] is due not just to mere luck,it, success, this, victory, prize-money but to outstanding work and dedicationTable 1: Closest target words to various sentential contexts, illustrating context2vec's sensitivity to long range dependencies, and both sides of the target word.",
"content": "<table><tr><td>\u03b1</td><td>John was [ ] last year</td></tr><tr><td colspan=\"2\">0.25 born, late, married, out, back</td></tr><tr><td colspan=\"2\">0.50 born, back, married, released, elected</td></tr><tr><td colspan=\"2\">0.75 born, interviewed, re-elected</td></tr><tr><td colspan=\"2\">1.00 starstruck, goal-less, unwed</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Closest target words to a given sentential context using different \u03b1 values in context2vec.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Query Furthermore our work in Uganda and Romania [ adds ] a wider perspective. ... themes in art have a fascination , since they [ add ] a subject interest context2vec to a viewer's enjoyment of artistic qualities. closest Richard is joining us every month to pass on tips , ideas and news from the world of horticulture , and [ add ] a touch of humour too ... the foreign ministers said political and economic reforms in Poland and Hungary AWE had made considerable progress but [ added ] : the process remains fragile ... closest ... Germany had announced the solution as a humanitarian act by the government, [ adding ] that it hoped Bonn in future would run its embassies in normal manner...",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "An example for a given 'query' context and the two closest contexts to it, as 'retrieved' by context2vec similarity and AWE similarity.",
"content": "<table><tr><td colspan=\"4\">context2vec word2vec-w2 word2vec-w10 context2vec</td><td>word2vec-w2</td><td>word2vec-w10</td></tr><tr><td/><td>flying</td><td/><td/><td>syntactically</td><td/></tr><tr><td>gliding</td><td>flew</td><td>flew</td><td>semantically</td><td colspan=\"2\">grammatically semantically</td></tr><tr><td>sailing</td><td>fly</td><td>fly</td><td>lexically</td><td colspan=\"2\">phonologically grammatically</td></tr><tr><td>diving</td><td>aerobatics</td><td>aeroplane</td><td colspan=\"2\">grammatically semantically</td><td>syntax</td></tr><tr><td>flown</td><td>low-flying</td><td>flown</td><td colspan=\"3\">phonologically ungrammatical syntactic</td></tr><tr><td>travelling</td><td>flown</td><td>bi-plane</td><td>topologically</td><td>lexically</td><td>lexically</td></tr><tr><td/><td>san</td><td/><td/><td>prize</td><td/></tr><tr><td>agios</td><td>francisco</td><td>francisco</td><td>prizes</td><td>prizes</td><td>prizes</td></tr><tr><td>aghios</td><td>diego</td><td>diego</td><td>award</td><td>prize-winner</td><td>winner</td></tr><tr><td>los</td><td>fransisco</td><td>fransisco</td><td>trophy</td><td>prizewinner</td><td>winners</td></tr><tr><td>tanjung</td><td>los</td><td>bernardino</td><td>medal</td><td>prize</td><td>prizewinner</td></tr><tr><td>puerto</td><td>obispo</td><td>los</td><td>prizewinner</td><td>prizewinners</td><td>prize.</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Top-5 closest target words to a few given target words.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"text": "Results on test sets. c2v is context2vec",
"content": "<table><tr><td>and was achieved by a</td></tr></table>",
"num": null,
"html": null
}
}
}
}