{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:23.106166Z"
},
"title": "Word Embeddings as Tuples of Feature Probabilities",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Bhat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Theory & Algorithmic Research (C-STAR",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Alok",
"middle": [],
"last": "Debnath",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Souvik",
"middle": [],
"last": "Bannerjee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we provide an alternate perspective on word representations, by reinterpreting the dimensions of the vector space of a word embedding as a collection of features. In this reinterpretation, every component of the word vector is normalized against all the word vectors in the vocabulary. This idea now allows us to view each vector as an n-tuple (akin to a fuzzy set), where n is the dimensionality of the word representation and each element represents the probability of the word possessing a feature. Indeed, this representation enables the use fuzzy set theoretic operations, such as union, intersection and difference. Unlike previous attempts, we show that this representation of words provides a notion of similarity which is inherently asymmetric and hence closer to human similarity judgements. We compare the performance of this representation with various benchmarks, and explore some of the unique properties including function word detection, detection of polysemous words, and some insight into the interpretability provided by set theoretic operations.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we provide an alternate perspective on word representations, by reinterpreting the dimensions of the vector space of a word embedding as a collection of features. In this reinterpretation, every component of the word vector is normalized against all the word vectors in the vocabulary. This idea now allows us to view each vector as an n-tuple (akin to a fuzzy set), where n is the dimensionality of the word representation and each element represents the probability of the word possessing a feature. Indeed, this representation enables the use fuzzy set theoretic operations, such as union, intersection and difference. Unlike previous attempts, we show that this representation of words provides a notion of similarity which is inherently asymmetric and hence closer to human similarity judgements. We compare the performance of this representation with various benchmarks, and explore some of the unique properties including function word detection, detection of polysemous words, and some insight into the interpretability provided by set theoretic operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embedding is one of the most crucial facets of Natural Language Processing (NLP) research. Most non-contextualized word representations aim to provide a distributional view of lexical semantics, known popularly by the adage \"a word is known by the company it keeps\" (Firth, 1957) . Popular implementations of word embeddings such as word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) aim to represent words as embeddings in a vector space. These embeddings are trained to be oriented such that vectors with higher similarities have higher dot products when normalized. Some of the most common methods of intrinsic evaluation of word embeddings include similarity, analogy and compositionality. While similarity is computed using the notion of dot product, analogy and compositionality use vector addition.",
"cite_spans": [
{
"start": 271,
"end": 284,
"text": "(Firth, 1957)",
"ref_id": "BIBREF7"
},
{
"start": 347,
"end": 370,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF18"
},
{
"start": 381,
"end": 406,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, distributional representations of words over vector spaces have an inherent lack of interpretablity (Goldberg and Levy, 2014) . Furthermore, due to the symmetric nature of the vector space operations for similarity and analogy, which are far from human similarity judgements (Tversky, 1977) . Other word representations tried to provide asymmetric notions of similarity in a noncontextualized setting, including Gaussian embeddings (Vilnis and McCallum, 2014) and word similarity by dependency (Gawron, 2014) . However, these models could not account for the inherent compositionality of word embeddings (Mikolov et al., 2013b) . Moreover, while work has been done on providing entailment for vector space models by entirely reinterpreting word2vec as an entailment based semantic model (Henderson and Popa, 2016) , it requires an external notion of compositionality. Finally, word2vec and GloVe, as such, are meaning conflation deficient, meaning that a single word with all its possible meanings is represented by a single vector (Camacho-Collados and Pilehvar, 2018) . Sense representation models in noncontextualized representations such as multi-sense skip gram, by performing joint clustering for local word neighbourhood. However, these sense representations are conditioned on non-disambiguated senses in the context and require additional conditioning on the intended senses (Li and Jurafsky, 2015) .",
"cite_spans": [
{
"start": 109,
"end": 134,
"text": "(Goldberg and Levy, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 284,
"end": 299,
"text": "(Tversky, 1977)",
"ref_id": "BIBREF28"
},
{
"start": 441,
"end": 468,
"text": "(Vilnis and McCallum, 2014)",
"ref_id": "BIBREF29"
},
{
"start": 503,
"end": 517,
"text": "(Gawron, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 613,
"end": 636,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF19"
},
{
"start": 796,
"end": 822,
"text": "(Henderson and Popa, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 1041,
"end": 1078,
"text": "(Camacho-Collados and Pilehvar, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1393,
"end": 1416,
"text": "(Li and Jurafsky, 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we aim to answer the question: Can a single word representation mechanism account for lexical similarity and analogy, compositionality, lexical entailment and be used to detect and resolve polysemy? We find that by performing column-wise normalization of word vectors trained using the word2vec skip-gram negative sampling regime, we can indeed represent all the above characteristics in a single representation. We interpret a column wise normalized word representation. We now treat these representations as fuzzy sets and can therefore use fuzzy set theoretic operations such as union, intersection, difference, etc. while also being able to succinctly use asymmetric notions of similarity such as K-L divergence and cross entropy. Finally, we show that this representation can highlight syntactic features such as function words, use their properties to detect polysemy, and resolve it qualitatively using the inherent compositionality of this representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to make these experiments and their results observable in general, we have provided the code which can be used to run these operations. The code can be found at https://github. com/AlokDebnath/fuzzy_embeddings. The code also has a working command line interface where users can perform qualitative assessments on the set theoretic operations, similarity, analogy and compositionality which are discussed in the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The representation of words using logical paradigms such as fuzzy logic, tensorial representations and other probabilistic approaches have been attempted before. In this section, we uncover some of these representations in detail. Lee (1999) introduced measures of distributional similarity to improve the probability estimation for unseen occurrences. The measure of similarity of distributional word clusters was based on multiple measures including Euclidian distance, cosine distance, Jaccard's Coefficient, and asymmetric measures like \u03b1-skew divergence.",
"cite_spans": [
{
"start": 231,
"end": 241,
"text": "Lee (1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Bergmair (2011) used a fuzzy set theoretic view of features associated with word representations. While these features were not adopted from the vector space directly, it presents a unique perspective of entailment chains for reasoning tasks. Their analysis of inference using fuzzy representations provides interpretability in reasoning tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Grefenstette (2013) presents a tenosrial calculus for word embeddings, which is based on compositional operators which uses vector representation of words to create a compositional distributional model of meaning. By providing a categorytheoretic framework, the model creates an inherently compositional structure based on distributional word representations. However, they showed that in this framework, quantifiers could not be expressed. Herbelot and Vecchi (2015) refers to a notion of general formal semantics inferred from a distributional representation by creating relevant ontology based on the existing distribution. This mapping is therefore from a standard distributional model to a set-theoretic model, where dimensions are predicates and weights are generalised quantifiers. Copestake (2016, 2017) developed functional distributional semantics, which is a probabilistic framework based on model theory. The framework relies on differentiating and learning entities and predicates and their relations, on which Bayesian inference is performed. This representation is inherently compositional, context dependent representation.",
"cite_spans": [
{
"start": 441,
"end": 467,
"text": "Herbelot and Vecchi (2015)",
"ref_id": "BIBREF12"
},
{
"start": 789,
"end": 811,
"text": "Copestake (2016, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we provide a basic background of fuzzy sets including some fuzzy set operations, reinterpreting sets as tuples in a universe of finite elements and showing some set operations. We also cover the computation of fuzzy entropy as a Bernoulli random variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
{
"text": "A fuzzy set is defined as a set with probabilistic set membership. Therefore, a fuzzy set is denoted as A = {(x, \u00b5 A (x)), x \u2208 \u2126}, where x is an element of set A with a probability \u00b5 A (x) such that 0 \u2264 \u00b5 A \u2264 1, and \u2126 is the universal set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
{
"text": "If our universe \u2126 is finite and of cardinality n, our notion of probabilistic set membership is constrained to a maximum n values. Therefore, each fuzzy set A can be represented as an n-tuple, with each member of the tuple A[i] being the probability of the ith member of \u2126. We can rewrite a fuzzy set as an n-tuple A = (\u00b5 A (x), \u2200x \u2208 \u2126), such that |A | = |\u2126|. In this representation, A[i] is the probability of the ith member of the tuple A. We define some common set operations in terms of this representation as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
{
"text": "(A \u2229 B)[i] \u2261 A[i] \u00d7 B[i] (set intersection) (A \u222a B)[i] \u2261 A[i] + B[i] \u2212 A[i] \u00d7 B[i] (set union) (A B)[i] \u2261 max(1, min(0, A[i] + B[i])) (disjoint union) (\u00acA)[i] \u2261 1 \u2212 A[i] (complement) (A \\ B)[i] \u2261 A[i] \u2212 min(A[i], B[i]) (set difference) (A \u2286 B) \u2261 \u2200x \u2208 \u2126 : \u00b5A(x) \u2264 \u00b5B(x) (set inclusion) |A| \u2261 i\u2208\u2126 \u00b5A(i) (cardinality)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
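The tuple operations above translate directly into element-wise array operations. The following is a minimal sketch of these definitions over numpy arrays (it is not taken from the paper's released code); A and B are assumed to be length-n arrays of membership probabilities in [0, 1].

```python
import numpy as np

def f_intersection(A, B):     # (A ∩ B)[i] = A[i] * B[i]
    return A * B

def f_union(A, B):            # (A ∪ B)[i] = A[i] + B[i] - A[i] * B[i]
    return A + B - A * B

def f_disjoint_union(A, B):   # (A ⊔ B)[i] = A[i] + B[i], clipped into [0, 1]
    return np.clip(A + B, 0.0, 1.0)

def f_complement(A):          # (¬A)[i] = 1 - A[i]
    return 1.0 - A

def f_difference(A, B):       # (A \ B)[i] = A[i] - min(A[i], B[i])
    return A - np.minimum(A, B)

def f_included(A, B):         # A ⊆ B  iff  A[i] <= B[i] for every i
    return bool(np.all(A <= B))

def f_cardinality(A):         # |A| = sum_i A[i]
    return float(A.sum())
```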
{
"text": "The notion of entropy in fuzzy sets is an extrapolation of Shannon entropy from a single variable on the entire set. Formally, the fuzzy entropy of a set S is a measure of the uncertainty of the elements belonging to the set. The possibility of a member x belonging to the set S is a random variable X S i which is true with probability (p S i ) and f alse with probability (1 \u2212 p S i ). Therefore, X S i is a Bernoulli random variable. In order to compute the entropy of a fuzzy set, we sum the entropy values of each X S i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
{
"text": "H(A) \u2261 i H(X A i ) \u2261 i \u2212p A i ln p A i \u2212 (1 \u2212 p A i ) ln(1 \u2212 p A i ) \u2261 i \u2212A[i] ln A[i] \u2212 (1 \u2212 A[i]) ln(1 \u2212 A[i])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
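A minimal sketch of the fuzzy entropy above as a sum of per-feature Bernoulli entropies; the epsilon clipping is our own implementation detail to avoid log(0) for memberships of exactly 0 or 1, not part of the definition.

```python
import numpy as np

def fuzzy_entropy(A, eps=1e-12):
    """H(A) = sum_i [ -A[i] ln A[i] - (1 - A[i]) ln(1 - A[i]) ]."""
    p = np.clip(A, eps, 1.0 - eps)  # guard against ln(0)
    return float(np.sum(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p)))
```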
{
"text": "This formulation will be useful in section 4.4 where we discuss two asymmetric measures of similarity, cross-entropy and K-L divergence, which can be seen as a natural extension of this formulation of fuzzy entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Fuzzy Sets and Fuzzy Logic",
"sec_num": "3"
},
{
"text": "In this section, we use the mathematical formulation above to reinterpret word embeddings. We first show how these word representations are created, then detail the interpretation of each of the set operations with some examples. We also look into some measures of similarity and their formulation in this framework. All examples in this section have been taken using the Google News Negative 300 vectors 1 . We used these gold standard vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Operations",
"sec_num": "4"
},
{
"text": "We start by converting the skip-gram negative sample word vectors into a tuple of feature probabilities. In order to construct a tuple of features representation in R n , we consider that the projection of a vector v onto a dimension i is a function of its probability of possessing the feature associated with that dimension. We compute the conversion from a word vector to a tuple of features by first exponentiating the projection of each vector along each direction, then averaging it over that feature for the entire vocabulary size, i.e. column-wise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Tuple of Feature Probabilities",
"sec_num": "4.1"
},
{
"text": "vexp[i] \u2261 exp v[i] v[i] \u2261 vexp[i] w\u2208VOCAB exp wexp[i]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Tuple of Feature Probabilities",
"sec_num": "4.1"
},
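A minimal sketch of this conversion, assuming `vectors` is the (|VOCAB| x n) matrix of word2vec embeddings; each entry is exponentiated and then normalized over its column (feature) across the whole vocabulary. Subtracting the per-column maximum before exponentiating is a standard numerical-stability trick and does not change the result.

```python
import numpy as np

def to_feature_probabilities(vectors):
    """Column-wise softmax: rows are words, columns are features; the output entry
    [w, i] is the probability of word w possessing feature i, normalized over the
    entire vocabulary (so every column sums to 1)."""
    shifted = vectors - vectors.max(axis=0, keepdims=True)  # numerical stability only
    exp = np.exp(shifted)
    return exp / exp.sum(axis=0, keepdims=True)
```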
{
"text": "This normalization then produces a tuple of probabilities associated with each feature (corresponding to the dimensions of R n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Tuple of Feature Probabilities",
"sec_num": "4.1"
},
{
"text": "In line with our discussion from 3, this tuple of probabilities is akin to our representation of a fuzzy set. Let us consider the word v, and its corresponding n-dimensional word vector v. The projection of v on a dimension i normalized (as shown above) to be interpreted as if this dimension i were a property, what is probability that v would possess that property?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Tuple of Feature Probabilities",
"sec_num": "4.1"
},
{
"text": "In word2vec, words are distributed in a vector space of a particular dimensionality. Our representation attempts to provide some insight into how the arrangement of vectors provides insight into the properties they share. We do so by considering a function of the projection of a word vector onto a dimension and interpreting as a probability. This allows us an avenue to explore the relation between words in relation to the properties they share. It also allows us access to the entire arsenal of set operations, which are described below in section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Tuple of Feature Probabilities",
"sec_num": "4.1"
},
{
"text": "Now that word vectors can be represented as tuples of feature probabilities, we can apply fuzzy set theoretic operations in order to ascertain the veracity of the implementation. We show qualitative examples of the set operations in this subsection, and the information they capture. Throughout this subsection, we follow the following notation: For any two words w 1 , w 2 \u2208 VOCAB,\u0175 1 and\u0175 2 represents R RV VR \u222aV risen cashew wavelengths yellowish flower capita risen ultraviolet whitish red peaked soared purple aquamarine stripes declined acuff infrared roans flowers increased rafters yellowish bluish green rises equalled pigment greenish garlands Table 1 : An example of feature union. Rose is represented by R and Violet by V . We see here that while the word rose and violet have different meanings and senses, the union R\u222aV captures the sense of the flower as well as of colours, which are the senses common to these two words. We list words closest to the given word in the table. Closeness measured by cosine similarity for word2vec and cross-entropy-similarity for our vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 693,
"text": "\u222aV risen cashew wavelengths yellowish flower capita risen ultraviolet whitish red peaked soared purple aquamarine stripes declined acuff infrared roans flowers increased rafters yellowish bluish green rises equalled pigment greenish garlands Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Operations on Feature Probabilities",
"sec_num": "4.2"
},
{
"text": "those words using our representation, while w 1 and w 2 are the word2vec vectors of those words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operations on Feature Probabilities",
"sec_num": "4.2"
},
{
"text": "In section 3, we showed the formulation of fuzzy set operations, assuming a finite universe of elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Union, Intersection and Difference",
"sec_num": null
},
{
"text": "As we saw in section 4.1, considering each dimension as a feature allows us to reinterpret word vectors as tuples of feature probabilities. Therefore, we can use the fuzzy set theoretic operations on this reinterpretation of fuzzy sets. For convenience, these operations have been called feature union, intersection and difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Union, Intersection and Difference",
"sec_num": null
},
{
"text": "Intuitively, the feature intersection of words\u0175 1 and\u0175 2 should give us that word\u0175 1\u22292 which has the features common between the two words; an example of which is given in table 1. Similarly, the feature union\u0175 1\u222a2 \u0175 1 \u222a\u0175 2 which has the properties of both the words, normalized for those properties which are common between the two, and feature difference\u0175 1\\2 \u0175 1 \\\u0175 2 is that word which is similar to w 1 without the features of w 2 . Examples of feature intersection and feature difference are shown in table 2 and 3 respectively. While feature union does not seem to have a word2vec analogue, we consider that feature intersection is analogous to vector addition, and feature difference as analogous to vector difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Union, Intersection and Difference",
"sec_num": null
},
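As an illustration of how the neighbour lists in tables 1-3 can be produced in principle, the sketch below composes two words with a fuzzy set operation and ranks the vocabulary by the cross-entropy similarity mentioned in the captions (defined formally in section 4.4). The helper names, and the assumption that `probs`, `word2id` and `id2word` come from the conversion in section 4.1, are ours, not the paper's released code.

```python
import numpy as np

def f_union(A, B):
    """Fuzzy union from section 3: (A ∪ B)[i] = A[i] + B[i] - A[i] * B[i]."""
    return A + B - A * B

def cross_entropy_score(S, T, eps=1e-12):
    """Asymmetric closeness score from section 4.4; lower means closer."""
    s = np.clip(S, eps, 1.0 - eps)
    t = np.clip(T, eps, 1.0 - eps)
    return float(np.sum(-s * np.log(s) - (1.0 - s) * np.log(1.0 - s) + s * np.log(s / t)))

def nearest_words(query, probs, id2word, k=10):
    """Rank the whole vocabulary against a (possibly composed) query tuple."""
    scores = np.array([cross_entropy_score(query, probs[j]) for j in range(probs.shape[0])])
    return [id2word[j] for j in np.argsort(scores)[:k]]

# Hypothetical usage for the union column of table 1:
# rose_or_violet = f_union(probs[word2id["rose"]], probs[word2id["violet"]])
# print(nearest_words(rose_or_violet, probs, id2word))
```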
{
"text": "Feature Inclusion Feature inclusion is based on the subset relation of fuzzy sets. We aim to capture feature inclusion by determining if there exist two words w 1 and w 2 such that all the feature probabilities of\u0175 1 are less than that of\u0175 2 , then\u0175 2 \u2286\u0175 1 . We find that feature inclusion is closely linked to hyponymy, which we will show in 5. Table 2 : An example of feature intersection with the possible word2vec analogue (vector addition). The word computer is represented by C and power by P . Note that power is also a decent example of polysemy, and we see that in the context of computers, the connotations of hardware and the CPU are the most accessible. We list words closest to the given word in the table. Closeness measured by cosine similarity for word2vec and cross-entropy-similarity for our vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 346,
"end": 353,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Union, Intersection and Difference",
"sec_num": null
},
{
"text": "For a word represented using a tuple of feature probabilities, the notion of entropy is strongly tied to the notion of certainty (Xuecheng, 1992), i.e. with what certainty does this word possess or not possess this set of features? Formally, the fuzzy entropy of a set S is a measure of the uncertainty of elements belonging to the set. The possibility a member x i belonging to S is a random variable X S i , which is true with probability p S i , false with probability (1 \u2212 p S i ). Thus, X S i is a Bernoulli random variable. So, to measure the fuzzy entropy of a set, we add up the entropy values of each of the X S i (MacKay and Mac Kay, 2003) . Intuitively, words with the highest entropy are those which have features which are equally likely to belong to them and to their complement, i.e. \u2200i \u2208 \u2126, A",
"cite_spans": [
{
"start": 623,
"end": 649,
"text": "(MacKay and Mac Kay, 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Entropy",
"sec_num": "4.3"
},
{
"text": "[i] 1 \u2212 A[i]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Entropy",
"sec_num": "4.3"
},
{
"text": ". So words with high fuzzy entropy can occur only in two scenarios: (1) The words occur with very low frequency so their random initialization remained, or (2) The words occur around so many different word groups that their corresponding fuzzy sets have some probability of possessing most of the features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Entropy",
"sec_num": "4.3"
},
{
"text": "Therefore, our representation of words as tuples of features can be used to isolate function words better than the more commonly considered notion of simply using frequency, as it identifies the information theoretic distribution of features based on the context the function word occurs in. French is represented by F and British by B. We see here that set difference capture french words from the dataset, while there does not seem to be any such correlation in the vector difference. We list words closest to the given word in the table. Closeness measured by cosine similarity for word2vec and cross-entropysimilarity for our vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Entropy",
"sec_num": "4.3"
},
{
"text": "4 provides the top 15 function words by entropy, and the correspodingly ranked words by frequency. We see that frequency is clearly not a good enough measure to identify function words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Entropy",
"sec_num": "4.3"
},
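A sketch of the entropy-based selection described here, under our own naming assumptions: `probs` is the feature-probability matrix, `counts` maps words to corpus frequencies, and the frequency floor mirrors the >= 100 cut-off used in section 5.2.

```python
import numpy as np

def fuzzy_entropy(A, eps=1e-12):
    p = np.clip(A, eps, 1.0 - eps)
    return float(np.sum(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p)))

def top_words_by_entropy(probs, id2word, counts, n=15, min_freq=100):
    """Return the n highest-entropy words whose corpus frequency is at least min_freq."""
    candidates = [j for j in range(probs.shape[0]) if counts.get(id2word[j], 0) >= min_freq]
    ranked = sorted(candidates, key=lambda j: fuzzy_entropy(probs[j]), reverse=True)
    return [id2word[j] for j in ranked[:n]]
```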
{
"text": "One of the most important notions in presenting a distributional word representation is its ability to capture similarity (Van der Plas and Tiedemann, 2006). Since we use and modify vector based word representations, we aim to preserve the \"distribution\" of the vector embeddings, while providing a more robust interpretation of similarity measures. With respect to similarity, we make two strong claims:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "1. Representing words as a tuple of feature probabilities lends us an inherent notion of similarity. Feature difference provides this notion, as it estimates the difference between two words along each feature probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "2. Our representation allows for an easy adoption of known similarity measures such as K-L divergence and cross-entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "Note that feature difference (based on fuzzy set difference), K-L divergence and cross-entropy are all asymmetric measures of similarity. As Nematzadeh et al. (2017) points out, human similarity judgements are inherently asymmetric in nature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "We would like to point out that while most methods of introducing asymmetric similarity measures in word2vec account for both the focus and context vector Asr et al. (2018) and provide the asymmetry by querying on this combination of focus and context representations of each word. Our representation, on the other hand, uses only the focus representations (which are a part of the word representations used for downstream task as well as any other intrinsic evaluation), and still provides an innately asymmetric notion of similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "K-L Divergence From a fuzzy set perspective, we measure similarity as an overlap of features. For this purpose, we exploit the notion of fuzzy information theory by comparing how close the probability distributions of the similar words are using a standard measure, Kullback-Leibler (K-L) divergence. K-L divergence is an asymmetric measure of similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "The K-L divergence of a distribution P from another distribution Q is defined in terms of loss of compression. Given data d which follows distribution P , the extra bits need to store it under the false assumption that the data d follows distribution Q is the K-L divergence between the distributions P and Q. In the fuzzy case, we can compute the KL divergence as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "D(S || T ) \u2261 D X S i X T i = i p S i log p S i /p T i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
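A direct transcription of the formula above; the epsilon clipping is our addition to avoid division by zero, and the composed "north korea" tuple in the usage comment is hypothetical.

```python
import numpy as np

def kl_divergence(S, T, eps=1e-12):
    """D(S || T) = sum_i S[i] * log(S[i] / T[i]); lower means closer (section 4.4)."""
    s = np.clip(S, eps, 1.0)
    t = np.clip(T, eps, 1.0)
    return float(np.sum(s * np.log(s / t)))

# The asymmetry of table 5, assuming `probs` and `word2id` exist:
# north_korea = probs[word2id["north"]] * probs[word2id["korea"]]   # feature intersection
# kl_divergence(north_korea, probs[word2id["china"]])   # per table 5, reported closer
# kl_divergence(probs[word2id["china"]], north_korea)   # than this direction
```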
{
"text": "We see in table 5 some qualitative examples of how K-L divergence shows the relation between two words (or phrases when composed using feature intersection as in the case of north korea). We exemplify Nematzadeh et al. (2017)'s human annotator judgement of the distance between China and North Korea, where human annotators considered \"North Korea\" to be very similar to \"China,\" while the reverse relationship was rated as significantly less strong (\"China\" is not very similar to \"North Korea\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "4.4"
},
{
"text": "We also calculate the cross entropy between two words, as it can be used to determine the entropy associated with the similarity between two words. Ideally, by determining the \"spread\" of the similarity of features between two words, we can determine the features that allow two words to be similar, allowing a more interpretable notion of feature-wise relation. and the in one which to however two for eight this of of in the zero to is a for as and only a also nine it as but s Table 5 : Examples of KL-divergence as an asymmetric measure of similarity. Lower is closer. We see here that the evaluation of North Korea as a concept being closer to China than vice versa can be observed by the use of K-L Divergence on column-wise normalization.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 487,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross Entropy",
"sec_num": null
},
{
"text": "The cross-entropy of two distributions P and Q is a sum of the entropy of P and the K-L divergence between P and Q. In this sense, in captures both the uncertainty in P , as well as the distance from P to Q, to give us a general sense of the information theoretic difference between the concepts of P and Q. We use a generalized version of cross-entropy to fuzzy sets (Li, 2015) , which is:",
"cite_spans": [
{
"start": 368,
"end": 378,
"text": "(Li, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Entropy",
"sec_num": null
},
{
"text": "H(S, T ) \u2261 i H(X S i ) + D(X S i || X T i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Entropy",
"sec_num": null
},
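The generalized cross-entropy above combines the per-feature Bernoulli entropy with the K-L term; the sketch below follows the single-term K-L form displayed in section 4.4, and the epsilon clipping is again our own implementation detail.

```python
import numpy as np

def fuzzy_cross_entropy(S, T, eps=1e-12):
    """H(S, T) = sum_i [ H(X_i^S) + D(X_i^S || X_i^T) ]; lower means closer."""
    s = np.clip(S, eps, 1.0 - eps)
    t = np.clip(T, eps, 1.0 - eps)
    entropy = -s * np.log(s) - (1.0 - s) * np.log(1.0 - s)
    kl = s * np.log(s / t)
    return float(np.sum(entropy + kl))
```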
{
"text": "Feature representations which on comparison provide high cross entropy imply a more distributed feature space. Therefore, provided the right words to compute cross entropy, it could be possible to extract various features common (or associated) with a large group of words, lending some insight into how a single surface form (and its representation) can capture the distribution associated with different senses. Here, we use cross-entropy as a measure of polysemy, and isolate polysemous words based on context. We provide an example of capturing polysemy using composition by feature intersection in table 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Entropy",
"sec_num": null
},
{
"text": "We can see that the words which are most similar to noble are a combination of words from many senses, which provides some perspective into its distribution, . Indeed, it has an entropy value of 6.2765 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Entropy",
"sec_num": null
},
{
"text": "Finally, we construct the notion of analogy in our representation of a word as a tuple of features. Word analogy is usually represented as a problem where given a pairing (a : b), and a prior x, we are asked to compute an unknown word y ? such that a : b :: x : y ? . In the vector space model, analogy is computed based on vector distances. We find that this training mechanism does not have a consistent interpretation beyond evaluation. This is because normalization of vectors performed only during inference, not during training. Thus, computing analogy in terms of vector distances provides little insight into the distribution of vectors or to the notion of the length of the word vectors, which seems to be essential to analogy computation using vector operations In using a fuzzy set theoretic representation, vector projections are inherently normalized, making them feature dense. This allows us to compute analogies much better in lower dimension spaces. We consider analogy to be an operation involving union and set difference. Word analogy is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Analogy",
"sec_num": "4.5"
},
{
"text": "a : b :: x : y ? y ? = b \u2212 a + x =\u21d2 y ? = (b + x) \u2212 a y = (b x) \\ a (Set-theoretic interpretation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Analogy",
"sec_num": "4.5"
},
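A sketch of the set-theoretic analogy above: disjoint union of b and x, then set difference with a, followed by a nearest-neighbour lookup. The use of cross-entropy for the ranking and the helper names are our assumptions.

```python
import numpy as np

def analogy(a, b, x):
    """Compute y = setdiff(disjoint_union(b, x), a) as in section 4.5."""
    union = np.clip(b + x, 0.0, 1.0)      # disjoint union, clipped into [0, 1]
    return union - np.minimum(union, a)   # fuzzy set difference

def fuzzy_cross_entropy(S, T, eps=1e-12):
    s = np.clip(S, eps, 1.0 - eps)
    t = np.clip(T, eps, 1.0 - eps)
    return float(np.sum(-s * np.log(s) - (1.0 - s) * np.log(1.0 - s) + s * np.log(s / t)))

def solve_analogy(wa, wb, wx, probs, word2id, id2word, k=5):
    """Return the k words whose tuples are closest to the composed analogy tuple."""
    y = analogy(probs[word2id[wa]], probs[word2id[wb]], probs[word2id[wx]])
    scores = sorted((fuzzy_cross_entropy(y, probs[j]), j) for j in range(probs.shape[0]))
    exclude = {word2id[wa], word2id[wb], word2id[wx]}
    return [id2word[j] for _, j in scores if j not in exclude][:k]

# e.g. solve_analogy("man", "king", "woman", probs, word2id, id2word)
```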
{
"text": "Notice that this form of word analogy can be \"derived\" from the vector formula by re-arrangement. We use non-disjoint set union so that the common features are not eliminated, but the values Table 7 : Examples of analogy compared to the analogy in word2vec. We see here that the comparisons constructed by feature representations are similar to those given by the standard word vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Constructing Analogy",
"sec_num": "4.5"
},
{
"text": "are clipped at (0, 1] so that the fuzzy representation is consistent. Analogical reasoning is based on the common features between the word representations, and conflates multiple types of relations such as synonymy, hypernymy and causal relations (Chen et al., 2017) . Using fuzzy set theoretic representations, we can also provide a context for the analogy, effectively reconstructing analogous reasoning to account for the type of relation from a lexical semantic perspective. Some examples of word analogy based are presented in table 7.",
"cite_spans": [
{
"start": 248,
"end": 267,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Analogy",
"sec_num": "4.5"
},
{
"text": "In this section, we present our experiments and their results in various domains including similarity, analogy, function word detection, polysemy detection, lexical entailment and compositionality. All the experiments have been conducted on established datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Similarity and analogy are the most popular intrinsic evaluation mechanisms for word representations (Mikolov et al., 2013a) . Therefore, to evaluate our representations, the first tasks we show are similarity and analogy. For similarity computations, we use the SimLex corpus (Hill et al., 2015) for training and testing at different dimensions For word analogy, we use the MSR Word Relatedness Test (Mikolov et al., 2013c) . We compare it to the vector representation of words for different dimensions.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF18"
},
{
"start": 277,
"end": 296,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 401,
"end": 424,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity and Analogy",
"sec_num": "5.1"
},
{
"text": "Our scores our compared to the word2vec scores of similarity using the Spearman rank correlation coefficient (Spearman, 1987) , which is a ratio of the covariances and standard deviations of the inputs being compared.",
"cite_spans": [
{
"start": 109,
"end": 125,
"text": "(Spearman, 1987)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity",
"sec_num": "5.1.1"
},
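For concreteness, a minimal sketch of the evaluation step, assuming `model_scores` are similarity scores produced by either representation and `gold_scores` are the SimLex-999 annotator ratings for the same word pairs.

```python
from scipy.stats import spearmanr

def evaluate_similarity(model_scores, gold_scores):
    """Spearman rank correlation between model similarity scores and human ratings."""
    rho, _p_value = spearmanr(model_scores, gold_scores)
    return rho
```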
{
"text": "As shown in table 8, using our representation, similarity is slightly better represented according to the SimLex corpus. We show similarity on both the asymmetric measures of similarity for our repre- Table 8 : Similarity scores on the SimLex-999 dataset (Hill et al., 2015) , for various dimension sizes (Dims.). The scores are provided according to the Spearman Correlation to incorporate higher precision. Table 9 : Comparison of Analogies between word2vec and our representation for 50 and 100 dimensions (Dims.). For the first five, only overall accuracy is shown as overall accuracy is the same as semantic accuracy (as there is no syntactic accuracy measure). For all the others, we present, syntactic, semantic and overall accuracy as well. We see here that we outperform word2vec on every single metric.",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 8",
"ref_id": null
},
{
"start": 409,
"end": 416,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity",
"sec_num": "5.1.1"
},
{
"text": "sentation, K-L divergence as well as cross-entropy. We see that cross-entropy performs better than K-L Divergence. While the similarity scores are generally higher, we see a reduction in the degree of similarity beyond 100 dimension vectors (features).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity",
"sec_num": "5.1.1"
},
{
"text": "For analogy, we see that our model outperforms word2vec at both 50 and 100 dimensions. We see that at lower dimension sizes, our normalized feature representation captures significantly more syntactic and semantic information than its vector counterpart. We conjecture that this can primarily be attributed to the fact that constructing feature probabilities provides more information about the Table 10 : Function word detection using entropy (in our representation) and by frequency in word2vec. We see that we consistently detect more function words than word2vec, based on the 176 function word list released by Nation (2016). The metric is number of words, i.e. the number of words chosen by frequency for word2vec and entropy for our representation common (and distinct) \"concepts\" which are shared between two words. Since feature representations are inherently fuzzy sets, lower dimension sizes provide a more reliable probability distribution, which becomes more and more sparse as the dimensionality of the vectors increases (i.e. number of features rise). Therefore, we notice that the increase in feature probabilities is a lot more for 50 dimensions than it is for 100.",
"cite_spans": [],
"ref_spans": [
{
"start": 395,
"end": 403,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analogy",
"sec_num": "5.1.2"
},
{
"text": "As mentioned in section 4.3, we use entropy as a measure of detecting function words for the standard GoogleNews-300 negative sampling dataset 3 . In order to quantitatively evaluate the detection of function words, we choose the top n words in our representation ordered by entropy with a frequency \u2265 100, and compare it to the top n words ordered by frequency from word2vec; n being 15, 30 and 50. We compare the number of function words in both in table 10. The list of function words is derived from Nation (2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Function Word Detection",
"sec_num": "5.2"
},
{
"text": "Finally, we evaluate the compositionality of word embeddings. Mikolov et al. (2013b) claims that word embeddings in vector spaces possess additive compositionality, i.e. by vector addition, semantic phrases such as compounds can be well represented. We claim that our representation in fact captures the semantics of phrases by performing a literal combination of the features of the head and modifier word, therefore providing a more robust representation of phrases.",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality",
"sec_num": "5.3"
},
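A sketch of how compound phrases could be composed and scored in this framework: the compound is built as the feature intersection of head and modifier, and its similarity to the compound's own distributional tuple is measured with the asymmetric cross-entropy score before correlating with human compositionality judgements. All names here are hypothetical and the evaluation protocol is only loosely reconstructed from the description in this section.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def compose_compound(head, modifier):
    """Feature intersection: element-wise product of the head and modifier tuples."""
    return head * modifier

def similarity(S, T, eps=1e-12):
    """Negated fuzzy cross-entropy, so that higher means more similar."""
    s = np.clip(S, eps, 1.0 - eps)
    t = np.clip(T, eps, 1.0 - eps)
    return -float(np.sum(-s * np.log(s) - (1.0 - s) * np.log(1.0 - s) + s * np.log(s / t)))

def evaluate_compositionality(items, human_scores, probs, word2id):
    """items: (modifier, head, compound) word triples; returns Spearman and Pearson
    correlations between model similarity and human compositionality judgements."""
    model_scores = []
    for modifier, head, compound in items:
        composed = compose_compound(probs[word2id[head]], probs[word2id[modifier]])
        model_scores.append(similarity(composed, probs[word2id[compound]]))
    return spearmanr(model_scores, human_scores)[0], pearsonr(model_scores, human_scores)[0]
```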
{
"text": "We use the English nominal compound phrases from . An initial set of experiments on nominal compounds using word2vec have been done before , where it Table 11 : Results for compositionality of word embeddings for nominal compounds for various dimensions (Dims.). We see that almost across the board, we perform better, however, for the Pearson correlation metric, at 200 dimensions, we find that word2vec has a better representation of rank by frequency for nominal compounds.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compositionality",
"sec_num": "5.3"
},
{
"text": "was shown to be a fairly difficult task for modern non-contextual word embeddings. In order to analyse nominal compounds, we adjust our similarity metric to account for asymmetry in the similarity between the head-word and the modifier, and vice versa. We report performance on two metrics, the Spearman correlation (Spearman, 1987) and Pearson correlation (Pearson, 1920) . The results are shown in table 11. The difference in scores for the Pearson and Spearman rank correlation show that word2vec at higher dimensions better represents the rank of words (by frequency), but at lower dimensions, the feature probability representation has a better analysis of both rank by frequency, and its correlation with similarity of words with a nominal compound. Despite this, we show a higher Spearman correlation coefficient at 200 dimesions as well, as we capture non-linear relations.",
"cite_spans": [
{
"start": 316,
"end": 332,
"text": "(Spearman, 1987)",
"ref_id": "BIBREF27"
},
{
"start": 357,
"end": 372,
"text": "(Pearson, 1920)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositionality",
"sec_num": "5.3"
},
{
"text": "In this subsection, we provide some interpretation of the results above, and examine the effect of scaling dimensions to the feature representation. As seen here, the evaluation has been done on smaller dimension sizes of 50 and 100, and we see that our representation can be used for a slightly larger range of tasks from the perspective of intrinsic evaluations. However, the results of quantitative analogy for higher dimensions have been observed to be lower for fuzzy representations rather than the word2vec negative-sampling word vectors. We see that the representation we propose does not scale well as dimensions increase. This is because our representation relies on the distribution of probability mass per feature (dimension) across all the words. Therefore, increasing the dimension-ality of the word vectors used makes the representation that much more sparse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Analysis and Feature Representations",
"sec_num": "5.4"
},
{
"text": "In this paper, we presented a reinterpretation of distributional semantics. We performed a columnwise normalization on word vectors, such that each value in this normalized representation represented the probability of the word possessing a feature that corresponded to each dimension. This provides us a representation of each word as a tuple of feature probabilities. We find that this representation can be seen as a fuzzy set, with each probability being the function of the projection of the original word vector on a dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Considering word vectors as fuzzy sets allows us access to set operations such as union, intersection and difference. In our modification, these operations provide the product, disjoint sum and difference of the word representations, feature wise. Using qualitative examples, we show that our representation naturally captures an asymmetric notion of similarity using feature difference, from which known asymmetric measures can be easily constructed, such as Cross Entropy and K-L Divergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We qualitatively show how our model accounts for polysemy, while showing quantitative proofs of our representation's performance at lower dimensions in similarity, analogy, compositionality and function word detection. We hypothesize that lower dimensions are more suited for our representation as sparsity increases with higher dimensions, so the significance of feature probabilities reduces. This sparsity causes a diffusion of the probabilities across multiple features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Through this work, we aim to provide some insights into interpreting word representations by showing one possible perspective and explanation of the lengths and projections of word embeddings in the vector space. These feature representations can be adapted for basic neural models, allowing the use of feature based representations at lower dimensions for downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://code.google.com/archive/p/ word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their time and comments which have helped make this paper and its contribution better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Querying word embeddings for similarity and relatedness",
"authors": [
{
"first": "Fatemeh Torabi",
"middle": [],
"last": "Asr",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Zinkov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "675--684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemeh Torabi Asr, Robert Zinkov, and Michael Jones. 2018. Querying word embeddings for similarity and relatedness. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 675- 684.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Monte Carlo Semantics: Robust inference and logical pattern processing with Natural Language text",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Bergmair",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Bergmair. 2011. Monte Carlo Semantics: Ro- bust inference and logical pattern processing with Natural Language text. Ph.D. thesis, University of Cambridge.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "From word to sense embeddings: A survey on vector representations of meaning",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Taher"
],
"last": "Pilehvar",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "63",
"issue": "",
"pages": "743--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Camacho-Collados and Mohammad Taher Pile- hvar. 2018. From word to sense embeddings: A sur- vey on vector representations of meaning. Journal of Artificial Intelligence Research, 63:743-788.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating vector-space models of analogy",
"authors": [
{
"first": "Dawn",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"C"
],
"last": "Peterson",
"suffix": ""
},
{
"first": "Thomas L",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.04416"
]
},
"num": null,
"urls": [],
"raw_text": "Dawn Chen, Joshua C Peterson, and Thomas L Grif- fiths. 2017. Evaluating vector-space models of anal- ogy. arXiv preprint arXiv:1705.04416.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Predicting the compositionality of nominal compounds: Giving word embeddings a hard time",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Cordeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Idiart",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1986--1997",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1187"
]
},
"num": null,
"urls": [],
"raw_text": "Silvio Cordeiro, Carlos Ramisch, Marco Idiart, and Aline Villavicencio. 2016. Predicting the composi- tionality of nominal compounds: Giving word em- beddings a hard time. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1986- 1997, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Functional distributional semantics",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Emerson",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "40--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Emerson and Ann Copestake. 2016. Functional distributional semantics. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 40-52.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic composition via probabilistic model theory",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Emerson",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2017,
"venue": "IWCS 2017-12th International Conference on Computational Semantics-Long papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Emerson and Ann Copestake. 2017. Seman- tic composition via probabilistic model theory. In IWCS 2017-12th International Conference on Com- putational Semantics-Long papers.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis",
"authors": [
{
"first": "",
"middle": [],
"last": "John R Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving sparse word similarity models with asymmetric measures",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Mark Gawron",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "296--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Mark Gawron. 2014. Improving sparse word sim- ilarity models with asymmetric measures. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 296-301.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "word2vec explained: deriving mikolov et al.'s negativesampling word-embedding method",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1402.3722"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving mikolov et al.'s negative- sampling word-embedding method. arXiv preprint arXiv:1402.3722.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards a formal distributional semantics",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2013,
"venue": "Simulating logical calculi with tensors",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1304.5823"
]
},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette. 2013. Towards a formal distri- butional semantics: Simulating logical calculi with tensors. arXiv preprint arXiv:1304.5823.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A vector space for distributional semantics for entailment",
"authors": [
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Diana",
"middle": [
"Nicoleta"
],
"last": "Popa",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.03780"
]
},
"num": null,
"urls": [],
"raw_text": "James Henderson and Diana Nicoleta Popa. 2016. A vector space for distributional semantics for entail- ment. arXiv preprint arXiv:1607.03780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building a shared world: Mapping distributional to modeltheoretic semantic spaces",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Eva",
"middle": [
"Maria"
],
"last": "Vecchi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "22--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Eva Maria Vecchi. 2015. Build- ing a shared world: Mapping distributional to model- theoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 22-32.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Measures of distributional similarity",
"authors": [
{
"first": "Lillian",
"middle": [
"Lee"
],
"last": "",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the As- sociation for Computational Linguistics, pages 25- 32.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Do multi-sense embeddings improve natural language understanding?",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.01070"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense em- beddings improve natural language understanding? arXiv preprint arXiv:1506.01070.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fuzzy cross-entropy",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Uncertainty Analysis and Applications",
"volume": "3",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Li. 2015. Fuzzy cross-entropy. Journal of Un- certainty Analysis and Applications, 3(1):2.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Information theory, inference and learning algorithms",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "David",
"suffix": ""
},
{
"first": "David Jc Mac",
"middle": [],
"last": "Mackay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David JC MacKay and David JC Mac Kay. 2003. In- formation theory, inference and learning algorithms. Cambridge university press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference of the north american chapter of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 conference of the north american chapter of the as- sociation for computational linguistics: Human lan- guage technologies, pages 746-751.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Making and using word lists for language learning and testing",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Stephen Paul Nation. 2016. Making and using word lists for language learning and testing. John Benjamins Publishing Company.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluating vector-space models of word representation, or, the unreasonable effectiveness of counting words near other words",
"authors": [
{
"first": "Aida",
"middle": [],
"last": "Nematzadeh",
"suffix": ""
},
{
"first": "Stephan",
"middle": [
"C"
],
"last": "Meylan",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2017,
"venue": "CogSci",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aida Nematzadeh, Stephan C Meylan, and Thomas L Griffiths. 2017. Evaluating vector-space models of word representation, or, the unreasonable effective- ness of counting words near other words. In CogSci.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Notes on the history of correlation",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Pearson",
"suffix": ""
}
],
"year": 1920,
"venue": "Biometrika",
"volume": "13",
"issue": "1",
"pages": "25--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Pearson. 1920. Notes on the history of correlation. Biometrika, 13(1):25-45.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Finding synonyms using automatic word alignment and measures of distributional similarity",
"authors": [
{
"first": "Lonneke",
"middle": [],
"last": "Van Der Plas",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "866--873",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lonneke Van der Plas and J\u00f6rg Tiedemann. 2006. Find- ing synonyms using automatic word alignment and measures of distributional similarity. In Proceed- ings of the COLING/ACL on Main conference poster sessions, pages 866-873. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "How naked is the naked truth? a multilingual lexicon of nominal compound compositionality",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Cordeiro",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Zilio",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Idiart",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "156--161",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2026"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Ramisch, Silvio Cordeiro, Leonardo Zilio, Marco Idiart, and Aline Villavicencio. 2016. How naked is the naked truth? a multilingual lexicon of nominal compound compositionality. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 156-161, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The proof and measurement of association between two things",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Spearman",
"suffix": ""
}
],
"year": 1987,
"venue": "The American journal of psychology",
"volume": "100",
"issue": "3/4",
"pages": "441--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Spearman. 1987. The proof and measurement of association between two things. The American journal of psychology, 100(3/4):441-471.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Features of similarity. Psychological review",
"authors": [
{
"first": "Amos",
"middle": [],
"last": "Tversky",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "84",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amos Tversky. 1977. Features of similarity. Psycho- logical review, 84(4):327.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Word representations via gaussian embedding",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6623"
]
},
"num": null,
"urls": [],
"raw_text": "Luke Vilnis and Andrew McCallum. 2014. Word rep- resentations via gaussian embedding. arXiv preprint arXiv:1412.6623.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Entropy, distance measure and similarity measure of fuzzy sets and their relations. Fuzzy sets and systems",
"authors": [
{
"first": "",
"middle": [],
"last": "Liu Xuecheng",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "52",
"issue": "",
"pages": "305--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu Xuecheng. 1992. Entropy, distance measure and similarity measure of fuzzy sets and their relations. Fuzzy sets and systems, 52(3):305-318.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">FBF \\B</td></tr><tr><td>french</td><td>isles</td><td>communaut</td></tr><tr><td colspan=\"2\">english colonial</td><td>aise</td></tr><tr><td>france</td><td colspan=\"2\">subcontinent langue</td></tr><tr><td colspan=\"2\">german cinema</td><td>monet</td></tr><tr><td colspan=\"2\">spanish boer</td><td>dictionnaire</td></tr><tr><td>british</td><td>canadians</td><td>gascon</td></tr><tr><td>F</td><td>B</td><td>F \u2212 B</td></tr><tr><td>french</td><td>scottish</td><td>ranjit</td></tr><tr><td colspan=\"2\">english american</td><td>privatised</td></tr><tr><td>france</td><td>thatcherism</td><td>tardis</td></tr><tr><td colspan=\"2\">german netherlands</td><td>molloy</td></tr><tr><td colspan=\"2\">spanish hillier</td><td>isaacs</td></tr><tr><td>british</td><td>cukcs</td><td>raj</td></tr></table>",
"text": ""
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "An example of feature difference, along with a possible word2vec analogue (vector difference)."
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Example 1</td><td>D(ganges || delta) D(delta || ganges)</td><td>6.3105 6.3040</td></tr><tr><td>Example 2</td><td colspan=\"2\">D(north \u2229 korea || china) D(china || north \u2229 korea) 10.60665 1.02923</td></tr></table>",
"text": "On the left: Top 15 words with highest entropy with frequency \u2265 100 (note that all of them are function words). On the right: Top 15 words with the highest frequency. The non-function words have been emphasized for comparison."
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">\u2229MN \u2229\u011c</td></tr><tr><td>nobility</td><td>metal</td><td>bad</td><td>fusible</td><td>good</td></tr><tr><td>isotope</td><td>fusible</td><td>manners</td><td colspan=\"2\">unreactive dharma</td></tr><tr><td>fujwara</td><td colspan=\"4\">ductility happiness metalloids morals</td></tr><tr><td>feudal</td><td>with</td><td>evil</td><td>ductility</td><td>virtue</td></tr><tr><td>clan</td><td>alnico</td><td>excellent</td><td>heavy</td><td>righteous</td></tr><tr><td>N</td><td>M</td><td>G</td><td>N + M</td><td>N + G</td></tr><tr><td>noblest</td><td colspan=\"2\">trivalent bad</td><td>fusible</td><td>gracious</td></tr><tr><td colspan=\"3\">auctoritas carbides natured</td><td>metals</td><td>virtuous</td></tr><tr><td>abies</td><td>metallic</td><td colspan=\"2\">humoured sulfides</td><td>believeth</td></tr><tr><td>eightfold</td><td colspan=\"2\">corrodes selfless</td><td>finntroll</td><td>savages</td></tr><tr><td>vojt</td><td>alloying</td><td>gracious</td><td>rhodium</td><td>hedonist</td></tr></table>",
"text": "For reference, the word the has an entropy of 6.2934.NM\u011cN"
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Polysemy of the word noble, in the context of the words good and metal. noble is represented by N , metal by M and good by G. We also provide the word2vec analogues of the same."
}
}
}
}