{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:17.401010Z"
},
"title": "Syntagmatic Word Embeddings for Unsupervised Learning of Selectional Preferences",
"authors": [
{
"first": "Renjith",
"middle": [
"P"
],
"last": "Ravindran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Hyderabad",
"location": {}
},
"email": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Badola",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Hyderabad",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Kavi",
"middle": [],
"last": "Narayana Murthy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Hyderabad",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Selectional Preference (SP) captures the tendency of a word to semantically select other words to be in direct syntactic relation with it, and thus informs us about syntactic word configurations that are meaningful. Therefore SP is a valuable resource for Natural Language Processing (NLP) systems and for semanticists. Learning SP has generally been seen as a supervised task, because it requires a parsed corpus as a source of syntactically related word pairs. In this paper we show that simple distributional analysis can learn a good amount of SP without the need for an annotated corpus. We extend the general word embedding technique with directional word context windows giving word representations that better capture syntagmatic relations. We test on the SP-10K dataset and demonstrate that syntagmatic embeddings outperform the paradigmatic embeddings. We also evaluate supervised version of these embeddings and show that unsupervised syntagmatic embeddings can be as good as supervised embeddings. We also make available the source code of our implementation 1 .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Selectional Preference (SP) captures the tendency of a word to semantically select other words to be in direct syntactic relation with it, and thus informs us about syntactic word configurations that are meaningful. Therefore SP is a valuable resource for Natural Language Processing (NLP) systems and for semanticists. Learning SP has generally been seen as a supervised task, because it requires a parsed corpus as a source of syntactically related word pairs. In this paper we show that simple distributional analysis can learn a good amount of SP without the need for an annotated corpus. We extend the general word embedding technique with directional word context windows giving word representations that better capture syntagmatic relations. We test on the SP-10K dataset and demonstrate that syntagmatic embeddings outperform the paradigmatic embeddings. We also evaluate supervised version of these embeddings and show that unsupervised syntagmatic embeddings can be as good as supervised embeddings. We also make available the source code of our implementation 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Selectional Preference (SP) (Wilks, 1975) encodes the syntagmatic relatedness between two words. Relations between words are either syntagmatic or paradigmatic (de Saussure, 1916) . Two words are said to be paradigmatically related if one word can replace the other in a sentence. Words belonging to a narrow semantic class, such as 'cat', 'dog' can often be substituted with each other in a sentence. Syntagmatic relations are between syntactically related co-occurring words in a sentence. Such word relations encode both syntactic and semantic aspects of words. A noun may be modified by an adjective, but any particular instance of a noun tends to go more with some adjectives than others. For example black dog is more likely than green dog. SP deals with such semantic preferences between syntactically related word pairs. Common SP relations include 'adjective-noun', 'subjectverb', 'verb-object'. SP finds use in important NLP tasks like sense disambiguation (Resnik, 1997) , semantic role classification (Zapirain et al., 2013) , co-reference resolution (Hobbs, 1978; Zhang et al., 2019c) , etc.",
"cite_spans": [
{
"start": 28,
"end": 41,
"text": "(Wilks, 1975)",
"ref_id": "BIBREF26"
},
{
"start": 160,
"end": 179,
"text": "(de Saussure, 1916)",
"ref_id": "BIBREF23"
},
{
"start": 967,
"end": 981,
"text": "(Resnik, 1997)",
"ref_id": "BIBREF20"
},
{
"start": 1013,
"end": 1036,
"text": "(Zapirain et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 1063,
"end": 1076,
"text": "(Hobbs, 1978;",
"ref_id": "BIBREF6"
},
{
"start": 1077,
"end": 1097,
"text": "Zhang et al., 2019c)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A computational method to induce SP from instances of syntactically related word pairs in a parsed corpus was introduced by Resnik (1996) . In order to generalize to unseen data, this method made use of ontological classes obtained from WordNet (Miller, 1995) . Rooth et al. (1999) showed that the dependence on external knowledge resources could be removed by learning the classes from the corpus itself using the EM algorithm. Erk (2007) showed that generalization is also possible via co-occurrence similarity between seen and unseen words. SP models are usually evaluated using the Pseudo-word Disambiguation task (Van de Cruys, 2014) which requires the identification of the more probable dependent word, from a less probable (random) word, given the head word and a syntactic relation. The dataset is generally created from the unseen part of a parsed corpus used for learning the model. Therefore this task measures only how well the model fits the corpus, which may be biased, and not how well it learns SP as perceived by humans. Recently, Zhang et al. (2019b) introduced SP-10K, a dataset for SP evaluation across 5 syntactic relations with a total of 10,000 items each with a human-annotated plausibility score. SP-10K measures the correlation between a model's SP score for a given word pair and the average human score. Therefore it is a better test for SP learning.",
"cite_spans": [
{
"start": 124,
"end": 137,
"text": "Resnik (1996)",
"ref_id": "BIBREF19"
},
{
"start": 245,
"end": 259,
"text": "(Miller, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 262,
"end": 281,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF21"
},
{
"start": 429,
"end": 439,
"text": "Erk (2007)",
"ref_id": "BIBREF4"
},
{
"start": 1049,
"end": 1069,
"text": "Zhang et al. (2019b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current state-of-the-art on SP-10K is reported by Multiplex Word Embeddings (MWE) (Zhang et al., 2019a) . It is a negative sampling based word embedding model, trained on relationspecific word pairs from a parsed corpus. Compared to unsupervised embedding models such as Word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) , MWE provides a substantial boost in SP learning as it has access to syntactic relations. It also improves over D-embeddings (Levy and Goldberg, 2014a) which is a supervised embedding model. However, a dependency-parsed corpus is not readily available in many languages. Therefore the need for an effective unsupervised SP induction technique is palpable in the wider NLP community.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF29"
},
{
"start": 284,
"end": 306,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 317,
"end": 342,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 469,
"end": 495,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we show that unsupervised word embeddings can easily be extended to get better at learning SP. We do this by taking directional (left/right) word context windows unlike symmetric windows of Word2vec, GloVe, etc. Having directional context windows gives two embeddings per word, one of its left context and other of its right context. This allows us to approximate syntactic relations with directions; all relations that happen to the left of a word are captured by the left embedding and those that happen to the right of a word are captured by the right embedding. Then the cosine similarity between the right embedding of a word and left embedding of another word indicates how likely the two are to be syntagmatically related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions are: 1) We provide a simple and effective method to capture selectional preference, called syntagmatic embeddings 2) Demonstrate that syntagmatic embeddings are superior to paradigmatic embeddings 3) We also show that our unsupervised syntagmatic representations can be as good as their supervised counterparts, therefore showing that a good range of SP information can be learned even without a dependency-parsed corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Symmetric and non-directional context windows in embedding techniques, such as GloVe, relate words that have similar (paradigmatic) contexts. Context words are other words that are in the immediate vicinity of a target word. A symmetric window considers equal number of words on the left and right as context words. Though syntagmatically re-lated words may have similar contexts, a symmetric window tends to encode more of paradigmatic relations. But these paradigmatic embedding spaces do encode syntagmatic properties to a certain degree. For example, we may find that the cosine similarity between 'coffee' and 'cup' is generally greater than 'coffee' and 'car'. These embeddings are considered unsupervised as they are learned from a plain un-annotated corpus. Since their contexts are not dictated by syntactic relations they are generally inferior, at learning SP, compared to an embedding technique that has access to such information (Zhang et al., 2019a) . Also, there is no direct way to extract syntagmatically related words. The nearest neighbours of a given word will largely be all paradigmatically related. Though it may include, given a larger context window, associated words ('coffee', 'cup') which have a syntagmatic nature.",
"cite_spans": [
{
"start": 943,
"end": 964,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntagmatic Representation",
"sec_num": "2"
},
{
"text": "Exact learning of SP requires word co-occurrence in a sentence to be defined as a pair of syntactically related words, which is available only in a dependency-parsed corpus. We can obtain a less exact representation for SP by replacing syntactic relations with directions, because in word-ordered languages, word-order or direction plays a major role in assigning syntactic relations. For example in an English sentence, the adjectival modifier of a noun is always found to its left. The nominal subject of a verb is found to its left and direct object to its right. The technique explored here exploits this fact to learn a substantial amount of selectional preference without the need for a large dependencyparsed corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations as Directions",
"sec_num": "2.1"
},
{
"text": "Word embeddings are low-rank representations of row/column vectors in a word co-occurrence matrix (Levy and Goldberg, 2014b) . Here, we consider unweighted factorisation of a word co-occurrence matrix using Truncated Singular Value Decomposition (SVD) (Kalman, 1996) . Let M be the cooccurrence matrix of size v \u00d7 v, where v is the size of the vocabulary. Instead of a symmetric context window, we use non-symmetric and directional windows, directions being left and right. Let M i,j be the number of times word i co-occurred to the left of word j within a distance of k throughout the corpus, where k is the size of the co-occurrence window. Consequently, M j,i becomes the number of times word i co-occurred to the right of word j. Thus the row i of matrix M gives the representation of word i using its left context words. And column j gives the representations of word j using its right context words. These two representations are different because our co-occurrence matrix is not symmetric. However, raw co-occurrence representation is very high-dimensional, highly sparse and noisy. A major component of word embedding techniques is dimensionality reduction, by approximating the original co-occurrence matrix with its low-rank representationM . Dimensionality reduction is found to reduce noise in the data matrix by eliminating the low principle components of the data, thus increasing generalisation. We use Truncated SVD 2 to obtain rank d approximation. Equation 1 gives the factorisation of the matrix M .",
"cite_spans": [
{
"start": 98,
"end": 124,
"text": "(Levy and Goldberg, 2014b)",
"ref_id": "BIBREF11"
},
{
"start": 252,
"end": 266,
"text": "(Kalman, 1996)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unweighted Factorisation Model",
"sec_num": "2.2"
},
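As a concrete illustration of the construction just described, here is a minimal sketch (not the released spvec code) of directional co-occurrence counting followed by truncated SVD. It assumes a list of tokenized, sentence-segmented sentences and a fixed vocabulary, and follows the row/column reading used in the rest of this section, where row i holds word i's left-context counts.

```python
import numpy as np
from scipy.sparse import lil_matrix
from sklearn.utils.extmath import randomized_svd

def directional_cooccurrence(sentences, vocab, k=3):
    """M[i, j] counts how often word j appears within the k tokens to the LEFT of
    word i, so row i is word i's left-context profile and column j is word j's
    right-context profile."""
    idx = {w: n for n, w in enumerate(vocab)}
    M = lil_matrix((len(vocab), len(vocab)), dtype=np.float64)
    for sent in sentences:                      # counting never crosses a sentence
        ids = [idx[w] for w in sent if w in idx]
        for pos, i in enumerate(ids):
            for j in ids[max(0, pos - k):pos]:  # the k words to the left of position pos
                M[i, j] += 1.0
    return M.tocsr()

def factorise(M, d=300):
    """Rank-d approximation M ~ U S V^T via truncated (randomized) SVD."""
    U, S, Vt = randomized_svd(M, n_components=d, random_state=0)
    return U, S, Vt
```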
{
"text": "M \u223cM =\u00db\u015cV (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unweighted Factorisation Model",
"sec_num": "2.2"
},
{
"text": "Where,\u00db v\u00d7d ,\u015c d\u00d7d ,V v\u00d7d are the factor matrices (singular vectors and singular values) obtained in SVD as, M = U SV , but truncated to keep only the top d principle components.\u00db andV gives the left context and right context representations of words respectively, in terms of the leading d singular vectors. The singular values\u015c gives the relative weightage of corresponding singular vectors, which may be used to scale the singular vectors appropriately. Our word representations are obtained by scaling the singular vectors by an exponential factor of their singular values. Thus, the final left embedding is given as L =\u00db\u015c p and the right embedding is R =\u015c pV . Caron (2001) showed that the exponential weighting factor p allows for a softer rank selection such that p > 0 gives more weightage to the leading components and p < 0 gives weightage 2 randomized svd from scikit-learn to the lower components, allowing the fine tuning of embeddings for different tasks. The number of components (dimension), exponential weighting factor, and co-occurrence window size are three important parameters that influence the performance of these embeddings. Our experiments include yet another parameter, the term-weight. So far we have assumed that M contains raw co-occurrence values, or the frequency count of two words to co-occur in the corpus. Various term-weighting schemes can be applied to transform the raw frequencies. We experiment with log, PMI (Point-wise Mutual Information) and PPMI (Positive Point-wise Mutual Information) term-weights along with the raw frequency counts.",
"cite_spans": [
{
"start": 666,
"end": 678,
"text": "Caron (2001)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unweighted Factorisation Model",
"sec_num": "2.2"
},
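A small sketch of the singular-value scaling and the log term-weight described above, assuming the U, S, Vt factors and the sparse count matrix from the previous sketch; this is illustrative, not the released implementation.

```python
import numpy as np

def scale_embeddings(U, S, Vt, p=0.5):
    """Caron-style scaling: L = U S^p (rows are left embeddings) and
    R = S^p V^T (columns are right embeddings)."""
    Sp = np.diag(S ** p)
    L = U @ Sp       # shape: v x d
    R = Sp @ Vt      # shape: d x v
    return L, R

def log_weight(M):
    """log term-weight: log2 of the raw counts; zero counts are simply absent
    from the sparse matrix, so only stored entries are transformed."""
    W = M.copy().astype(np.float64)
    W.data = np.log2(W.data)
    return W
```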
{
"text": "A factorisation model like the one presented in the previous section gives equal weightage to all errors in the low-rank approximation process. It has been shown that weighting errors from each co-occurrence term, by a function of their co-occurrence frequency yields better word embeddings (Levy and Goldberg, 2014b) . Neural embedding techniques such as Word2vec do such weighting implicitly (Levy and Goldberg, 2014b) , whereas techniques that makes use of cooccurrence matrix, such as GloVe, do this explicitly. For evaluating the performance of weighted factorisation on selectional preference, we minimally modify the GloVe model to get syntagmatic embeddings.",
"cite_spans": [
{
"start": 291,
"end": 317,
"text": "(Levy and Goldberg, 2014b)",
"ref_id": "BIBREF11"
},
{
"start": 394,
"end": 420,
"text": "(Levy and Goldberg, 2014b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Factorisation Model",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = V,V i,j=1,1 f (M i,j )(u w i v w j + b i + b j \u2212 log M i,j ) 2",
"eq_num": "(2)"
}
],
"section": "Weighted Factorisation Model",
"sec_num": "2.3"
},
{
"text": "Equation 2 gives the loss function L for approximating the log co-occurrence with the dot product of the left embedding (u w ) and the right embedding (v w ). M here is the co-occurrence matrix and b i , b j are bias terms. With symmetric context, the final embeddings in the GloVe model are either just the left embeddings or the sum of left and right embeddings. But with asymmetric context, left and right embeddings are used distinctly. The weighting function (f ) is given by equation 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Factorisation Model",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (x) = (x/x max ) 3 4 , if x < x max 1, otherwise",
"eq_num": "(3)"
}
],
"section": "Weighted Factorisation Model",
"sec_num": "2.3"
},
{
"text": "x max is generally taken as 100. GloVe's weighting function mainly reduces the influence of rarely cooccurring words which tend to be noisy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Factorisation Model",
"sec_num": "2.3"
},
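To make equations 2 and 3 concrete, here is a small numpy sketch of the weighting function and a single loss term; the actual s-glove model is a minimal modification of the GloVe reference implementation, so this fragment is only illustrative.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weighting function f of equation 3: (x / x_max)^(3/4) below x_max, else 1."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def weighted_loss_term(M_ij, u_i, v_j, b_i, b_j):
    """One term of the loss in equation 2 for a nonzero count M_ij:
    f(M_ij) * (u_i . v_j + b_i + b_j - log M_ij)^2."""
    return glove_weight(M_ij) * (u_i @ v_j + b_i + b_j - np.log(M_ij)) ** 2
```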
{
"text": "Let \u2190 l i be the left embedding of word i , i.e. i th row of L, and \u2192 r j be the right embedding of word j, i.e the j th column of R. Since Table 1 gives few examples of left and right associations from syntagmatic embeddings. These examples have been filtered to remove words that tend to appear as both left and right associates. Let l and r be the set of left associates and right associates of a given word in the embedding space, then the examples given here are l \u2212 r (left) and r \u2212 l (right). We see that the left associates of a noun (car) tends to have adjectives (vintage) and verbs (buy) that take the noun as its direct object. Right associates of the noun are found to be verbs (collided) that take the noun as its subject. With a verb (eat) we see that its left associates are other verbs (want) to which the given verb is an open clausal component. The right associates are its direct objects (salad). With an adjective (blue) we see that its left associates are other adjectives (vivid) that act as intensifiers and verbs (wore) whose direct objects are modified by the given adjective. The right associates are nouns (scarf) that are modified by the adjective.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Syntagmatic Association",
"sec_num": "2.4"
},
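The left/right associates discussed above can be read off the embeddings directly; a hypothetical sketch, assuming the L, R matrices and the vocabulary/index mapping from the earlier sketches.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def associates(word, L, R, vocab, idx, side="right", topn=10):
    """Nearest syntagmatic neighbours of `word`.
    side='right': words likely to FOLLOW it  -> compare r_word with every l_w.
    side='left' : words likely to PRECEDE it -> compare every r_w with l_word."""
    if side == "right":
        scores = {w: cosine(R[:, idx[word]], L[idx[w]]) for w in vocab}
    else:
        scores = {w: cosine(R[:, idx[w]], L[idx[word]]) for w in vocab}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]
```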
{
"text": "Examples of word association in the previous section gives a qualitative feel about the degree to which syntagmatic embeddings can capture selectional preference. In the next section we follow this up with detailed analysis using quantitative studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SP Evaluation",
"sec_num": "3"
},
{
"text": "We use the SP-10K (Zhang et al., 2019b) dataset to quantify the correlation of between the SP information learned by our syntagmatic embeddings and that of human judgements. Other datasets with human scores for SP are McRae et al. (1998) ; Keller and Lapata (2003) ; Pad\u00f3 et al. (2006) . But compared to SP-10K these are much smaller in size. SP-10K has 3 direct relations and 2 indirect relations. For our evaluation we only use the direct relations -amod, nsubj and dobj. In SP-10K there are 2000 evaluation instances under each relation class. Each instance is a triplet (word1, word2, human-score), where word1 is the head and word2 is a dependent, and human-score gives the plausibility of word2 being dependent on word1, via the given relation, as judged by humans on a 0-10 scale. For amod relation, a noun is the head and an adjective is the dependent. For nsubj and dobj a verb is the head and a noun is the dependent. Table 2 gives some examples from the dataset. The model's capacity for SP is judged by the correlation (Spearman's) between the association score given by the model and the human-score. The modelscore for a given head-dependent pair is the cosine similarity between the head and the dependent in the embedding space.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF30"
},
{
"start": 218,
"end": 237,
"text": "McRae et al. (1998)",
"ref_id": "BIBREF12"
},
{
"start": 240,
"end": 264,
"text": "Keller and Lapata (2003)",
"ref_id": "BIBREF8"
},
{
"start": 267,
"end": 285,
"text": "Pad\u00f3 et al. (2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 928,
"end": 935,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
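A minimal sketch of the evaluation protocol just described, assuming the SP-10K items for one relation are available as (head, dependent, human_score) triples and that model_score is a callable returning the cosine-based association for a pair (its relation-specific form is sketched after the next paragraph).

```python
from scipy.stats import spearmanr

def evaluate_relation(triples, model_score):
    """Spearman correlation between the model's association scores and the
    0-10 human plausibility scores for one SP-10K relation."""
    model = [model_score(head, dep) for head, dep, _ in triples]
    human = [score for _, _, score in triples]
    return spearmanr(model, human).correlation
```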
{
"text": "Since the syntagmatic embeddings relegate relations to left and right directions, the cosine similarity for each of the relations are computed as: amod: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "\u2192 r d \u2022 \u2190 l h , nsubj: \u2192 r d \u2022 \u2190 l h , dobj: \u2192 r h \u2022 \u2190 l d ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
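The directional scores above translate into code as follows; a hypothetical sketch, assuming L (rows are left embeddings), R (columns are right embeddings) and a word-to-index mapping idx from the earlier sketches.

```python
import numpy as np

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def sp_score(head, dep, relation, L, R, idx):
    """amod and nsubj place the dependent to the LEFT of the head, so they use
    r_dep . l_head; dobj places the dependent to the RIGHT, so it uses r_head . l_dep."""
    h, d = idx[head], idx[dep]
    if relation in ("amod", "nsubj"):
        return cos(R[:, d], L[h])
    if relation == "dobj":
        return cos(R[:, h], L[d])
    raise ValueError(f"unsupported relation: {relation}")
```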
{
"text": "We compare our syntagmatic model with 3 paradigmatic models: Word2vec (Mikolov et al., 2013) , GloVe (Pennington et al., 2014) and DSG (Song et al., 2018) . Both Word2vec (w2v) and GloVe (glove) are typical paradigmatic embeddings. DSG (Directional Skip-Gram) is a variant of Word2vec that claims to encode directional information by predicting the co-occurring words and also their directions. However, unlike syntagmatic embeddings DSG gives only one embedding per word. The best reported supervised model on SP-10K is Multiplex Word Embeddings (MWE). However, we could not use 3 the available implementation 4 for our experiments. Older supervised models for SP, that are not based on embeddings, have been previously evaluated on SP-10K (Zhang et al., 2019a), therefore we do not include those here.",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 101,
"end": 126,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 135,
"end": 154,
"text": "(Song et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "3.2"
},
{
"text": "We use the British National Corpus (BNC-Consortium, 2007) as the source for word cooccurrences for the embeddings. Since BNC is sentence segmented, our co-occurrence counting never jumps across a sentence. The word casing is normalized to small, punctuations are removed, and the vocabulary is limited to words occurring at least 100 times in the corpus.",
"cite_spans": [
{
"start": 35,
"end": 57,
"text": "(BNC-Consortium, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.3"
},
{
"text": "In the following experiments, we compare our syntagmatic embeddings with its paradigmatic counterpart, identify its best parameters, distinguish weighted from unweighted factorisation, evaluate against baseline embeddings and test how our unsupervised SP learning method compares with a supervised model. The parameters involved in the factorisation of the word co-occurrence matrix are: 1) size of the co-occurrence window (ws), 2) termweight or the co-occurrence weighting function (tw), 3) dimensionality of the embedding space or the number of principle components (dim), and 4) the exponential weight on singular values (p).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We experiment with the following parameter values: ws=[1, 2, 3, 4], dim=[20, 50, 100, 300], p=[-0.5, 0, 0.5, 1], tw= [raw, log, pmi, ppmi] . In termweights raw denotes the co-occurrence frequency of the word as it is , log is the log 2 of the raw cooccurrence frequency, pmi is the point-wise mutual information given by equation 4 where subscript '*' stands for a summation across a particular axis, and ppmi is the positive-only variant of pmi as given by equation 5.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "[raw, log, pmi, ppmi]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P M I i,j = log M i,j M * , * M i, * M * ,j",
"eq_num": "(4)"
}
],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P P M I i,j = max(0, P M I i,j )",
"eq_num": "(5)"
}
],
"section": "Experiments",
"sec_num": "4"
},
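For completeness, a small sketch of the PMI/PPMI term-weights of equations 4 and 5, assuming a dense count matrix (e.g. M.toarray() for a modest vocabulary); this is illustrative rather than the released implementation.

```python
import numpy as np

def pmi_weight(M, positive=True):
    """PMI of equation 4: PMI_ij = log( M_ij * M_** / (M_i* * M_*j) );
    equation 5 (PPMI) clips negative values to zero."""
    M = np.asarray(M, dtype=np.float64)
    total = M.sum()                     # M_*,*
    row = M.sum(axis=1, keepdims=True)  # M_i,*
    col = M.sum(axis=0, keepdims=True)  # M_*,j
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((M * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0        # entries with zero counts get weight 0
    return np.maximum(pmi, 0.0) if positive else pmi
```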
{
"text": "In our first experiment we compare syntagmatic representation to paradigmatic representation. Here we consider only the unweighted factorisation model. The paradigmatic model is similar to the syntagmatic model described in section 2.2, but has a context window that is symmetric and nondirectional. To get a more realistic picture of these methods, we compare a cohort of syntagmatic and paradigmatic models that have different parameter values. Each of the 4 parameters have 4 chosen parameter values. Since each parameter value combination gives us a different model, we get a total of 256 syntagmatic and 256 paradigmatic models. For each model (parameter-value combination) we compute the average correlation over the 3 SP relations. We see that in 69% of the total parameter instances the syntagmatic model is better than paradigmatic model. In those instances, on average the syntagmatic model improves the correlation by 0.14 points, which is an improvement of 54%. The maximum correlation obtained by a syntagmatic model is 0.71 and by the paradigmatic model is 0.58. Figure 1 shows two line plots for the average correlation values of syntagmatic and paradigmatic embeddings. Each particular parameter-value combination is a value on the x-axis, for which the there are two correlation values on the y-axis; one of the syntagmatic model and the other of the paradigmatic model. Apart from showing that syntagmatic models are generally better than paradigmatic models, it shows that certain parameter combinations give syntagmatic models a much greater advantage. On the downside we see that for a good number of poorly performing paradigmatic models their, syntagmatic counterpart performed even worse. There are also certain pathological parameter combinations that substantially pull down syntagmatic representations compared to corresponding paradigmatic representation. But overall, this experiment shows that syntagmatic embeddings are substantially better at capturing SP.",
"cite_spans": [],
"ref_spans": [
{
"start": 1077,
"end": 1085,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Syntagmatic Vs Paradigmatic",
"sec_num": "4.1"
},
{
"text": "In our second experiment we try to understand the relative importance of each parameter-value. For this we look at all 256 syntagmatic models and compute the mean and standard deviation of the correlation score among those models that have a particular parameter-value. For example we take the parameter-value tw=log and look at all syntagmatic models with that particular parameter-value, and compute the mean and standard deviation of their correlation score. We do the same with all 16 parameter-values. Figure 2 gives the results of this experiment. We see that term-weight is the most important parameter, and tw=log the most significant parametervalue. No matter what the other parameters values are, using log as the term-weight gives on average a correlation score of 0.55 \u00b1 0.07. Further, we see that the dimensionality of the embedding space is the next most significant parameter. Here we see that higher values are better, but this is only because we didn't consider even higher 5 values in this experiment (>300). It is well understood that there is an optimal dimension which is task and corpus dependent, below which a model does not have enough capacity, and above which the model tends to pick up noise (Yin and Shen, 2018) . A more interesting aspect is the significance of the exponential weighting factor p. The SVD factorizes the co-occurrence matrix as M = U SV , which can be factored into left and right components as",
"cite_spans": [
{
"start": 1220,
"end": 1240,
"text": "(Yin and Shen, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 507,
"end": 515,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Parameter Impact",
"sec_num": "4.2"
},
{
"text": "M = [U S 1 2 ][S 1 2 V ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Impact",
"sec_num": "4.2"
},
{
"text": "We see that p=0.5 is indeed the right 5 value for the exponential weight factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Impact",
"sec_num": "4.2"
},
{
"text": "To understand the influence of weighted factorisation on syntagmatic embeddings, we compare the syntagmatic GloVe (s-glove) model, introduced in section 2.3 to our SVD based unweighted fac-torisation model. We choose our best performing SVD based syntagmatic model (tw=log, dim=300, p=0.5) naming it spvec. We also test the SkipGram Word2vec (w2v) and GloVe (glove), for providing a comparison with popular paradigmatic models, and DSG to compare against a model with directional information. Embedding sizes in all models are 300, and window-sizes 1 to 7 are evaluated. Other parameters of dsg, s-glove, glove, w2v are kept to the default values in their respective implementations. Figure 3 shows the results of the experiment. We find that our SVD based unweighted syntagmatic model outperforms all other models, including the weighted syntagmatic model based on GloVe.",
"cite_spans": [],
"ref_spans": [
{
"start": 684,
"end": 692,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Influence of Weighted Factorisation",
"sec_num": "4.3"
},
{
"text": "The s-glove model performed slightly worse than the paradigmatic glove (glove) model under low window-sizes. We tried increasing the number of iterations in the training process, from the default 5 to 10. The resulting model (s-glovei10) performed much better than than paradigmatic GloVe model. It is interesting to note that all weighted models behave similarly to increasing window-sizes. They perform better as window-sizes increase. Whereas, our SVD based unweighted model (spvec) gives a better performance at window-size 2 and 3 and gradually decreases in performance as window-size is further increased. The directional variant of Word2vec (dsg) performs better than Word2vec, but performs poorly compared to spvec. Comparing s-glovei10 and spvec, we see that even at much higher window-size of 15 (not shown in figure 3) , s-glovei10 barely reaches an average correlation of 0.69. spvec on the other hand gets an average correlation 0.71 at a much smaller window-sizes (2 and 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 820,
"end": 829,
"text": "figure 3)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Influence of Weighted Factorisation",
"sec_num": "4.3"
},
{
"text": "Our syntagmatic word embedding model aims to provide an effective method to approach selectional preference in the absence of a parsed corpus. In this experiment we assess how deficient our unsupervised model is when compared to supervised models. Since we were not able to use the available implementation of MWE, we simply compare our unsupervised syntagmatic model (spvec) with supervised versions of itself. The supervised version of syntagmatic embeddings is obtained by defining word co-occurrence as a pair of words related by a dependency relation. For this we parse our corpus (BNC) using the Stanford dependency parser ( et al., 2020). In order to remain compatible with a syntagmatic model, we maintain word ordering of the co-occurrences. For example, the sentence 'big cat ate rat' gives three co-occurrences where the head and the dependent are ordered as they are found in the sentence: 'big cat', 'cat ate' and 'ate rat'. We test two supervised models 1) spvec-s: which uses all dependency related word pairs 2) spvec-sr: which uses only related word pairs in a particular dependency relation. spvec-sr thus has 3 distinct embedding pairs (left/right) per word, an embedding pair for each of the tested dependency relation: amod, nsubj, dobj. For comparison we also show the results of unsupervised paradigmatic models. Table 3 gives the results of this experiment. Surprisingly we see that our unsupervised model (spvec) is as good as its supervised counterparts (spvec-s and spvec-sr). The model trained on all dependency related word pairs scores lower than the fully unsupervised model. The model with relation specific embeddings improves on the fully unsupervised model only by a meager 0.4%. We clearly see that unsupervised syntagmatic embeddings are not deficient but may be as good as supervised models.",
"cite_spans": [
{
"start": 629,
"end": 630,
"text": "(",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1336,
"end": 1343,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparison to Supervised Models",
"sec_num": "4.4"
},
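A minimal sketch of how such ordered dependency co-occurrences could be extracted with Stanza (the toolkit cited above), assuming the English models have been downloaded via stanza.download('en'); the function name and argument defaults are illustrative, not from the released code. Note that current Universal Dependencies releases label the direct object relation 'obj' rather than 'dobj'.

```python
import stanza

# Assumes stanza.download("en") has been run once to fetch the English models.
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def ordered_dependency_pairs(text, relations=("amod", "nsubj", "obj")):
    """Yield (left_word, right_word, deprel) for dependency-related word pairs,
    keeping the surface order of the sentence, e.g. 'big cat ate rat' ->
    ('big', 'cat', 'amod'), ('cat', 'ate', 'nsubj'), ('ate', 'rat', 'obj')."""
    doc = nlp(text)
    for sent in doc.sentences:
        for word in sent.words:
            if word.head == 0 or word.deprel not in relations:
                continue
            head = sent.words[word.head - 1]           # 1-based head index
            left, right = sorted((head, word), key=lambda w: int(w.id))
            yield left.text.lower(), right.text.lower(), word.deprel
```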
{
"text": "There have been previous studies that explored Syntagmatic representations. Rapp (2002) ; Sahlgren (2006) viewed syntagmatic representations as firstorder word co-occurrence statistics, and paradigmatic representations as second-order statistics. First-order models represent words using text units in which they appear. Text units are generally documents or large regions of text, like paragraphs. Thus, first order statistics come from a worddocument co-occurrence matrix, whereas paradig- matic representations come from word-word cooccurrence matrix and hence called second order. While their evaluation of paradigmatic representation as second-order statistics was appropriate, their claim of syntagmatic representation as first-order statistics is not well justified. This is because the evaluation datasets they used for first-order models were a mix of (mostly) paradigmatic and syntagmatic relations, and not purely syntagmatic. A large-scale study by Lapesa et al. (2014) showed that fine-tuned second-order statistics can capture both syntagmatic and paradigmatic relations. Different parametrisations, mainly window size and dimensionality reduction, were shown to adapt the second-order statistics to either relations accordingly.",
"cite_spans": [
{
"start": 76,
"end": 87,
"text": "Rapp (2002)",
"ref_id": "BIBREF18"
},
{
"start": 90,
"end": 105,
"text": "Sahlgren (2006)",
"ref_id": "BIBREF22"
},
{
"start": 961,
"end": 981,
"text": "Lapesa et al. (2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The notion of syntagmatic representation explored in our work is adapted from Sch\u00fctze and Pedersen (1993) , in which the syntagmatic representation is introduced qualitatively without resorting to any quantitative studies. Our study on the other hand applies syntagmatic representation to the task of selectional preference, exploring various model parametrisations.",
"cite_spans": [
{
"start": 78,
"end": 105,
"text": "Sch\u00fctze and Pedersen (1993)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our experiments have shown that a weakly structured model can be as good as a strongly structured model. The spvec model, though unsupervised, incorporates a simple linguistically motivated bias/structure -directionality or word order. Such a weakly biased model, when coupled with low-rank embedding process, seems to pickup appropriate linguistic structure by effectively getting rid of noise. But why did the supervised mod-els not have a bigger advantage when compared to the unsupervised model? We can hypothesize that words that are not directly related by a dependency relation but are in the vicinity of a target word make substantial contribution to the semantics of the word which may not be captured by a dependency-parsed model. It can also be because the low-rank embedding process is as good at removing noise as a dependency parse. A closer look at the results reveal that amod and dobj relations do benefit from supervision, although it is minor. The effect of window-size on each of the dependency relation, may help us to better understand this ( figure 4) . In the unsupervised model, amod relation is maximized with a window-size of 1, but the results reported in table 3 are of window-size 3. Certainly, the excess window-size will result in noise which may be mitigated by a dependency parse, as seen in the results of supervised models. Similarly, dobj relation which is maximized in the unsupervised model at window-size of 4 also benefits from the dependency parse. However, the case of nsubj relation does not fit this reasoning. nsubj is maximized in the unsupervised model at a window-size of 2, but even at window-size 3 it improves over the supervised model. Here we may have to consider the possibility that, words that are not directly related may contribute to the semantics, which is lost in a dependency-parsed model. We would also like to point out that parsing a large corpus can be resource intensive. Parsing the BNC consumed about 24 GPU 6 hours. However, our experiments show that the gains derived do not substantiate the compute incurred. The unsupervised spvec model performs the factorisation in less than 5 minutes on a 20-core CPU.",
"cite_spans": [],
"ref_spans": [
{
"start": 1065,
"end": 1074,
"text": "figure 4)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Weighted factorisation of word co-occurrences is generally found to produce high quality word embeddings. Previously such embeddings showed improvements in tasks such as word similarity and solving word analogies. But we have shown that, when it comes to selectional preference and syntagmatic embeddings, weighted factorisation may be detrimental.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We also observe that appropriate co-occurrence term-weights are crucial for the performance. PPMI has been shown to work well for tasks that test paradigmatic nature such as word similarity (Bullinaria and Levy, 2007) . Pennington et al. (2014) remarked that log is better for solving word analogies than PPMI. Our experiments show that log is also valuable for learning selectional preference.",
"cite_spans": [
{
"start": 190,
"end": 217,
"text": "(Bullinaria and Levy, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 220,
"end": 244,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Here we have tested our syntagmatic embeddings only on English, but it should be directly applicable to other word-ordered languages also.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this paper, we have introduced syntagmatic word embeddings, a simple and effective method, for learning selectional preference (SP). Our model is simple because it captures SP by direct factorisation of a word co-occurrence matrix. We have showed that by incorporating a weak linguistic bias of directionality as a proxy for syntactic relations, our model can be made as effective as a model with access to syntactic relations. This is important because SP has always been seen as a task that requires a dependency-parsed corpus, our work shows that it need not be the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We hope that syntagmatic embeddings will be a valuable source of selectional preference information for resource-poor as well as resource-rich languages. We also hope that the structural bias of directionality will be further explored in simple models for other NLP tasks, instead of relying on models that are complex and opaque to interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/renjithravindran/ spvec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It runs only on a given prepackaged corpus; we found it difficult to replicate their packaging for our corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/HKUST-KnowComp/MWE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See figure 5 in appendix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Nvidia RTX 2080 GPU",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Renjith P Ravindran is funded by Department of Science and Technology (DST), Government of India, under the Inspire Fellowship Programme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": "A Detailed Parameter Study ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The british national corpus",
"authors": [
{
"first": "",
"middle": [],
"last": "Bnc-Consortium",
"suffix": ""
}
],
"year": 2007,
"venue": "Bodleian Libraries, University of Oxford",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BNC-Consortium. 2007. The british national corpus, version 3 (bnc xml edition). Bodleian Libraries, Uni- versity of Oxford. Http://www.natcorp.ox.ac.uk/.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extracting semantic representations from word cooccurrence statistics: A computational study",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Bullinaria",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
}
],
"year": 2007,
"venue": "Behavior Research Methods",
"volume": "",
"issue": "",
"pages": "510--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A. Bullinaria and Joseph P. Levy. 2007. Ex- tracting semantic representations from word co- occurrence statistics: A computational study. Behav- ior Research Methods, pages 510-526.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Experiments with LSA Scoring: Optimal Rank and Basis",
"authors": [
{
"first": "John",
"middle": [],
"last": "Caron",
"suffix": ""
}
],
"year": 2001,
"venue": "Society for Industrial and Applied Mathematics",
"volume": "",
"issue": "",
"pages": "157--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Caron. 2001. Experiments with LSA Scoring: Op- timal Rank and Basis, page 157-169. Society for In- dustrial and Applied Mathematics, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A neural network approach to selectional preference acquisition",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Van De Cruys",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "26--35",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 26- 35, Doha, Qatar. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A simple, similarity-based model for selectional preferences",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of the 45th",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Annual Meeting of the Association of Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association of Computational Linguistics, pages 216-223, Prague, Czech Repub- lic. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Resolving pronoun references",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1978,
"venue": "Lingua",
"volume": "44",
"issue": "4",
"pages": "311--338",
"other_ids": {
"DOI": [
"10.1016/0024-3841(78)90006-2"
]
},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs. 1978. Resolving pronoun references. Lingua, 44(4):311-338.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A singularly valuable decomposition: The svd of a matrix",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Kalman",
"suffix": ""
}
],
"year": 1996,
"venue": "The College Mathematics Journal",
"volume": "27",
"issue": "1",
"pages": "2--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Kalman. 1996. A singularly valuable decomposi- tion: The svd of a matrix. The College Mathematics Journal, 27(1):2-23.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using the web to obtain frequencies for unseen bigrams",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2003,
"venue": "Comput. Linguist",
"volume": "29",
"issue": "3",
"pages": "459--484",
"other_ids": {
"DOI": [
"10.1162/089120103322711604"
]
},
"num": null,
"urls": [],
"raw_text": "Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Comput. Linguist., 29(3):459-484.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Contrasting syntagmatic and paradigmatic relations: Insights from distributional semantic models",
"authors": [
{
"first": "Gabriella",
"middle": [],
"last": "Lapesa",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Evert",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)",
"volume": "",
"issue": "",
"pages": "160--170",
"other_ids": {
"DOI": [
"10.3115/v1/S14-1020"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriella Lapesa, Stefan Evert, and Sabine Schulte im Walde. 2014. Contrasting syntagmatic and paradig- matic relations: Insights from distributional seman- tic models. In Proceedings of the Third Joint Con- ference on Lexical and Computational Semantics (*SEM 2014), pages 160-170, Dublin, Ireland. As- sociation for Computational Linguistics and Dublin City University.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2050"
]
},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 302-308. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Pro- ceedings of the 27th International Conference on Neural Information Processing Systems -Volume 2, NIPS'14, pages 2177-2185, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"K"
],
"last": "Spivey-Knowlton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tanenhaus",
"suffix": ""
}
],
"year": 1998,
"venue": "Journal of Memory and Language",
"volume": "38",
"issue": "3",
"pages": "283--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken McRae, Michael J Spivey-Knowlton, and Michael K Tanenhaus. 1998. Modeling the influ- ence of thematic fit (and other constraints) in on-line sentence comprehension. Journal of Memory and Language, 38(3):283-312.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Commun. ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {
"DOI": [
"10.1145/219717.219748"
]
},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Combining syntax and thematic fit in a probabilistic model of sentence processing",
"authors": [
{
"first": "Ulrike",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"W"
],
"last": "Crocker",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 28th CogSci",
"volume": "",
"issue": "",
"pages": "657--662",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrike Pad\u00f3, Frank Keller, and Matthew W Crocker. 2006. Combining syntax and thematic fit in a prob- abilistic model of sentence processing. In Proceed- ings of the 28th CogSci, pages 657-662.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The computation of word associations: Comparing syntagmatic and paradigmatic approaches",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 2002,
"venue": "COLING 2002: The 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 2002. The computation of word asso- ciations: Comparing syntagmatic and paradigmatic approaches. In COLING 2002: The 19th Interna- tional Conference on Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Selectional constraints: An information-theoretic model and its computational realization",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "",
"issue": "",
"pages": "127--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1996. Selectional constraints: An information-theoretic model and its computational realization. Cognition, 61(1-2):127-159.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Selectional preference and sense disambiguation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1997,
"venue": "Tagging Text with Lexical Semantics: Why, What, and How",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1997. Selectional preference and sense disambiguation. In Tagging Text with Lexical Se- mantics: Why, What, and How?",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inducing a semantically annotated lexicon via em-based clustering",
"authors": [
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Detlef",
"middle": [],
"last": "Prescher",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, ACL '99",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {
"DOI": [
"10.3115/1034678.1034703"
]
},
"num": null,
"urls": [],
"raw_text": "Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Car- roll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via em-based clustering. In Pro- ceedings of the 37th Annual Meeting of the Asso- ciation for Computational Linguistics on Computa- tional Linguistics, ACL '99, page 104-111, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in highdimensional vector spaces",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Sahlgren. 2006. The Word-Space Model: Us- ing distributional analysis to represent syntagmatic and paradigmatic relations between words in high- dimensional vector spaces. Ph.D. thesis, Institutio- nen f\u00f6r lingvistik, Stockholm University.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cours de linguistique g\u00e9n\u00e9rale",
"authors": [
{
"first": "Ferdinand",
"middle": [],
"last": "de Saussure",
"suffix": ""
}
],
"year": 1916,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferdinand de Saussure. 1916. Cours de linguistique g\u00e9n\u00e9rale. Payot, Paris.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A vector model for syntagmatic and paradigmatic relatedness",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 1993,
"venue": "Making Sense of Words -Ninth Annual Conference of the UW Centre for the New OED and Text Re-search",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze and Jan Pedersen. 1993. A vector model for syntagmatic and paradigmatic relatedness. In Making Sense of Words -Ninth Annual Confer- ence of the UW Centre for the New OED and Text Re-search, pages 104-113.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Directional skip-gram: Explicitly distinguishing left and right context for word embeddings",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haisong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "175--180",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2028"
]
},
"num": null,
"urls": [],
"raw_text": "Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguish- ing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 2 (Short Papers), pages 175-180, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A preferential, pattern-seeking, semantics for natural language inference",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1975,
"venue": "Artif. Intell",
"volume": "6",
"issue": "",
"pages": "53--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Wilks. 1975. A preferential, pattern-seeking, se- mantics for natural language inference. Artif. Intell., 6:53-74.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "On the dimensionality of word embedding",
"authors": [
{
"first": "Zi",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Yuanyuan",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "887--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zi Yin and Yuanyuan Shen. 2018. On the dimension- ality of word embedding. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 887-898. Curran As- sociates, Inc.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Selectional preferences for semantic role classification",
"authors": [
{
"first": "Be\u00f1at",
"middle": [],
"last": "Zapirain",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "3",
"pages": "631--663",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00145"
]
},
"num": null,
"urls": [],
"raw_text": "Be\u00f1at Zapirain, Eneko Agirre, Llu\u00eds M\u00e0rquez, and Mi- hai Surdeanu. 2013. Selectional preferences for se- mantic role classification. Computational Linguis- tics, 39(3):631-663.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multiplex word embeddings for selectional preference acquisition",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiaxin",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Changlong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Wilfred",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5247--5256",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1528"
]
},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Jiaxin Bai, Yan Song, Kun Xu, Changlong Yu, Yangqiu Song, Wilfred Ng, and Dong Yu. 2019a. Multiplex word embeddings for se- lectional preference acquisition. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5247-5256, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SP-10K: A large-scale evaluation set for selectional preference acquisition",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hantian",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "722--731",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1071"
]
},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Hantian Ding, and Yangqiu Song. 2019b. SP-10K: A large-scale evaluation set for se- lectional preference acquisition. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 722-731, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Incorporating context and external knowledge for pronoun coreference resolution",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "872--881",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1093"
]
},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Yan Song, and Yangqiu Song. 2019c. Incorporating context and external knowl- edge for pronoun coreference resolution. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 872-881, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "\u2192 r j reflects the right context of word j and \u2190 l i reflects the left context of word i, similarity between \u2192 r j and \u2190 l i would reflect how often word j is found to the left of word i. Thus cosine similarity between \u2192 r j and \u2190 l i captures the association of word j to the left of word i, and the association of word i to the right of word j."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "where subscript h and d denotes head and dependent words respectively, and symbol '\u2022' denotes cosine similarity."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Average correlation of syntagmatic and paradigmatic models over various parameter combinations."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Average correlation (with standard deviation) in syntagmatic models that have the same parametervalue."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Average correlation of weighted and unweighted models with varying window sizes."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Window-size preferences of spvec for different relations."
},
"TABREF0": {
"num": null,
"content": "<table/>",
"text": "word associations car left: vintage, second-hand, oncoming, luxury, buying, toy, saloon, buy, mercedes... right: collided, sped, exploded, maker, skidded, swerved, belonging, makers, roared... eat left: want, wants, going, wanting, let, tend, ought, let's, allowed, prefer, supposed, able... right: salad, beans, soup, cakes, pork, peas, bacon, pasta, fresh, pie, biscuits... blue left: wore, vivid, dull, wear, luminous, wears, dazzling, plain, dim, dressed, dyed... right: scarf, stripe, livery, robe, beret, overalls, blazer, slacks, gloves... aggressive left: increasingly, extremely, equally, become, very, highly, particularly, becoming... right: behaviour, attitude, manner, response, towards, tactics, stance, attack, actions...",
"type_str": "table",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "Examples of word associations from syntagmatic embeddings.",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"text": "Samples from SP-10K dataset.",
"type_str": "table",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table/>",
"text": "Spearman's correlation for supervised and unsupervised models on the SP-10K dataset.",
"type_str": "table",
"html": null
}
}
}
}