{
"paper_id": "K17-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:07:59.972720Z"
},
"title": "Automatic Selection of Context Configurations for Improved Class-Specific Word Representations",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab, DTAL",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hebrew University of Jerusalem",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab, DTAL",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's \u03c1 correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) \u03c1 points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.",
"pdf_parse": {
"paper_id": "K17-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's \u03c1 correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) \u03c1 points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dense real-valued word representations (embeddings) have become ubiquitous in NLP, serving as invaluable features in a broad range of tasks (Turian et al., 2010; Collobert et al., 2011; Chen and Manning, 2014) . The omnipresent word2vec skip-gram model with negative sampling (SGNS) (Mikolov et al., 2013) is still considered a robust and effective choice for a word representation model, due to its simplicity, fast training, as well as its solid performance across semantic tasks (Baroni et al., 2014; Levy et al., 2015) . The original SGNS implementation learns word representations from local bag-of-words contexts (BOW). However, the underlying model is equally applicable with other context types (Levy and Goldberg, 2014a) .",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF43"
},
{
"start": 162,
"end": 185,
"text": "Collobert et al., 2011;",
"ref_id": null
},
{
"start": 186,
"end": 209,
"text": "Chen and Manning, 2014)",
"ref_id": "BIBREF6"
},
{
"start": 283,
"end": 305,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 482,
"end": 503,
"text": "(Baroni et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 504,
"end": 522,
"text": "Levy et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 703,
"end": 729,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work suggests that \"not all contexts are created equal\". For example, reaching beyond standard BOW contexts towards contexts based on dependency parses (Bansal et al., 2014; Melamud et al., 2016) or symmetric patterns (Schwartz et al., 2015 (Schwartz et al., , 2016 ) yields significant improvements in learning representations for particular word classes such as adjectives (A) and verbs (V). Moreover, Schwartz et al. (2016) demonstrated that a subset of dependency-based contexts which covers only coordination structures is particularly effective for SGNS training, both in terms of the quality of the induced representations and in the reduced training time of the model. Interestingly, they also demonstrated that despite the success with adjectives and verbs, BOW contexts are still the optimal choice when learning representations for nouns (N) .",
"cite_spans": [
{
"start": 159,
"end": 180,
"text": "(Bansal et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 181,
"end": 202,
"text": "Melamud et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 225,
"end": 247,
"text": "(Schwartz et al., 2015",
"ref_id": "BIBREF40"
},
{
"start": 248,
"end": 272,
"text": "(Schwartz et al., , 2016",
"ref_id": "BIBREF41"
},
{
"start": 411,
"end": 433,
"text": "Schwartz et al. (2016)",
"ref_id": "BIBREF41"
},
{
"start": 856,
"end": 859,
"text": "(N)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose a simple yet effective framework for selecting context configurations, which yields improved representations for verbs, adjectives, and nouns. We start with a definition of our context configuration space (Sect. 3.1). Our basic definition of a context refers to a single typed (or labeled) dependency link between words (e.g., the amod link or the dobj link). Our configuration space then naturally consists of all possible subsets of the set of labeled dependency links between words. We employ the universal dependencies (UD) scheme to make our framework applicable across languages. We then describe (Sect. 3.2) our adapted beam search algorithm that aims to select an optimal context configuration for a given word class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that SGNS requires different context configurations to produce improved results for each word class. For instance, our algorithm detects that the combination of amod and conj contexts is effective for adjective representation. Moreover, some contexts that boost representation learning for one word class (e.g., amod contexts for adjectives) may be uninformative when learning representations for another class (e.g., amod for verbs). By removing such dispensable contexts, we are able both to speed up the SGNS training and to improve representation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first experiment with the task of predicting similarity scores for the A/V/N portions of the benchmarking SimLex-999 evaluation set, running our algorithm in a standard SGNS experimental setup (Levy et al., 2015) . When training SGNS with our learned context configurations it outperforms SGNS trained with the best previously proposed context type for each word class: the improvements in Spearman's \u03c1 rank correlations are 6 (A), 6 (V), and 5 (N) points. We also show that by building context configurations we obtain improvements on the entire SimLex-999 (4 \u03c1 points over the best baseline). Interestingly, this context configuration is not the optimal configuration for any word class.",
"cite_spans": [
{
"start": 196,
"end": 215,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then demonstrate that our approach is robust by showing that transferring the optimal configurations learned in the above setup to three other setups yields improved performance. First, the above context configurations, learned with the SGNS training on the English Wikipedia corpus, have an even stronger impact on SimLex999 performance when SGNS is trained on a larger corpus. Second, the transferred configurations also result in competitive performance on the task of solving class-specific TOEFL questions. Finally, we transfer the learned context configurations across languages: these configurations improve the SGNS performance when trained with German or Italian corpora and evaluated on class-specific subsets of the multilingual SimLex-999 (Leviant and Reichart, 2015) , without any language-specific tuning.",
"cite_spans": [
{
"start": 754,
"end": 782,
"text": "(Leviant and Reichart, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word representation models typically train on (word, context) pairs. Traditionally, most models use bag-of-words (BOW) contexts, which represent a word using its neighbouring words, irrespective of the syntactic or semantic relations between them (Collobert et al., 2011; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Pennington et al., 2014, inter alia) . Several alternative context types have been proposed, motivated by the limitations of BOW contexts, most notably their focus on topical rather than functional similarity (e.g., coffee:cup vs. coffee:tea). These include dependency contexts (Pad\u00f3 and Lapata, 2007; Levy and Goldberg, 2014a) , pattern contexts (Baroni et al., 2010; Schwartz et al., 2015) and substitute vectors (Yatbaz et al., 2012; Melamud et al., 2015) .",
"cite_spans": [
{
"start": 247,
"end": 271,
"text": "(Collobert et al., 2011;",
"ref_id": null
},
{
"start": 272,
"end": 293,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF31"
},
{
"start": 294,
"end": 321,
"text": "Mnih and Kavukcuoglu, 2013;",
"ref_id": "BIBREF32"
},
{
"start": 322,
"end": 358,
"text": "Pennington et al., 2014, inter alia)",
"ref_id": null
},
{
"start": 600,
"end": 623,
"text": "(Pad\u00f3 and Lapata, 2007;",
"ref_id": "BIBREF35"
},
{
"start": 624,
"end": 649,
"text": "Levy and Goldberg, 2014a)",
"ref_id": "BIBREF20"
},
{
"start": 669,
"end": 690,
"text": "(Baroni et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 691,
"end": 713,
"text": "Schwartz et al., 2015)",
"ref_id": "BIBREF40"
},
{
"start": 737,
"end": 758,
"text": "(Yatbaz et al., 2012;",
"ref_id": "BIBREF48"
},
{
"start": 759,
"end": 780,
"text": "Melamud et al., 2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several recent studies examined the effect of context types on word representation learning. Melamud et al. (2016) compared three context types on a set of intrinsic and extrinsic evaluation setups: BOW, dependency links, and substitute vectors. They show that the optimal type largely depends on the task at hand, with dependency-based contexts displaying strong performance on semantic similarity tasks. Vuli\u0107 and Korhonen (2016) extended the comparison to more languages, reaching similar conclusions. Schwartz et al. (2016) , showed that symmetric patterns are useful as contexts for V and A similarity, while BOW still works best for nouns. They also indicated that coordination structures, a particular dependency link, are more useful for verbs and adjectives than the entire set of dependencies. In this work, we generalise their approach: our algorithm systematically and efficiently searches the space of dependency-based context configurations, yielding class-specific representations with substantial gains for all three word classes.",
"cite_spans": [
{
"start": 93,
"end": 114,
"text": "Melamud et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 406,
"end": 431,
"text": "Vuli\u0107 and Korhonen (2016)",
"ref_id": "BIBREF46"
},
{
"start": 505,
"end": 527,
"text": "Schwartz et al. (2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous attempts on specialising word representations for a particular relation (e.g., similarity vs relatedness, antonyms) operate in one of two frameworks: (1) modifying the prior or the regularisation of the original training procedure (Yu and Dredze, 2014; Wieting et al., 2015; Liu et al., 2015; Kiela et al., 2015; Ling et al., 2015b) ; (2) post-processing procedures which use lexical knowledge to refine previously trained word vectors (Faruqui et al., 2015; Wieting et al., 2015; Mrk\u0161i\u0107 et al., 2017) . Our work suggests that the induced representations can be specialised by directly training the word representation model with carefully selected contexts.",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Yu and Dredze, 2014;",
"ref_id": "BIBREF49"
},
{
"start": 262,
"end": 283,
"text": "Wieting et al., 2015;",
"ref_id": "BIBREF47"
},
{
"start": 284,
"end": 301,
"text": "Liu et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 302,
"end": 321,
"text": "Kiela et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 322,
"end": 341,
"text": "Ling et al., 2015b)",
"ref_id": "BIBREF25"
},
{
"start": 445,
"end": 467,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 468,
"end": 489,
"text": "Wieting et al., 2015;",
"ref_id": "BIBREF47"
},
{
"start": 490,
"end": 510,
"text": "Mrk\u0161i\u0107 et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The goal of our work is to develop a methodology for the identification of optimal context configura- Top: An example English sentence from (Levy and Goldberg, 2014a) , now UD-parsed. Middle: the same sentence in Italian, UD-parsed. Note the similarity between the two parses which suggests that our context selection framework may be extended to other languages. Bottom: prepositional arc collapsing. The uninformative short-range case arc is removed, while a \"pseudo-arc\" specifying the exact link (prep:with) between discovers and telescope is added.",
"cite_spans": [
{
"start": 140,
"end": 166,
"text": "(Levy and Goldberg, 2014a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Selection: Methodology",
"sec_num": "3"
},
{
"text": "tions for word representation model training. We hope to get improved word representations and, at the same time, cut down the training time of the word representation model. Fundamentally, we are not trying to design a new word representation model, but rather to find valuable configurations for existing algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Selection: Methodology",
"sec_num": "3"
},
{
"text": "The motivation to search for such training context configurations lies in the intuition that the distributional hypothesis (Harris, 1954) should not necessarily be made with respect to BOW contexts. Instead, it may be restated as a series of statements according to particular word relations. For example, the hypothesis can be restated as: \"two adjectives are similar if they modify similar nouns\", which is captured by the amod typed dependency relation. This could also be reversed to reflect noun similarity by saying that \"two nouns are similar if they are modified by similar adjectives\". In another example, \"two verbs are similar if they are used as predicates of similar nominal subjects\" (the nsubj and nsubjpass dependency relations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Selection: Methodology",
"sec_num": "3"
},
{
"text": "First, we have to define an expressive context configuration space that contains potential training configurations and is effectively decomposed so that useful configurations may be sought algorithmically. We can then continue by designing a search algorithm over the configuration space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Selection: Methodology",
"sec_num": "3"
},
{
"text": "We focus on the configuration space based on dependency-based contexts (DEPS) (Pad\u00f3 and Lapata, 2007; Utt and Pad\u00f3, 2014) . We choose this space due to multiple reasons. First, dependency structures are known to be very useful in capturing functional relations between words, even if these relations are long distance. Second, they have been proven useful in learning word embeddings (Levy and Goldberg, 2014a; Melamud et al., 2016) . Finally, owing to the recent development of the Universal Dependencies (UD) annotation scheme (McDonald et al., 2013; Nivre et al., 2016) 1 it is possible to reason over dependency structures in a multilingual manner (e.g., Fig. 1 ). Consequently, a search algorithm in such DEPS-based configuration space can be developed for multiple languages based on the same design principles. Indeed, in this work we show that the optimal configurations for English translate to improved representations in two additional languages, German and Italian.",
"cite_spans": [
{
"start": 78,
"end": 101,
"text": "(Pad\u00f3 and Lapata, 2007;",
"ref_id": "BIBREF35"
},
{
"start": 102,
"end": 121,
"text": "Utt and Pad\u00f3, 2014)",
"ref_id": "BIBREF44"
},
{
"start": 384,
"end": 410,
"text": "(Levy and Goldberg, 2014a;",
"ref_id": "BIBREF20"
},
{
"start": 411,
"end": 432,
"text": "Melamud et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 529,
"end": 552,
"text": "(McDonald et al., 2013;",
"ref_id": "BIBREF28"
},
{
"start": 553,
"end": 572,
"text": "Nivre et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 659,
"end": 665,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "And so, given a (UD-)parsed training corpus, for each target word w with modifiers m 1 , . . . , m k and a head h, the word w is paired with context elements",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "m 1 _r 1 , . . . , m k _r k , h_r \u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "h , where r is the type of the dependency relation between the head and the modifier (e.g., amod), and r \u22121 denotes an inverse relation. To simplify the presentation, we adopt the assumption that all training data for the word representation model are in the form of such (word, context) pairs (Levy and Goldberg, 2014a,c) , where word is the current target word, and context is its observed context (e.g., BOW, positional, dependency-based). A naive version of DEPS extracts contexts from the parsed corpus without any post-processing. Given the example from Fig. 1 , the DEPS contexts of discovers are: scientist_nsubj, stars_dobj, telescope_nmod.",
"cite_spans": [
{
"start": 294,
"end": 322,
"text": "(Levy and Goldberg, 2014a,c)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 560,
"end": 566,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "DEPS not only emphasises functional similarity, but also provides a natural implicit grouping of related contexts. For instance, all pairs with the shared relation r and r \u22121 are taken as an rbased context bag, e.g., the pairs {(scientist, Aus-tralian_amod), (Australian, scientist_amod \u22121 )} from Fig. 1 are inserted into the amod context bag, while {(discovers, stars_dobj), (stars, discovers_dobj \u22121 )} are labelled with dobj.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 304,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
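{
"text": "To make the bag construction concrete, here is a minimal Python sketch (ours, not the authors' code; the toy (token, head_index, relation) parse encoding is an assumption for illustration) that extracts (word, context) pairs from the parsed sentence of Fig. 1 and groups them into r-labelled bags:\n\nfrom collections import defaultdict\n\n# 'Australian scientist discovers stars with telescope', with the prep:with\n# pseudo-arc from the collapsing step; head index -1 marks the root.\nparse = [('Australian', 1, 'amod'), ('scientist', 2, 'nsubj'),\n         ('discovers', -1, 'root'), ('stars', 2, 'dobj'),\n         ('telescope', 2, 'prep:with')]\n\ndef extract_pairs(parse):\n    # Pair each modifier m with its head h: (h, m_r) and (m, h_r^-1).\n    for word, head, rel in parse:\n        if head < 0:\n            continue\n        head_word = parse[head][0]\n        yield head_word, word + '_' + rel\n        yield word, head_word + '_' + rel + '-1'\n\nbags = defaultdict(list)\nfor w, ctx in extract_pairs(parse):\n    rel = ctx.split('_', 1)[1]\n    if rel.endswith('-1'):\n        rel = rel[:-2]\n    bags[rel.split(':')[0]].append((w, ctx))  # all prep:X pairs share the prep bag\n\nprint(bags['amod'])\n# [('scientist', 'Australian_amod'), ('Australian', 'scientist_amod-1')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},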
{
"text": "Assume that we have obtained M distinct dependency relations r 1 , . . . , r M after parsing and postprocessing the corpus. The j-th individual context",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "ri + rj + rk + rl ri + rj + rk ri + rj + rl ri + rk + rl rj + rk + rl ri + rj ri + rk rj + rk ri + rl rj + rl rk + rl ri rj rk rl E(R P ool \u00acri ) > E(R P ool ) E(R P ool ) E(R P ool \u00acrl ) < E(R P ool ) E(R P ool \u00acri\u00acrj ) < E(R P ool \u00acri )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "Figure 2: An illustration of Alg. 1. The search space is presented as a DAG with direct links between origin configurations (e.g., r i + r j + r k ) and all its children configurations obtained by removing exactly one individual bag from the origin (e.g., r i + r j , r j + r k ). After automatically constructing the initial pool (line 1), the entry point of the algorithm is the R P ool configuration (line 2). Thicker blue circles denote visited configurations, while the gray circle denotes the best configuration found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "bag, j = 1, . . . , M , labelled r j , is a bag (or a multiset) of (word, context) pairs where context has one of the following forms: v_r j or v_r \u22121 j , where v is some vocabulary word. A context configuration is then simply a set of individual context bags, e.g., R = {r i , r j , r k }, also labelled as R:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "r i + r j + r k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "We call a configuration consisting of K individual context bags a K-set configuration (e.g., in this example, R is a 3-set configuration). 2 Although a brute-force exhaustive search over all possible configurations is possible in theory and for small pools (e.g., for adjectives, see Tab. 2), it becomes challenging or practically infeasible for large pools and large training data. For instance, based on the pool from Tab. 2, the search for the optimal configuration would involve trying out 2 10 \u22121 = 1023 configurations for nouns (i.e., training 1023 different word representation models). Therefore, to reduce the number of visited configurations, we present a simple heuristic search algorithm inspired by beam search (Pearl, 1984 ).",
"cite_spans": [
{
"start": 139,
"end": 140,
"text": "2",
"ref_id": null
},
{
"start": 724,
"end": 736,
"text": "(Pearl, 1984",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "2 A note on the nomenclature and notation: Each context configuration may be seen as a set of context bags, as it does not allow for repetition of its constituent context bags. For simplicity and clarity of presentation, we use dependency relation types (e.g., ri = amod, rj = acl) as labels for context bags. The reader has to be aware that a configuration R = {ri, rj, r k } is not by any means a set of relation types/names, but is in fact a multiset of all (word, context) pairs belonging to the corresponding context bags labelled with ri, rj, r k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Configuration Space",
"sec_num": "3.1"
},
{
"text": "Algorithm 1: Best Configuration Search\nInput: set of M individual context bags S = {r_1, r_2, . . . , r_M}\n1 build: pool of those K \u2264 M candidate individual context bags {r_1, . . . , r_K} for which E(r_i) >= threshold, i \u2208 {1, . . . , M}, where E(\u2022) is a fitness function ;\n2 build: K-set configuration R_Pool = {r_1, . . . , r_K} ;\n3 initialize: (1) set of candidate configurations R = {R_Pool} ; (2) current level l = K ; (3) best configuration R_o = \u2205 ;\n4 search:\n5 repeat\n6   R_n \u2190 \u2205 ;\n7   R_o \u2190 arg max_{R \u2208 R \u222a {R_o}} E(R) ;\n8   foreach R \u2208 R do\n9     foreach r_i \u2208 R do\n10      build new (l \u2212 1)-set context configuration R_\u00acr_i = R \u2212 {r_i} ;\n11      if E(R_\u00acr_i) \u2265 E(R) then\n12        R_n \u2190 R_n \u222a {R_\u00acr_i} ;\n13  l \u2190 l \u2212 1 ;\n14  R \u2190 R_n ;\n15 until l == 0 or R == \u2205;\nOutput: best configuration R_o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Best Configuration Search",
"sec_num": null
},
{
"text": "Alg. 1 provides a high-level overview of the algorithm. An example of its flow is given in Fig. 2 . Starting from S, the set of all possible M individual context bags, the algorithm automatically detects the subset S K \u2286 S, |S K | = K, of candidate individual bags that are used as the initial pool (line 1 of Alg. 1). The selection is based on some fitness (goal) function E. In our setup, E(R) is Spearman's \u03c1 correlation with human judgment scores obtained on the development set after training the word representation model with the configuration R. The selection step relies on a simple threshold: we use a threshold of \u03c1 \u2265 0.2 without any finetuning in all experiments with all word classes. We find this step to facilitate efficiency at a minor cost for accuracy. For example, since amod denotes an adjectival modifier of a noun, an efficient search procedure may safely remove this bag from the pool of candidate bags for verbs.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 97,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Class-Specific Configuration Search",
"sec_num": "3.2"
},
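{
"text": "A compact Python rendering of Alg. 1 follows (a sketch under our reading of the pseudocode, not the authors' released code; evaluate stands in for the fitness function E, i.e., Spearman's \u03c1 on the development set after SGNS training with a given configuration):\n\ndef best_configuration_search(bags, evaluate, threshold=0.2):\n    # Line 1: keep individual bags whose 1-set fitness clears the threshold.\n    pool = frozenset(r for r in bags if evaluate(frozenset([r])) >= threshold)\n    best, best_score = pool, evaluate(pool)  # entry point: R_Pool (lines 2-3)\n    frontier = {pool}\n    while frontier:  # lines 5-15\n        next_frontier = set()\n        for config in frontier:\n            base = evaluate(config)\n            if base > best_score:  # line 7: running argmax over visited configs\n                best, best_score = config, base\n            for r in config:  # lines 9-12: drop exactly one bag at a time\n                child = config - {r}\n                if child and evaluate(child) >= base:\n                    next_frontier.add(child)\n        frontier = next_frontier\n    return best, best_score\n\nSince E is expensive (each call trains an SGNS model), a practical run would memoize evaluate; the set-based frontier also deduplicates configurations reachable from several parents (cf. Fig. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class-Specific Configuration Search",
"sec_num": "3.2"
},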
{
"text": "The search algorithm then starts from the full K-set R P ool configuration (line 3) and tests K (K \u2212 1)-set configurations where exactly one individual bag r i is removed to generate each such configuration (line 10). It then retains only the set of configurations that score higher than the origin K-set configuration (lines 11-12, see Fig. 2 ). Using this principle, it continues searching only over lower-level (l \u2212 1)-set configurations that further improve performance over their l-set origin configuration. It stops if it reaches the lowest level or if it cannot improve the goal function any more (line 15). The best scoring configuration is returned (n.b., not guaranteed to be the global optimum).",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 343,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Class-Specific Configuration Search",
"sec_num": "3.2"
},
{
"text": "In our experiments with this heuristic, the search for the optimal configuration for verbs is performed only over 13 1-set configurations plus 26 other configurations (39 out of 133 possible configurations). 3 For nouns, the advantage of the heuristic is even more dramatic: only 104 out of 1026 possible configurations were considered during the search. 4",
"cite_spans": [
{
"start": 208,
"end": 209,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class-Specific Configuration Search",
"sec_num": "3.2"
},
{
"text": "4 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class-Specific Configuration Search",
"sec_num": "3.2"
},
{
"text": "Word Representation Model We experiment with SGNS (Mikolov et al., 2013) , the standard and very robust choice in vector space modeling (Levy et al., 2015) . In all experiments we use word2vecf, a reimplementation of word2vec able to learn from arbitrary (word, context) pairs. 5 For details concerning the implementation, we refer the reader to (Goldberg and Levy, 2014; Levy and Goldberg, 2014a) .",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 136,
"end": 155,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 346,
"end": 371,
"text": "(Goldberg and Levy, 2014;",
"ref_id": "BIBREF13"
},
{
"start": 372,
"end": 397,
"text": "Levy and Goldberg, 2014a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
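{
"text": "word2vecf trains on a plain-text file of (word, context) pairs, one pair per line, with word and context separated by whitespace. A minimal sketch of writing such an input file (the file name and the pair list are illustrative assumptions):\n\npairs = [('discovers', 'scientist_nsubj'), ('discovers', 'stars_dobj')]\nwith open('dep.contexts', 'w', encoding='utf8') as f:  # hypothetical file name\n    for word, ctx in pairs:\n        print(word, ctx, file=f)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},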
{
"text": "The SGNS preprocessing scheme was replicated from (Levy and Goldberg, 2014a; Levy et al., 2015) . After lowercasing, all words and contexts that appeared less than 100 times were filtered. When considering all dependency types, the vocabulary spans approximately 185K word types. 6 Further, all representations were trained with d = 300 (very similar trends are observed with d = 100, 500).",
"cite_spans": [
{
"start": 50,
"end": 76,
"text": "(Levy and Goldberg, 2014a;",
"ref_id": "BIBREF20"
},
{
"start": 77,
"end": 95,
"text": "Levy et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 280,
"end": 281,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
{
"text": "The same setup was used in prior work (Schwartz et al., 2016; Vuli\u0107 and Korhonen, 2016) . Keeping the representation model fixed across experiments and varying only the context type allows us to attribute any differences in results to a sole factor: the context type. We plan to experiment with other representation models in future work.",
"cite_spans": [
{
"start": 38,
"end": 61,
"text": "(Schwartz et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 62,
"end": 87,
"text": "Vuli\u0107 and Korhonen, 2016)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
{
"text": "Universal Dependencies as Labels The adopted UD scheme leans on the universal Stanford dependencies (de Marneffe et al., 2014) complemented with the universal POS tagset (Petrov et al., 2012) . It is straightforward to \"translate\" previous annotation schemes to UD (de Marneffe et al., 2014) . Providing a consistently annotated inventory of categories for similar syntactic constructions across languages, the UD scheme facilitates representation learning in languages other than English, as shown in (Vuli\u0107 and Korhonen, 2016; Vuli\u0107, 2017) .",
"cite_spans": [
{
"start": 100,
"end": 126,
"text": "(de Marneffe et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 170,
"end": 191,
"text": "(Petrov et al., 2012)",
"ref_id": "BIBREF39"
},
{
"start": 262,
"end": 291,
"text": "UD (de Marneffe et al., 2014)",
"ref_id": null
},
{
"start": 502,
"end": 528,
"text": "(Vuli\u0107 and Korhonen, 2016;",
"ref_id": "BIBREF46"
},
{
"start": 529,
"end": 541,
"text": "Vuli\u0107, 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
{
"text": "Individual Context Bags Standard post-parsing steps are performed in order to obtain an initial list of individual context bags for our algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
{
"text": "(1) Prepositional arcs are collapsed ( (Levy and Goldberg, 2014a; Vuli\u0107 and Korhonen, 2016) , see Fig. 1 ). Following this procedure, all pairs where the relation r has the form prep:X (where X is a preposition) are subsumed to a context bag labelled prep;",
"cite_spans": [
{
"start": 39,
"end": 65,
"text": "(Levy and Goldberg, 2014a;",
"ref_id": "BIBREF20"
},
{
"start": 66,
"end": 91,
"text": "Vuli\u0107 and Korhonen, 2016)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [
{
"start": 98,
"end": 104,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
{
"text": "(2) Similar labels are merged into a single label (e.g., direct (dobj) and indirect objects (iobj) are merged into obj); (3) Pairs with infrequent and uninformative labels are removed (e.g., punct, goeswith, cc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
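{
"text": "A minimal sketch of the three post-parsing steps above (the merge table only mirrors the examples given in the text, e.g., dobj/iobj into obj; it is not the authors' full mapping):\n\nMERGE = {'dobj': 'obj', 'iobj': 'obj'}  # step (2): merge similar labels\nDROP = {'punct', 'goeswith', 'cc'}      # step (3): uninformative labels\n\ndef normalise_label(rel):\n    if rel.startswith('prep:'):         # step (1): collapsed arcs -> prep bag\n        return 'prep'\n    if rel in DROP:\n        return None                     # the pair is discarded\n    return MERGE.get(rel, rel)\n\nprint(normalise_label('prep:with'), normalise_label('iobj'), normalise_label('cc'))\n# prep obj None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},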
{
"text": "Coordination-based contexts are extracted as in prior work (Schwartz et al., 2016) , distinguishing between left and right contexts extracted from the conj relation; the label for this bag is conjlr. We also utilise the variant that does not make the distinction, labeled conjll. If both are used, the label is simply conj=conjlr+conjll. 7 Consequently, the individual context bags we use in all experiments are: subj, obj, comp, nummod, appos, nmod, acl, amod, prep, adv, compound, conjlr, conjll.",
"cite_spans": [
{
"start": 59,
"end": 82,
"text": "(Schwartz et al., 2016)",
"ref_id": "BIBREF41"
},
{
"start": 338,
"end": 339,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
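{
"text": "Following footnote 7, a small sketch of the conjlr vs. conjll distinction for a coordination structure (illustrative; it assumes the left and right conjunct words have already been identified):\n\ndef conj_pairs(left, right, variant='conjlr'):\n    if variant == 'conjlr':  # direction-aware: inverse label on the right conjunct\n        return [(left, right + '_conj'), (right, left + '_conj-1')]\n    return [(left, right + '_conj'), (right, left + '_conj')]  # conjll\n\nprint(conj_pairs('boys', 'girls'))\n# [('boys', 'girls_conj'), ('girls', 'boys_conj-1')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},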
{
"text": "We run the algorithm for context configuration selection only once, with the SGNS training setup described below. Our main evaluation setup is presented below, but the learned configurations are tested in additional setups, detailed in Sect. 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "4.2"
},
{
"text": "Training Data Our training corpus is the cleaned and tokenised English Polyglot Wikipedia data (Al-Rfou et al., 2013), 8 consisting of approxi-mately 75M sentences and 1.7B word tokens. The Wikipedia data were POS-tagged with universal POS (UPOS) tags (Petrov et al., 2012) using the state-of-the art TurboTagger (Martins et al., 2013). 9 The parser was trained using default settings (SVM MIRA with 20 iterations, no further parameter tuning) on the TRAIN+DEV portion of the UD treebank annotated with UPOS tags. The data were then parsed with UD using the graph-based Mate parser v3.61 (Bohnet, 2010) 10 with standard settings on TRAIN+DEV of the UD treebank.",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "(Petrov et al., 2012)",
"ref_id": "BIBREF39"
},
{
"start": 337,
"end": 338,
"text": "9",
"ref_id": null
},
{
"start": 588,
"end": 602,
"text": "(Bohnet, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "4.2"
},
{
"text": "Evaluation We experiment with the verb pair (222 pairs), adjective pair (111 pairs), and noun pair (666 pairs) portions of SimLex-999. We report Spearman's \u03c1 correlation between the ranks derived from the scores of the evaluated models and the human scores. Our evaluation setup is borrowed from Levy et al. 2015: we perform 2-fold cross-validation, where the context configurations are optimised on a development set, separate from the unseen test data. Unless stated otherwise, the reported scores are always the averages of the 2 runs, computed in the standard fashion by applying the cosine similarity to the vectors of words participating in a pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "4.2"
},
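{
"text": "The scoring step, sketched in Python (a sketch, assuming a vectors dict that maps words to numpy arrays; scipy provides Spearman's \u03c1):\n\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef cosine(u, v):\n    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef evaluate_pairs(vectors, pairs_with_gold):\n    # pairs_with_gold: list of (w1, w2, human_score) tuples from SimLex-999.\n    model, gold = [], []\n    for w1, w2, score in pairs_with_gold:\n        if w1 in vectors and w2 in vectors:\n            model.append(cosine(vectors[w1], vectors[w2]))\n            gold.append(score)\n    return spearmanr(model, gold)[0]  # Spearman's rho",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "4.2"
},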
{
"text": "Baseline Context Types We compare the context configurations found by Alg. 1 against baseline contexts from prior work: -BOW: Standard bag-of-words contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "-POSIT: Positional contexts (Sch\u00fctze, 1993; Levy and Goldberg, 2014b; Ling et al., 2015a) , which enrich BOW with information on the sequential position of each context word. Given the example from Fig. 1 , POSIT with the window size 2 extracts the following contexts for discovers: Australian_-2, scientist_-1, stars_+2, with_+1.",
"cite_spans": [
{
"start": 28,
"end": 43,
"text": "(Sch\u00fctze, 1993;",
"ref_id": null
},
{
"start": 44,
"end": 69,
"text": "Levy and Goldberg, 2014b;",
"ref_id": "BIBREF21"
},
{
"start": 70,
"end": 89,
"text": "Ling et al., 2015a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 198,
"end": 204,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
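{
"text": "A sketch contrasting BOW and POSIT extraction for the sentence of Fig. 1 (ours, for illustration; sentence-boundary handling is simplified):\n\ndef bow_contexts(tokens, i, window=2):\n    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)\n    return [tokens[j] for j in range(lo, hi) if j != i]\n\ndef posit_contexts(tokens, i, window=2):\n    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)\n    return [tokens[j] + '_' + format(j - i, '+d') for j in range(lo, hi) if j != i]\n\nsent = ['Australian', 'scientist', 'discovers', 'stars', 'with', 'telescope']\nprint(posit_contexts(sent, 2))\n# ['Australian_-2', 'scientist_-1', 'stars_+1', 'with_+2']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},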
{
"text": "-DEPS-All: All dependency links without any context selection, extracted from dependency-parsed data with prepositional arc collapsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "-COORD: Coordination-based contexts are used as fast lightweight contexts for improved representations of adjectives and verbs (Schwartz et al., 2016) . This is in fact the conjlr context bag, a subset of DEPS-All.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Schwartz et al., 2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "-SP: Contexts based on symmetric patterns (SPs, (Davidov and Rappoport, 2006; Schwartz et al., 2015) ). For example, if the word X and the word 9 http://www.cs.cmu.edu/~ark/TurboParser/ 10 https://code.google.com/archive/p/mate-tools/ Y appear in the lexico-syntactic symmetric pattern \"X or Y\" in the SGNS training corpus, then Y is an SP context instance for X, and vice versa.",
"cite_spans": [
{
"start": 48,
"end": 77,
"text": "(Davidov and Rappoport, 2006;",
"ref_id": "BIBREF9"
},
{
"start": 78,
"end": 100,
"text": "Schwartz et al., 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "The development set was used to tune the window size for BOW and POSIT (to 2) and the parameters of the SP extraction algorithm. 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Baseline Greedy Search Algorithm We also compare our search algorithm to its greedy variant: at each iteration of lines 8-12 in Alg. 1, R n now keeps only the best configuration of size l \u2212 1 that perform better than the initial configuration of size l, instead of all such configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
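{
"text": "Relative to the search sketch in Sect. 3.2, the greedy variant changes only the frontier update (again a sketch, not the authors' code):\n\ndef greedy_frontier(frontier, evaluate):\n    # Keep only the single best-scoring improving child across the whole level.\n    improving = [config - {r}\n                 for config in frontier\n                 for r in config\n                 if len(config) > 1 and evaluate(config - {r}) >= evaluate(config)]\n    return {max(improving, key=evaluate)} if improving else set()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},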
{
"text": "Not All Context Bags are Created Equal First, we test the performance of individual context bags across SimLex-999 adjective, verb, and noun subsets. Besides providing insight on the intuition behind context selection, these findings are important for the automatic selection of class-specific pools (line 1 of Alg. 1). The results are shown in Tab. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Evaluation Setup",
"sec_num": "5.1"
},
{
"text": "The experiment supports our intuition (see Sect. 3.2): some context bags are definitely not useful for some classes and may be safely removed Table 3 : Results on the SimLex-999 test data over (a) verbs and (b) nouns subsets. Only a selection of context configurations optimised for verb and noun similarity are shown. POOL-ALL denotes a configuration where all individual context bags from the verbs/nouns-oriented pools (see Table 2 ) are used. BEST denotes the best performing configuration found by Alg. 1. Other configurations visited by Alg. 1 that score higher than the best scoring baseline context type for each word class are in gray. Scores obtained using a greedy search algorithm instead of Alg. 1 are in italic, marked with a cross ( \u2020). Table 4 : Results on the SimLex-999 adjectives subset with adjective-specific configurations.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 3",
"ref_id": null
},
{
"start": 427,
"end": 434,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 752,
"end": 759,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Evaluation Setup",
"sec_num": "5.1"
},
{
"text": "when performing the class-specific SGNS training. For instance, the amod bag is indeed important for adjective and noun similarity, and at the same time it does not encode any useful information regarding verb similarity. compound is, as expected, useful only for nouns. Tab. 1 also suggests that some context bags (e.g., nummod) do not encode any informative contextual evidence regarding similarity, therefore they can be discarded. The initial results with individual context bags help to reduce the pool of candidate bags (line 1 in Alg. 1), see Tab. 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Evaluation Setup",
"sec_num": "5.1"
},
{
"text": "Searching for Improved Configurations Next, we test if we can improve class-specific representations by selecting class-specific configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Evaluation Setup",
"sec_num": "5.1"
},
{
"text": "Results are summarised in Tables 3 and 4 . Indeed, class-specific configurations yield better representations, as is evident from the scores: the improve-ments with the best class-specific configurations found by Alg. 1 are approximately 6 \u03c1 points for adjectives, 6 points for verbs, and 5 points for nouns over the best baseline for each class. The improvements are visible even with configurations that simply pool all candidate individual bags (POOL-ALL), without running Alg. 1 beyond line 1. However, further careful context selection, i.e., traversing the configuration space using Alg. 1 leads to additional improvements for V and N (gains of 3 and 2.2 \u03c1 points). Very similar improved scores are achieved with a variety of configurations (see Tab. 3), especially in the neighbourhood of the best configuration found by Alg. 1. This indicates that the method is quite robust: even sub-optimal 12 solutions result in improved class-specific representations. Furthermore, our algorithm is able to find better configurations for verbs and nouns compared to its greedy variant. Finally, our algorithm generalises well: the best scoring configuration on the dev set is always the best one on the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 40,
"text": "Tables 3 and 4",
"ref_id": null
},
{
"start": 752,
"end": 756,
"text": "Tab.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Evaluation Setup",
"sec_num": "5.1"
},
{
"text": "Training: Fast and/or Accurate? Carefully selected configurations are also likely to reduce SGNS training times. Indeed, the configurationbased model trains on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts. The training times and statistics for each context type are displayed in Tab. 5. All models were trained using parallel training on 10 Intel(R) Xeon(R) E5-2667 2.90GHz processors. The results indicate that class-specific configurations are not as lightweight and fast as SP or COORD contexts (Schwartz et al., 2016) . However, they also suggest that such configurations provide a good balance between accuracy and speed: they reach peak performances for each class, outscoring all baseline context types (including SP and COORD), while training is still much faster than with \"heavyweight\" context types such as BOW, POSIT or DEPS-All. Now that we verified the decrease in training time our algorithm provides for the final training, it makes sense to ask whether the configurations it finds are valuable in other setups. This will make the fast training of practical importance.",
"cite_spans": [
{
"start": 525,
"end": 548,
"text": "(Schwartz et al., 2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Evaluation Setup",
"sec_num": "5.1"
},
{
"text": "Another Training Setup We first test whether the context configurations learned in Sect. 5.1 are useful when SGNS is trained in another English setup (Schwartz et al., 2016) , with more training data and other annotation and parser choices, while evaluation is still performed on SimLex-999.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Schwartz et al., 2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalisation: Configuration Transfer",
"sec_num": "5.2"
},
{
"text": "In this setup the training corpus is the 8B words corpus generated by the word2vec script. 13 A preprocessing step now merges common word pairs and triplets to expression tokens (e.g., Bilbo_Baggins). The corpus is parsed with labelled Stanford dependencies (de Marneffe and Manning, 2008) using the Stanford POS Tagger (Toutanova et al., 2003) and the stack version of the MALT parser (Goldberg and Nivre, 2012) . SGNS preprocessing and parameters are also replicated; we now 13 code.google.com/p/word2vec/source/browse/trunk/ Table 6 : Results on the A/V/N SimLex-999 subsets, and on the entire set (All) in the setup from Schwartz et al. (2016) . d = 500. BEST-* are again the best class-specific configs returned by Alg. 1.",
"cite_spans": [
{
"start": 320,
"end": 344,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF42"
},
{
"start": 386,
"end": 412,
"text": "(Goldberg and Nivre, 2012)",
"ref_id": "BIBREF14"
},
{
"start": 625,
"end": 647,
"text": "Schwartz et al. (2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 528,
"end": 535,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalisation: Configuration Transfer",
"sec_num": "5.2"
},
{
"text": "train 500-dim embeddings as in prior work. 14 Results are presented in Tab. 6. The imported class-specific configurations, computed using a much smaller corpus (Sect. 5.1), again outperform competitive baseline context types for adjectives and nouns. The BEST-VERBS configuration is outscored by SP, but the margin is negligible. We also evaluate another configuration found using Alg. 1 in Sect. 5.1, which targets the overall improved performance without any finer-grained division to classes (BEST-ALL). This configuration (amod+subj+obj+compound+prep+adv+conj) outperforms all baseline models on the entire benchmark. Interestingly, the non-specific BEST-ALL configuration falls short of A/V/N-specific configurations for each class. This unambiguously implies that the \"trade-off\" configuration targeting all three classes at the same time differs from specialised class-specific configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalisation: Configuration Transfer",
"sec_num": "5.2"
},
{
"text": "We next test whether the optimal context configurations computed in Sect. 5.1 with English training data are also useful for other languages. For this, we train SGNS models on the Italian (IT) and German (DE) Polyglot Wikipedia corpora with those configurations, and evaluate on the IT and DE multilingual SimLex-999 (Leviant and Reichart, 2015) . 15 Our results demonstrate similar patterns as for English, and indicate that our framework can be easily applied to other languages. For instance, the BEST-ADJ configuration (the same configuration as in Tab. 4 and Tab. 7) yields an improvement of 8 Table 7 : Results on the A/V/N TOEFL question subsets. The reported scores are in the following form: correct_answers/overall_questions. Adj-Q refers to the subset of TOEFL questions targeting adjectives; similar for Verb-Q and Noun-Q. BEST-* refer to the best class-specific configurations from Tab. 3 and Tab. 4.",
"cite_spans": [
{
"start": 317,
"end": 345,
"text": "(Leviant and Reichart, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 348,
"end": 350,
"text": "15",
"ref_id": null
}
],
"ref_spans": [
{
"start": 599,
"end": 606,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Other Languages",
"sec_num": null
},
{
"text": "\u03c1 points and 4 \u03c1 points over the strongest adjectives baseline in IT and DE, respectively. We get similar improvements for nouns (IT: 3 \u03c1 points, DE: 2 \u03c1 points), and verbs (IT: 2, DE: 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Other Languages",
"sec_num": null
},
{
"text": "We also verify that the selection of class-specific configurations (Sect. 5.1) is useful beyond the core SimLex evaluation. For this aim, we evaluate on the A, V, and N TOEFL questions (Landauer and Dumais, 1997) . The results are summarised in Tab. 7. Despite the limited size of the TOEFL dataset, we observe positive trends in the reported results (e.g., V-specific configurations yield a small gain on verb questions), showcasing the potential of class-specific training in this task.",
"cite_spans": [
{
"start": 185,
"end": 212,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TOEFL Evaluation",
"sec_num": null
},
{
"text": "We have presented a novel framework for selecting class-specific context configurations which yield improved representations for prominent word classes: adjectives, verbs, and nouns. Its design and dependence on the Universal Dependencies annotation scheme makes it applicable in different languages. We have proposed an algorithm that is able to find a suitable class-specific configuration while making the search over the large space of possible context configurations computationally feasible. Each word class requires a different class-specific configuration to produce improved results on the class-specific subset of SimLex-999 in English, Italian, and German. We also show that the selection of context configurations is robust as once learned configuration may be effectively transferred to other data setups, tasks, and languages without additional retraining or fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we plan to test the framework with finer-grained contexts, investigating beyond POS-based word classes and dependency links. Exploring more sophisticated algorithms that can efficiently search richer configuration spaces is also an intriguing direction. Another research avenue is application of the context selection idea to other representation models beyond SGNS tested in this work, and experimenting with assigning weights to context subsets. Finally, we plan to test the portability of our approach to more languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "http://universaldependencies.org/ (V1.4 used)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The total is 133 as we have to include 6 additional 1-set configurations that have to be tested (line 1 of Alg. 1) but are not included in the initial pool for verbs (line 2).4 We also experimented with a less conservative variant which does not stop when lower-level configurations do not improve E; it instead follows the path of the best-scoring lower-level configuration even if its score is lower than that of its origin. As we do not observe any significant improvement with this variant, we opt for the faster and simpler one.5 https://bitbucket.org/yoavgo/word2vecf 6 SGNS for all models was trained using stochastic gradient descent and standard settings: 15 negative samples, global learning rate: 0.025, subsampling rate: 1e \u2212 4, 15 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Given the coordination structure boys and girls, conjlr training pairs are (boys, girls_conj), (girls, boys_conj \u22121 ), while conjll pairs are (boys, girls_conj), (girls, boys_conj).8 https://sites.google.com/site/rmyeid/projects/polyglot",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The SP extraction algorithm is available online: homes.cs.washington.edu/\u223croysch/software/dr06/dr06.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The term optimal here and later in the text refers to the best configuration returned by our algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The \"translation\" from labelled Stanford dependencies into UD is performed using the mapping from de Marneffe et al. (2014), e.g., nn is mapped into compound, and rcmod, partmod, infmod are all mapped into one bag: acl.15 http://leviants.com/ira.leviant/MultilingualVSMdata.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). Roy Schwartz was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). The authors are grateful to the anonymous reviewers for their helpful and constructive suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polyglot: Distributed word representations for multilingual NLP",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In CoNLL. pages 183-192. http://www.aclweb.org/anthology/W13-3520.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In ACL. pages 809-815. http://www.aclweb.org/anthology/P14-2131.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL. pages 238-247. http://www.aclweb.org/anthology/P14-",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Strudel: A corpusbased semantic model based on properties and types",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Barbu",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "222--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Brian Murphy, Eduard Barbu, and Massimo Poesio. 2010. Strudel: A corpus- based semantic model based on properties and types. Cognitive Science pages 222-254.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Top accuracy and fast dependency parsing is not a contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a con- tradiction. In COLING. pages 89-97.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP. pages 740-750.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [
"P"
],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel P. Kuksa. 2011. Natural language pro- cessing (almost) from scratch. Journal of Machine Learning Research 12:2493-2537.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "297--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov and Ari Rappoport. 2006. Ef- ficient unsupervised discovery of word cat- egories using symmetric patterns and high frequency words. In ACL. pages 297-304.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Universal Stanford dependencies: A cross-linguistic typology",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Katri",
"middle": [],
"last": "Haverinen",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "4585--4592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Univer- sal Stanford dependencies: A cross-linguistic typol- ogy. In LREC. pages 4585-4592. http://www.lrec- conf.org/proceedings/lrec2014/summaries/1062.html.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Stanford typed dependencies representation",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Workshop on Cross-Framework and Cross-Domain Parser Evaluation",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe and Christopher D. Man- ning. 2008. The Stanford typed dependencies repre- sentation. In Proceedings of the Workshop on Cross- Framework and Cross-Domain Parser Evaluation. pages 1-8. http://www.aclweb.org/anthology/W08- 1301.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1606--1615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL-HLT. pages 1606-1615.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Omer Levy. 2014. Word2vec ex- plained: Deriving Mikolov et al.'s negative-sampling word-embedding method. CoRR abs/1402.3722. http://arxiv.org/abs/1402.3722.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A dynamic oracle for arc-eager dependency parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "959--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In COLING. pages 959-976.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributional structure",
"authors": [
{
"first": "Zellig",
"middle": [
"S"
],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "Word",
"volume": "10",
"issue": "23",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word 10(23):146-162.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Specializing word embeddings for similarity or relatedness",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2044--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In EMNLP. pages 2044-2048.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Solutions to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "211--240",
"other_ids": {
"DOI": [
"10.1037/0033-295X.104.2.211"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. Solutions to Plato's problem: The Latent Seman- tic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Re- view 104(2):211-240. https://doi.org/10.1037/0033- 295X.104.2.211.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Separated by an un-common language: Towards judgment language informed vector space modeling",
"authors": [
{
"first": "Ira",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment lan- guage informed vector space modeling. CoRR abs/1508.00106. http://arxiv.org/abs/1508.00106.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In ACL. pages 302-308. http://www.aclweb.org/anthology/P14-2050.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Lin- guistic regularities in sparse and explicit word representations. In CoNLL. pages 171-180.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014c. Neu- ral word embedding as implicit matrix fac- torization. In NIPS. pages 2177-2185.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the ACL",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the ACL 3:211-225.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Two/too simple adaptations of Word2Vec for syntax problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1299--1304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015a. Two/too simple adaptations of Word2Vec for syntax prob- lems. In NAACL-HLT. pages 1299-1304.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Not all contexts are created equal: Better word representations with variable attention",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Fermandez",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "Chu-Cheng",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1367--1372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Yulia Tsvetkov, Silvio Amir, Ramon Fer- mandez, Chris Dyer, Alan W Black, Isabel Tran- coso, and Chu-Cheng Lin. 2015b. Not all contexts are created equal: Better word representations with variable attention. In EMNLP. pages 1367-1372. http://aclweb.org/anthology/D15-1161.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning semantic word embeddings based on ordinal knowledge constraints",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1501--1511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In ACL. pages 1501-1511.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Turning on the Turbo: Fast third-order non-projective turbo parsers",
"authors": [
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "617--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 F. T. Martins, Miguel B. Almeida, and Noah A. Smith. 2013. Turning on the Turbo: Fast third-order non-projective turbo parsers. In ACL. pages 617- 622. http://www.aclweb.org/anthology/P13-2109.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Universal dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [
"B"
],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Bedini",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bertomeu Castell\u00f3",
"suffix": ""
},
{
"first": "Jungmee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan T. McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipan- jan Das, Kuzman Ganchev, Keith B. Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In ACL. pages 92-97. http://www.aclweb.org/anthology/P13-2017.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Modeling word meaning in context with substitute vectors",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "472--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Ido Dagan, and Jacob Goldberger. 2015. Modeling word meaning in context with sub- stitute vectors. In NAACL-HLT. pages 472-482. http://www.aclweb.org/anthology/N15-1050.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The role of context types and dimensionality in learning word embeddings",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In NAACL-HLT.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed repre- sentations of words and phrases and their composi- tionality. In NIPS. pages 3111-3119.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning word embeddings efficiently with noise-contrastive estimation",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "2265--2273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In NIPS. pages 2265-2273.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Ira",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Ivan Vuli\u0107, Diarmuid \u00d3 S\u00e9aghdha, Ira Leviant, Roi Reichart, Milica Ga\u0161i\u0107, Anna Korho- nen, and Steve Young. 2017. Semantic specialisa- tion of distributional word vector spaces using mono- lingual and cross-lingual constraints. Transactions of the ACL https://arxiv.org/abs/1706.00374.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Universal Dependencies 1.4. LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre et al. 2016. Universal Dependencies 1.4. LINDAT/CLARIN digital library at Institute of For- mal and Applied Linguistics, Charles University in Prague.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Dependencybased construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency- based construction of semantic space mod- els. Computational Linguistics 33(2):161-199.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Heuristics: Intelligent search strategies for computer problem solving",
"authors": [
{
"first": "Judea",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judea Pearl. 1984. Heuristics: Intelligent search strate- gies for computer problem solving .",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. pages 1532-1543.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Hinrich Sch\u00fctze. 1993. Part-of-speech induction from scratch",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "251--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Dipanjan Das, and Ryan T. McDon- ald. 2012. A universal part-of-speech tagset. In LREC. pages 2089-2096. http://www.lrec- conf.org/proceedings/lrec2012/summaries/274.html. Hinrich Sch\u00fctze. 1993. Part-of-speech induc- tion from scratch. In ACL. pages 251-258.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Symmetric pattern based word embeddings for improved word similarity prediction",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2015,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "258--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for im- proved word similarity prediction. In CoNLL. pages 258-267. http://www.aclweb.org/anthology/K15-",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "499--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2016. Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives. In NAACL-HLT. pages 499-505.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In NAACL-HLT. pages 173-180.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Word representations: A simple and general method for semisupervised learning",
"authors": [
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representa- tions: A simple and general method for semi- supervised learning. In ACL. pages 384-394.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Crosslingual and multilingual construction of syntax-based vector space models",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Utt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the ACL",
"volume": "2",
"issue": "",
"pages": "245--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Utt and Sebastian Pad\u00f3. 2014. Crosslingual and multilingual construction of syntax-based vector space models. Transactions of the ACL 2:245-258.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Cross-lingual syntactically informed distributed word representations",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "408--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107. 2017. Cross-lingual syntactically informed distributed word representations. In EACL. pages 408-414. http://www.aclweb.org/anthology/E17-",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Is \"universal syntax\" universally useful for learning distributed word representations?",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "518--524",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. Is \"universal syntax\" universally useful for learning distributed word representations? In ACL. pages 518-524. http://anthology.aclweb.org/P16-2084.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "From paraphrase database to compositional paraphrase model and back",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the ACL",
"volume": "3",
"issue": "",
"pages": "345--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL 3:345-358. http://aclweb.org/anthology/Q15-1025.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Learning syntactic categories using paradigmatic representations of word context",
"authors": [
{
"first": "Mehmet Ali",
"middle": [],
"last": "Yatbaz",
"suffix": ""
},
{
"first": "Enis",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "940--951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehmet Ali Yatbaz, Enis Sert, and Deniz Yuret. 2012. Learning syntactic categories using paradigmatic representations of word context. In EMNLP. pages 940-951. http://www.aclweb.org/anthology/D12-",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Improving lexical embeddings with semantic knowledge",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "545--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu and Mark Dredze. 2014. Improving lexical em- beddings with semantic knowledge. In ACL. pages 545-550. http://www.aclweb.org/anthology/P14- 2089.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Extracting dependency-based contexts.",
"type_str": "figure"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td colspan=\"3\">: 2-fold cross-validation results for an illus-</td></tr><tr><td colspan=\"3\">trative selection of individual context bags. Results</td></tr><tr><td colspan=\"3\">are presented for the noun, verb and adjective sub-</td></tr><tr><td colspan=\"3\">sets of SimLex-999. Values in parentheses denote</td></tr><tr><td colspan=\"3\">the class-specific initial pools to which each context</td></tr><tr><td colspan=\"3\">is selected based on its \u03c1 score (line 1 of Alg. 1).</td></tr><tr><td colspan=\"2\">Adjectives Verbs</td><td>Nouns</td></tr><tr><td>amod,</td><td>prep,</td><td>amod, prep,</td></tr><tr><td>conjlr,</td><td>acl, obj,</td><td>compound, subj,</td></tr><tr><td>conjll</td><td>comp, adv,</td><td>obj, appos, acl,</td></tr><tr><td/><td>conjlr,</td><td>nmod, conjlr,</td></tr><tr><td/><td>conjll</td><td>conjll</td></tr></table>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Automatically constructed initial pools of candidate bags for each word class (Sect. 3.2).",
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>: Training time (wall-clock time reported) in</td></tr><tr><td>minutes for SGNS (d = 300) with different context</td></tr><tr><td>types. BEST-* denotes the best scoring configura-</td></tr><tr><td>tion for each class found by Alg. 1. #Pairs shows</td></tr><tr><td>a total number of pairs used in SGNS training for</td></tr><tr><td>each context type.</td></tr></table>",
"html": null,
"text": "",
"type_str": "table"
}
}
}
}