|
{ |
|
"paper_id": "E17-1006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:51:53.934372Z" |
|
}, |
|
"title": "Learning Compositionality Functions on Word Embeddings for Modelling Attribute Meaning in Adjective-Noun Phrases", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Hartung", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Semantic Computing Group CITEC", |
|
"institution": "Bielefeld University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Kaupmann", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Semantic Computing Group CITEC", |
|
"institution": "Bielefeld University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Soufian", |
|
"middle": [], |
|
"last": "Jebbara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Semantic Computing Group CITEC", |
|
"institution": "Bielefeld University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Semantic Computing Group CITEC", |
|
"institution": "Bielefeld University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Word embeddings have been shown to be highly effective in a variety of lexical semantic tasks. They tend to capture meaningful relational similarities between individual words, at the expense of lacking the capabilty of making the underlying semantic relation explicit. In this paper, we investigate the attribute relation that often holds between the constituents of adjective-noun phrases. We use CBOW word embeddings to represent word meaning and learn a compositionality function that combines the individual constituents into a phrase representation, thus capturing the compositional attribute meaning. The resulting embedding model, while being fully interpretable, outperforms countbased distributional vector space models that are tailored to attribute meaning in the two tasks of attribute selection and phrase similarity prediction. Moreover, as the model captures a generalized layer of attribute meaning, it bears the potential to be used for predictions over various attribute inventories without retraining .", |
|
"pdf_parse": { |
|
"paper_id": "E17-1006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Word embeddings have been shown to be highly effective in a variety of lexical semantic tasks. They tend to capture meaningful relational similarities between individual words, at the expense of lacking the capabilty of making the underlying semantic relation explicit. In this paper, we investigate the attribute relation that often holds between the constituents of adjective-noun phrases. We use CBOW word embeddings to represent word meaning and learn a compositionality function that combines the individual constituents into a phrase representation, thus capturing the compositional attribute meaning. The resulting embedding model, while being fully interpretable, outperforms countbased distributional vector space models that are tailored to attribute meaning in the two tasks of attribute selection and phrase similarity prediction. Moreover, as the model captures a generalized layer of attribute meaning, it bears the potential to be used for predictions over various attribute inventories without retraining .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Attributes such as SIZE, WEIGHT or COLOR are part of the building blocks of representing knowledge about real-world entities or events (Barsalou, 1992) . In natural language, formal attributes find their counterpart in attribute nouns which can be used in order to generalize over individual properties, e.g., big or small in case of SIZE, blue or red in case of COLOR (Hartung, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 151, |
|
"text": "(Barsalou, 1992)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 384, |
|
"text": "(Hartung, 2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to ascribe such properties to entities or events, adjective-noun phrases are a very frequent linguistic pattern. In these constructions, attribute meaning is conveyed only implicitly, i.e., without being overtly realized at the phrasal surface. Hence, attribute selection has been defined as the task of predicting the hidden attribute meaning expressed by a property-denoting adjective in composition with a noun (Hartung and Frank, 2011b) , as in the following examples:", |
|
"cite_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 449, |
|
"text": "(Hartung and Frank, 2011b)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) a. hot summer \u2192 TEMPERATURE b. hot debate \u2192 EMOTIONALITY c. hot soup \u2192 TASTE/TEMPERATURE Previous work on this task has largely been carried out in distributional semantic models (cf. Hartung (2015) for an overview). In the face of the recent rise of distributed neural representations as a means of capturing lexical meaning in NLP tasks (Collobert et al., 2011; Mikolov et al., 2013a; Pennington et al., 2014) , our goal in this paper is to model attribute meaning based on word embeddings. In particular, we use CBOW embeddings of adjectives and nouns (Mikolov et al., 2013a) as underlying word representations and train a compositionality function in order to compute a phrase representation that is predictive of the implicitly conveyed attribute meaning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 367, |
|
"text": "(Collobert et al., 2011;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 390, |
|
"text": "Mikolov et al., 2013a;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 415, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 559, |
|
"end": 582, |
|
"text": "(Mikolov et al., 2013a)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In fact, word embeddings (also referred to as predict models) have been shown to be highly effective in a variety of lexical semantic tasks (Baroni et al., 2014b), compared to \"traditional\" distributional semantic models (or count models) in the tradition of Harris (1954) . However, this finding has been refuted to a certain extent by Levy et al. (2015) , stating that much of the perceived superiority of word embeddings is due to hyperparameter optimizations rather than principled advantages. Moreover, the authors found that in many cases, tailoring count models to a particular task at hand is both feasible and beneficial in order to outperform the more generic embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 272, |
|
"text": "Harris (1954)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 355, |
|
"text": "Levy et al. (2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This sheds light on a definitive plus of count models, viz. their transparency and interpretability in the sense that their semantic similarity ratings can (under certain conditions) be traced back to particular semantic relations, whereas word embeddings typically yield rather vague and diversified similarities (Erk, 2016) . Due to this lack in interpretability, word embeddings are not easily interoperable with symbolic lexical resources or ontologies. Thus, we argue that modelling attribute meaning poses an interesting challenge to word embeddings for two reasons: First, being rooted in ontological knowledge, attribute meaning clearly draws on interpretability of the underlying model; second, attribute meaning in adjective-noun phrases is conveyed in compositional processes (cf. Ex. (1)) which are underresearched in the context of word embeddings so far (Manning, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 325, |
|
"text": "(Erk, 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 883, |
|
"text": "(Manning, 2015)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our main contributions in this paper are: (i) We demonstrate that word embeddings can be successfully harnessed for attribute selection -a task that requires both compositional and interpretable representations of phrase meaning. (ii) This is achieved via a learned compositionality function f on adjective and noun embeddings that carves out attribute meaning in their compositional phrase meaning. (iii) We show that f captures generalized attribute meaning (cf. Bride et al. (2015) ) that abstracts from individual attributes. Thus, after fitting the compositionality function, our model bears the potential of being applied to various application scenarios (e.g., aspect-based sentiment analysis) involving diverse attribute inventories. (iv) We show that the same model also scales to the task of predicting semantic similarity of adjectivenoun phrases, which indicates both the robustness of the model and the importance of attribute meaning as a major source of phrase similarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 484, |
|
"text": "Bride et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Attribute Learning from Adjectives and Nouns. Adjective-centric approaches to attribute learning from text date back to Almuhareb (2006) and Cimiano (2006) . Bakhshandeh and Allen (2015) present a sequence tagging model in order to extract attribute nouns from adjective glosses in WordNet. Most recently, Petersen and Hellwig (2016) use a clustering approach based on adjective-noun co-occurrences in order to induce clusters of German adjectives that constitute the value space of an attribute. However, their approach falls short of making the respective attribute explicit.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 136, |
|
"text": "Almuhareb (2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 155, |
|
"text": "Cimiano (2006)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 186, |
|
"text": "Bakhshandeh and Allen (2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "These approaches have in common that they do not consider the compositional semantics of an adjective in its phrasal context with a noun in order to derive attribute meaning. This is in contrast to Hartung and Frank (2010; 2011b) who frame attribute selection in a distributional count model which (i) encodes adjectives and nouns as distributional word vectors over attributes as shared dimensions of meaning and (ii) uses vector mixture operations in order to compose these word vectors into phrase reresentations that are predictive of compositional attribute meaning. Tandon et al. (2014) propose a semi-supervised method for populating a knowledge base with triples of nouns, attributes and adjectives that are acquired from adjective-noun phrases. Being based on label propagation over monosemous adjectives as seeds, their approach depends on a lexical resource providing initial mappings between adjectives and attributes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 222, |
|
"text": "Hartung and Frank (2010;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 229, |
|
"text": "2011b)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 592, |
|
"text": "Tandon et al. (2014)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The present approach and the work by Hartung and Frank may be considered as pairs of opposites in two respects: First, our model is based on pretrained CBOW word embeddings for representing adjective and noun meaning. Thus, we do not encode any attribute-specific lexical information explicitly at the level of word representation. Second, we apply function learning in order to empirically induce a compositionality function that is trained to promote aspects of attribute meaning in adjective-noun phrase embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Compositionality. Modelling compositional processes at the intersection of word and phrase meaning in distributional semantic models has attracted considerable attention in the last years (Erk, 2012) . Mitchell and Lapata (2010) have promoted a variety of vector mixture models for the task, which have been criticized for their syntactic agnosticism (Baroni and Zamparelli, 2010; Guevara, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 199, |
|
"text": "(Erk, 2012)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 228, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 380, |
|
"text": "(Baroni and Zamparelli, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 395, |
|
"text": "Guevara, 2010)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Focussing on adjective-noun compositionality, the latter authors propose instead to model adjective meaning as matrices encoding linear mappings between noun vectors. These attempts to integrate formal semantic principles in the tradition of Frege (1892) into a distributional framework have been generalized to a \"program for compositional distributional semantics\" (Baroni et al., 2014a ) that is centered around functional application as the general process to model compositionality in semantic spaces, thus emphasizing the insight that different linguistic phenomena require to be modeled in corresponding algebraic structures and composition operators matching these structures (cf. Widdows (2008) , Grefenstette and Sadrzadeh (2011) , Grefenstette et al. (2014) ). Bride et al. (2015) observe that such composition operators, by being trained on empirical corpus data, can either be tailored to specific lexical types (i.e., individual composition functions for each adjective in the corpus), or designed to capture general compositional processes in syntactic configurations (i.e., a single lexical function for all adjective-noun phrases). In line with these authors, we aim at learning a lexical function which captures attribute meaning in the compositional semantics of adjective-noun phrases, while generalizing over individual attributes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 388, |
|
"text": "(Baroni et al., 2014a", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 689, |
|
"end": 703, |
|
"text": "Widdows (2008)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 739, |
|
"text": "Grefenstette and Sadrzadeh (2011)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 768, |
|
"text": "Grefenstette et al. (2014)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 772, |
|
"end": 791, |
|
"text": "Bride et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Contrary to distributional count models, there is relatively few work on applying word embeddings to linguistic problems or NLP tasks related to compositionality. Notable exceptions are Socher et al. (2013) for sentiment analysis, as well as Salehi et al. (2015) and Cordeiro et al. (2016) who focus on predicting the degree of compositionality in nominal compounds rather than carving out a particular semantic relation that is expressed in their compositional semantics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 206, |
|
"text": "Socher et al. (2013)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 262, |
|
"text": "Salehi et al. (2015)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 289, |
|
"text": "Cordeiro et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Natural language refers to ontological attributes in terms of attribute nouns such as color, size or shape (Guarino, 1992; L\u00f6bner, 2013) . Therefore, despite remaining mostly implicit in adjectivenoun phrases (cf. Ex. (1) above), we hypothesize that attribute meaning can be learned from contextual patterns of attribute nouns in natural language text. This leads us to the assumption that adjectives, nouns and attributes (via attribute nouns) can be embedded in the same semantic space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 122, |
|
"text": "(Guarino, 1992;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 136, |
|
"text": "L\u00f6bner, 2013)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attribute Meaning in Natural Language", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this work, we aim at a compositional approach to attribute meaning in adjective-noun phrases. As a consequence of the above assumption, our model represents adjectives, nouns and attributes as vec-tors a, n and attr , respectively, in one and the same embedding space S \u2286 R d . By designing a composition function f ( a, n) that produces phrase representations p \u2208 S, we can use nearest neighbour search in S in order to predict the attribute attr that is most likely expressed in the compositional semantics of an adjective-noun phrase p:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional Models of Attribute Meaning", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "attr := arg max attr \u2208A cos( p, attr )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Compositional Models of Attribute Meaning", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where p = f ( a, n), cos denotes cosine vector similarity and A the set of all attributes considered. The compositional functions that we use in this work can be divided into baseline models, largely derived from Mitchell and Lapata (2010) , and trainable models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 239, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional Models of Attribute Meaning", |
|
"sec_num": "3.2" |
|
}, |
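
{

"text": "For illustration, Equation (2) can be sketched in Python with NumPy as follows; this is a minimal sketch rather than a definitive implementation, and attribute_vecs is a hypothetical dictionary mapping attribute names to their embeddings in S:\n\nimport numpy as np\n\ndef cosine(u, v):\n    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n\ndef predict_attribute(p, attribute_vecs):\n    # Equation (2): nearest-neighbour search over the attribute embeddings in S\n    return max(attribute_vecs, key=lambda attr: cosine(p, attribute_vecs[attr]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Compositional Models of Attribute Meaning",

"sec_num": "3.2"

},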
|
{ |
|
"text": "Adjective or Noun. The simplest model is to skip any composition and just use the representation of the adjective or the noun as a surrogate: p = a or p = n, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Pointwise Vector Addition. The first step in the direction of compositionality is pointwise vector addition: p = a + n. According to Mitchell and Lapata (2010) , the commutativity of addition is a disadvantage because the model ignores word order and thus syntactic information is lost.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 159, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Weighted Vector Addition. For the latter reason, Mitchell and Lapata (2010) also propose a weighted variant of pointwise vector addition. In order to account for possibly different contributions of the constituents to phrasal composition, scalar weights \u03b1 and \u03b2 are applied to the word vectors before pointwise addition: p = \u03b1 a + \u03b2 n.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 75, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Pointwise Vector Multiplication. This composition function multiplies the individual dimensions of the adjective and noun vector:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "p i = a i \u2022 b i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Mitchell and Lapata (2010) point out that vector multiplication can be seen as equivalent to logical intersection. In previous work on attribute selection in a count-based distributional framework, the best results were obtained using pointwise multiplication (Hartung, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 275, |
|
"text": "(Hartung, 2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Dilation. The dilation model of Mitchell and Lapata (2010) dilates one vector in the direction of the other. This is inspired by the dilation effect of matrix multiplication, but is specifically designed to be basis-independent:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "p = ( n \u2022 n) a + (\u03bb \u2212 1)( n \u2022 a) a (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Here, n is stretched by a factor \u03bb to emphasize the contribution of a. \u03bb is a parameter that has to be chosen manually. Analogously, dilation of the adjective is possible as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "3.2.1" |
|
}, |
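
{

"text": "For illustration, the above baseline composition functions can be sketched in Python with NumPy (a minimal sketch: a and n are word embedding arrays, the \u03b1/\u03b2 defaults are the values reported by Mitchell and Lapata (2010), and the \u03bb value is a placeholder to be chosen manually):\n\nimport numpy as np\n\ndef add(a, n):\n    return a + n  # pointwise vector addition\n\ndef weighted_add(a, n, alpha=0.88, beta=0.12):\n    return alpha * a + beta * n  # scalar-weighted addition\n\ndef multiply(a, n):\n    return a * n  # pointwise multiplication\n\ndef dilate(a, n, lam=2.0):\n    # Equation (3): the component of a parallel to n is stretched by \u03bb\n    return np.dot(n, n) * a + (lam - 1.0) * np.dot(n, a) * n",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baseline Models",

"sec_num": "3.2.1"

},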
|
{ |
|
"text": "In this section, we present a method for supervised training of compositionality functions. We propose additive and multiplicative models that use weighting matrices or tensors to balance the contributions of adjectives and nouns. The composition is trained to specifically capture attribute meaning in the resulting phrase representation. The weights are trained as part of a shallow neural network (see Section 3.2.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Full Weighted Additive Model. Following Guevara (2010) , the full additive model capitalizes on vector addition with weighting matrices for adjective and noun:", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 54, |
|
"text": "Guevara (2010)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p = A \u2022 a + N \u2022 n", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "As initializations of the weighting matrices, we use an identity matrix 1 , which is equivalent to non-parametric vector addition. As weighting schemes, we use one of (i) weighting only the adjective or noun, respectively, or (ii) weighting both adjective and noun distinctly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Note that, in line with Guevara (2010) , this model makes use of weight matrices in order to balance the contribution of adjectives and nouns to phrasal attribute meaning, whereas Mitchell and Lapata (2010) use scalar weights in their pointwise additive model (cf. Section 3.2.1). Our intuition is that full additive models should be better suited to model compositonal processes that involve interactions between dimensions of meaning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 38, |
|
"text": "Guevara (2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 206, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
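
{

"text": "As a minimal sketch (assuming d=300 as in our experiments), the full additive composition of Equation (4) with identity initialization reads:\n\nimport numpy as np\n\nd = 300\nA = np.eye(d)  # identity initialization: before training, p = a + n\nN = np.eye(d)\n\ndef full_additive(a, n, A, N):\n    # Equation (4): p = A \u2022 a + N \u2022 n\n    return A @ a + N @ n",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Trainable Models",

"sec_num": "3.2.2"

},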
|
{ |
|
"text": "Trained Tensor Product. As a weighted multiplicative model, we use multiplication of adjective and noun representations with a learned thirdorder tensor T , following Bride et al. (2015) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 186, |
|
"text": "Bride et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "p = a T \u2022 T [1:d] \u2022 n (5) with a \u2208 R d , n \u2208 R d , T [1:d] \u2208 R d\u00d7d\u00d7d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "In order to compose a phrase representation p from a and n, T is applied to the adjective vector in a tensor dot product. The tensor dot product multiplies components of vector and tensor and sums along the third axis of the tensor:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "X i,j = d k=1 a k \u2022 T i,j,k", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "with d being the dimensionality of the word embeddings. Equation 6results in a matrix X that is multiplied with the noun vector in a second step using common matrix multiplication: p = X \u2022 n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
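
{

"text": "Both steps can be sketched compactly with NumPy (a minimal illustration; note that the full tensor holds 300^3 parameters for our embeddings and is therefore memory-heavy):\n\nimport numpy as np\n\nd = 300\n# one identity matrix per slice along the third axis (the initialization used here)\nT = np.stack([np.eye(d)] * d, axis=2)  # shape (d, d, d)\n\ndef tensor_compose(a, n, T):\n    X = np.einsum('k,ijk->ij', a, T)  # Equation (6): X_ij = sum_k a_k * T_ijk\n    return X @ n  # functional application: p = X \u2022 n",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Trainable Models",

"sec_num": "3.2.2"

},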
|
{ |
|
"text": "Note that the latter step corresponds to functional application of the adjective to the noun as rooted in compositional distributional semantics (Baroni et al., 2014a) . The result is a phrase vector with the same dimensionality as adjective and noun. For initialization, we use an identity matrix for each second-order tensor along the third axis 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 167, |
|
"text": "(Baroni et al., 2014a)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainable Models", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "The weights of the models in Section 3.2.2 are trained as part of a shallow neural network with no hidden layer. For each adjective-noun phrase and the corresponding ground truth attribute in the training dataset, the respective 300-dimensional vectors 3 a, n and attr are obtained by performing a look-up in the pre-trained word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Method", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "With a and n as its inputs, the neural network computes a phrase representation p \u2208 R 300 at the output layer. The error of the computed phrase representation to the expected attribute representation attr is computed using the mean squared error between the two vectors and is used as the training signal for the network parameters. Note that we do not train the embedding vectors along with the connection weights. While this could potentially benefit the results, we aim to explore whether generally trained word embeddings can be used to retrieve attribute meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Method", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "For our network architectures and computations, we use the deep learning library keras (Chollet, 2016) . Training takes 10 iterations over the training data; weights are optimized using the stochastic optimization method Adam (Kingma and Ba, 2015) . For the use of pre-trained word vectors (Mikolov et al., 2013b) 4 in a Python environment, we rely on the Gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 102, |
|
"text": "(Chollet, 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 247, |
|
"text": "(Kingma and Ba, 2015)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 313, |
|
"text": "(Mikolov et al., 2013b)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Method", |
|
"sec_num": "3.2.3" |
|
}, |
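
{

"text": "A schematic reimplementation of this setup in current Keras (the API has changed since the version used in our experiments, so this is an approximation rather than our original code; adj_vecs, noun_vecs and attr_vecs denote hypothetical arrays of looked-up embeddings) could look as follows:\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nd = 300\nadj_in = keras.Input(shape=(d,))\nnoun_in = keras.Input(shape=(d,))\n# full additive model: one identity-initialized weight matrix per constituent, no bias\nwa = layers.Dense(d, use_bias=False, kernel_initializer=keras.initializers.Identity())(adj_in)\nwn = layers.Dense(d, use_bias=False, kernel_initializer=keras.initializers.Identity())(noun_in)\nphrase = layers.Add()([wa, wn])\nmodel = keras.Model([adj_in, noun_in], phrase)\nmodel.compile(optimizer='adam', loss='mse')  # MSE against the attribute embedding\n# model.fit([adj_vecs, noun_vecs], attr_vecs, epochs=10)  # 10 passes over the training data",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Method",

"sec_num": "3.2.3"

},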
|
{ |
|
"text": "In this experiment, we evaluate the compositional models defined in Section 3.2 on the attribute selection task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attribute Selection Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use the HeiPLAS data set (Hartung, 2015) which contains adjective-attribute-noun triples that were heuristically extracted from WordNet (Miller and Fellbaum, 1998) and manually filtered by linguistic curators. The data is separated into development and test set (comprising 869 and 729 triples, respectively, which correspond to a total of 254 target attributes). The target attributes are subdivided into various semantically homogeneous subsets, as shown in Table 1 . Due to coverage issues in the pre-trained word2vec embeddings (Mikolov et al., 2013a) , some adjectives and nouns from HeiPLAS cannot be projected into the embedding space 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 43, |
|
"text": "(Hartung, 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 166, |
|
"text": "(Miller and Fellbaum, 1998)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 558, |
|
"text": "(Mikolov et al., 2013a)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 463, |
|
"end": 470, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Experimental Procedure. Composition models as described in Section 3.2.2 are trained on all triples in HeiPLAS-Dev (following the procedure described in Section 3.2.3) and evaluated on HeiPLAS-Test. The word vector representations corresponding to the adjective and the noun in a test triple are composed into a phrase vector by applying the trained composition function. Using nearest neighbour search in S as described in Section 3.2, all test attributes are ranked wrt. their similarity to the composed phrase vector. For evaluation, we use precision-at-rank to measure the number of times the correct attribute is ranked as most similar to the phrase vector or among the first five ranks (P@1 and P@5, respectively).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Large-scale Attribute Selection", |
|
"sec_num": "4.2" |
|
}, |
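
{

"text": "The metric can be sketched as follows (illustrative Python; attr_matrix is a hypothetical array whose L2-normalised rows are the attribute embeddings, aligned with attr_names):\n\nimport numpy as np\n\ndef precision_at_k(phrase_vecs, gold_attrs, attr_names, attr_matrix, k=1):\n    hits = 0\n    for p, gold in zip(phrase_vecs, gold_attrs):\n        sims = attr_matrix @ (p / np.linalg.norm(p))  # cosine to every attribute\n        top_k = [attr_names[i] for i in np.argsort(-sims)[:k]]\n        hits += gold in top_k\n    return hits / len(gold_attrs)  # P@1 for k=1, P@5 for k=5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment 1: Large-scale Attribute Selection",

"sec_num": "4.2"

},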
|
{ |
|
"text": "Baseline Semantic Spaces. We directly compare our approach against the results of two countbased distributional models, C-LDA and L-LDA (Hartung, 2015) , on the same evaluation data. C-LDA and L-LDA induce distributional adjective and noun vectors over attributes as dimensions of meaning, which are composed into phrase representations using pointwise vector multiplication. Using these models for comparison enables us to assess both the impact of different types of word representations (dense CBOW word embeddings vs. specifically tailored attribute-based distributional word vectors) and different approaches to compositionality (pre-defined vector mixture operations on attribute-specific word representations vs. trained composition functions for promoting generalized attribute meaning in word embeddings).", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 151, |
|
"text": "(Hartung, 2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Large-scale Attribute Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Results. Results of Experiment 1 are shown in Table 2 . The upper part of the table contains the results based on word embeddings (comprising non-parametric, parametric, dilation and trainable composition models); the count-based C-LDA and L-LDA baselines are displayed below.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 53, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Large-scale Attribute Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Focussing on the non-parametric models first, we find that relying on the adjective embedding as a surrogate of a composed representation already outperforms both count models by a wide margin. This indicates a clear advantage of CBOW embeddings over count-based representations for capturing attribute meaning at the word level. However, this holds only for adjectives; noun embeddings in isolation perform much worse. This is confirmed by the dilation results: Dilating the noun representation into the direction of the adjective performs considerably better than vice versa, while there is no improvement beyond the non-compositional adjective baseline. These findings are in line with Hartung (2015) and Hartung and Frank (2011a) who also observed that adjective representations capture more of the compositional attribute semantics in adjective-noun phrases than noun representations do.", |
|
"cite_spans": [ |
|
{ |
|
"start": 689, |
|
"end": 703, |
|
"text": "Hartung (2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Large-scale Attribute Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Considering the trained composition models, we find that weighting either the adjective or the noun in a full additive model substantially outperforms the respective non-compositional baseline. The overall best results are obtained by assigning trained weights to both the adjective and the noun embedding (P@1=0.56). This model also outperforms weighted vector addition 6 using scalar weights by great margins. (Hartung, 2015) 0.09 n/a L-LDA (Hartung, 2015) 0.16 n/a In comparison to the best full additive model, the tensor product underperforms by more than 10 points in P@1 and also falls short of weighting only the adjective. This is in line with a general preference of word embeddings for additive models (Mikolov et al., 2013a) , which is also confirmed by the non-parametric composition functions. On the other hand, we conjecture that the relatively small size of the training set used here is not sufficient for optimally tuning the 300 3 parameters in the learned tensor.", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 427, |
|
"text": "(Hartung, 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
|
{ |
|
"start": 713, |
|
"end": 736, |
|
"text": "(Mikolov et al., 2013a)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Large-scale Attribute Selection", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this experiment, we are interested in assessing the generalization power of the best-performing composition function as trained in Experiment 1. More precisely, we investigate the hypothesis that a full additive model captures a generalized compositional process in the semantics of attributedenoting adjective-noun phrases rather than the lexical meaning of individual attributes (cf. Bride et al. (2015) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 389, |
|
"end": 408, |
|
"text": "Bride et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Generalization Power", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We evaluate this hypothesis wrt. (i) the fit of the composition function to different subsets of testing Subsets of Testing Attributes. First, we compare the fit of the composition function that has been trained on all attributes (cf. Experiment 1) on the different subsets of attributes in HeiPLAS-Test, as displayed in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 328, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Generalization Power", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The results of this experiment are shown in Figure 1. As can be seen from the solid bars in the plot, the attribute selection performance on individual subsets is considerably stronger than on the entire inventory, ranging from P@1=0.82 on the Core subset to P@1=0.64 on the Property and Measurable subsets (compared to P@1=0.56 on all attributes; cf. Table 2 ). The cross-hatched bars in the figure indicate the relative differences that result from re-training a composition function on the specific subset of interest. The improvements are consistently small (max. +0.08 on the Selected and Measurable subsets); in case of the Property subset, there is no difference at all.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 50, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 359, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Generalization Power", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Zero-Shot Learning. As defined by Palatucci et al. (2009) , zero-shot learning is the task of learning a classifier for predicting novel class labels un-seen during training. In order to assess the selection performance of our model in a zero-shot setting, we create four zero-shot training sets by removing from HeiPLAS-Train all attributes that are contained in each of the subsets described in Table 1, respectively. The corresponding subset from HeiPLAS-Test is used for evaluation afterwards.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 57, |
|
"text": "Palatucci et al. (2009)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Generalization Power", |
|
"sec_num": "4.3" |
|
}, |
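
{

"text": "Schematically, each zero-shot split can be constructed as follows (a minimal sketch over (adjective, attribute, noun) triples):\n\ndef zero_shot_split(train_triples, test_triples, subset_attrs):\n    # drop every training triple whose attribute belongs to the held-out subset ...\n    train = [t for t in train_triples if t[1] not in subset_attrs]\n    # ... and evaluate only on that subset of the test data\n    test = [t for t in test_triples if t[1] in subset_attrs]\n    return train, test",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment 2: Generalization Power",

"sec_num": "4.3"

},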
|
{ |
|
"text": "The zero-shot results are shown by the diagonally hatched bars in Fig. 1 . We find that Core attributes, without being seen during training, can be predicted at a performance of P@1=0.68. On larger subsets, zero-shot performance decreases (down to P@1=0.32 on Property attributes). Yet, we consider these results very decent overall, given that they are largely comparable or even superior (except for the Selected subset) to the best scores of the distributional L-LDA model (Hartung, 2015) as shown by the plain bars in Fig. 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 72, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 528, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Generalization Power", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Even though benefits from attribute-specific training cannot be denied, we find that the trained compositionality function is largely capable of generalizing over individual target attributes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Generalization Power", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our experiments on attribute selection show that CBOW word embeddings can be effectively harnessed for carving out attribute meaning from adjective-noun phrases. Observed improvements over the previous state-of-the-art are due to the type of word representation as such (dense neural embeddings vs. distributional count models) as well as a learned compositionality function based on a full additive model capitalizing on weight matrices for balancing the contributions of adjectives and nouns. Moreover, we were able to show that the compositionality function captures a generalized compositional process in the semantics of attribute-denoting adjective-noun phrases rather than the lexical meaning of individual attributes. Therefore, the proposed approach (i) poses an interesting alternative to previous distributional models which explicitly encode attribute meaning in word vectors and rely on vector mixture operations in order to compose them into attribute-based phrase representations, and (ii) bears the potential of being used as a generalized attribute extraction model on various domains of applications that demand for different attribute inventories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In this experiment, we assess the scalability of the previously trained composition models to different tasks by applying them to the prediction of semantic similarity in pairs of adjective-noun phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity Prediction Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our experiments are based on the adjective-noun section of the evaluation data set released by Mitchell and Lapata (2010) . It consists of 108 pairs of adjective-noun phrases that were rated for similarity on a 7-point scale 7 by 54 human judges. In total, the data set comprises 1944 data points.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 121, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Adjective-Noun Phrase Similarity Experimental Procedure. For a given pair of adjective-noun phrases, we compute two phrase representations using word embeddings as word representations and compositionality functions trained on the HeiPLAS-Core subset, which achieved the best attribute selection results in Experiments 1 and 2. In the next step, we compute the cosine similarity between these two phrase representations. We correlate the results with human similarity ratings using Spearman's \u03c1 and compare the resulting correlation scores to the reported results of Mitchell and Lapata (2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 567, |
|
"end": 593, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 3: Predicting", |
|
"sec_num": "5.2" |
|
}, |
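
{

"text": "For illustration, the prediction and correlation steps can be sketched as follows (a minimal sketch: compose stands for the trained composition function f, and pairs is a hypothetical list of embedding tuples for the phrase pairs):\n\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef cosine(u, v):\n    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n\ndef predict_pair_similarities(compose, pairs):\n    # pairs: [((a1, n1), (a2, n2)), ...] for each pair of adjective-noun phrases\n    return [cosine(compose(a1, n1), compose(a2, n2)) for (a1, n1), (a2, n2) in pairs]\n\n# rho, _ = spearmanr(predicted_sims, human_ratings)  # Spearman's rho vs. the 1-7 judgments",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment 3: Predicting Adjective-Noun Phrase Similarity",

"sec_num": "5.2"

},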
|
{ |
|
"text": "Baseline Models. We compare our models against the following approaches from the literature which were evaluated on the same data set: C-LDA (Hartung and Frank, 2011a) , M&L-BoW and M&L-Topic (both by Mitchell and Lapata (2010) ). All baseline models are count-based distributional models which differ in their underlying representation of word meaning: M&L-BoW relies on bag-of-words context windows, M&L-Topic and C-LDA use topics and attribute nouns as dimensions of meaning, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 167, |
|
"text": "(Hartung and Frank, 2011a)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 227, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 3: Predicting", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Results. As shown in Table 3 , the best correlation scores between human similarity judgments and model predictions are achieved by our model that is built upon word embeddings and a trained full additive composition function based on weighting adjective and noun vectors (\u03c1=0.50). This model outperforms all distributional baseline models using vector mixtures as composition functions. With respect to weighted addition, all results reported in Table 3 are based on the weighting parameters (\u03b1=0.88; \u03b2=0.12) that have been found as optimal by Mitchell and Lapata (2010) . Based on a grid search, we find \u03b1=0.60 and \u03b2=0.40 to be the best weighting parameters on our data. In this setting, the performance of the weighted vector addition model on word2vec embeddings can be increased to \u03c1=0.47, which is still slightly below unweighted vector addition on embeddings (\u03c1=0.48). Apparently, scalar weights in pointwise vector addition are quite sensitive to the underlying word representation. In the particular case of using word embeddings for similarity prediction, the contribution of the noun to the compositional semantics of the phrase seems to be relatively stronger than in the attribute selection task (cf. Experiment 1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 545, |
|
"end": 571, |
|
"text": "Mitchell and Lapata (2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 454, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 3: Predicting", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In total, these results indicate that compositionality functions optimized on the task of attribute selection can be effectively transferred to similarity prediction. This suggests that attribute meaning might be a prominent source of similarity in adjective-noun phrases, which will be subject to a closer investigation in the next experiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 3: Predicting", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Research in distributional semantics tends to focus on the degree of similarity between words or phrases, while the source of similarity is largely neglected (cf. Hartung (2015) ). In this experiment, we hypothesize that attribute meaning provides a plausible explanation for the observed degree of similarity in phrase pairs from the M&L data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 177, |
|
"text": "Hartung (2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 4: Interpreting the Source of Similarity", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Experimental Procedure. For a given phrase pair, we compute the top-5 most similar attributes for each phrase in terms of their nearest neighbours in S (cf. Section 3.2). Then, both phrases are compared wrt. the proportion of shared attributes within these top-5 predictions. Averaging this score over all phrase pairs which were assigned a particular similarity rating by the human judges yields an Average Shared Top-5 Attributes (ASTA-5) score for this similarity level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 4: Interpreting the Source of Similarity", |
|
"sec_num": "5.3" |
|
}, |
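
{

"text": "The ASTA-5 computation can be sketched as follows (illustrative Python; attr_matrix again holds L2-normalised attribute embeddings aligned with attr_names):\n\nimport numpy as np\n\ndef top_k_attributes(p, attr_names, attr_matrix, k=5):\n    sims = attr_matrix @ (p / np.linalg.norm(p))\n    return {attr_names[i] for i in np.argsort(-sims)[:k]}\n\ndef asta_k(pair_vecs, attr_names, attr_matrix, k=5):\n    # pair_vecs: composed vectors (p1, p2) of all phrase pairs at one rating level\n    shared = [len(top_k_attributes(p1, attr_names, attr_matrix, k) & top_k_attributes(p2, attr_names, attr_matrix, k)) / k for p1, p2 in pair_vecs]\n    return sum(shared) / len(shared)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment 4: Interpreting the Source of Similarity",

"sec_num": "5.3"

},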
|
{ |
|
"text": "Results. Figure 2 plots ASTA-5 scores at different levels of human similarity ratings. We observe a general trend across all compositionality functions investigated: The higher the rating cutoff, the higher the number of shared attributes. Thus, with increasing similarity between two phrases (according to human ratings), the proportion of shared attributes in their compositional semantics tends to increase as well. Moreover, for highly similar pairs (rating cutoff>5), the full additive vector addition model yields the highest ASTA-5 scores. Beyond this quantitative analysis, two of the authors manually investigated the shared attributes in 38 high-similarity phrase pairs (rating cutoff>4) as predicted by the weighted vector addition model wrt. their potential as plausible sources of similarity. We find that in 28 phrase pairs (73.6%), the predicted attribute is considered a plausible source of similarity, in eight others (26.4%), the predicted attribute does not explain the high similarity. The agreement between the annotators in terms of Fleiss' Kappa amounts to \u03ba = 0.62.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 17, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 4: Interpreting the Source of Similarity", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Our results show that a full additive compositional model trained to target attribute meaning improves performance on similarity prediction. This supports the interpretation that attributes are (at least) a partial source of similarity between adjectivenoun phrases. In fact, this has been corroborated by a preliminary manual investigation of shared attributes between high-similarity phrases. However, there is also evidence for several cases in which attribute meaning falls short of explaining high phrase similarity. This holds for phrases involving abstract concepts, for instance (cf. Hartung (2015), Borghi and Binkofski (2014) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 608, |
|
"end": 635, |
|
"text": "Borghi and Binkofski (2014)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Nevertheless, we consider it a strength of our model that it is capable of providing plausible explanations in cases where attribute meaning is the most prominent source of similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We have presented a model of attribute meaning in adjective-noun phrases that capitalizes on CBOW word embeddings. In our experiments, the model proves remarkably versatile as it advances the state-of-the-art in the two tasks of attribute selection and phrase similarity prediction. In the latter task, the property of being fully interpretable wrt. attributes as the potential source of similarities became apparent as an additional asset rendering the model potentially interoperable with knowledge representation formalisms and resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Improvements over previous distributional models can be traced back to two major sources: First, CBOW word embeddings work surprisingly well at the word level for capturing attribute meaning in adjectives (not for nouns, though). Future work should investigate whether further improvements can be obtained from more adjective-specific word embeddings that are trained on symmetric coordination patterns (Schwartz et al., 2016) . Second, a learned compositionality function is effective at promoting attribute meaning in composed phrase representations. Best performances across both tasks are achieved by a full additive model with distinct weight matrices for the adjective and noun constituent. A trained tensor product that comes closer to the linguistic notion of functional application also performs well beyond the previous state-of-the-art, while falling short of the additive model. Apparently, more training data is needed to exhaust the full potential of the tensor product. Alternatively, tensor decomposition techniques along the lines of Shah et al. (2015) may be a possible way of coping with the large parameter space of the tensor approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 426, |
|
"text": "(Schwartz et al., 2016)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1051, |
|
"end": 1069, |
|
"text": "Shah et al. (2015)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Moreover, the learned compositionality function turns out to generalize well over individual attributes, which we consider a very promising result wrt. the suitability of the model in various NLP tasks such as aspect-based sentiment analysis. In future work, we are going to extend the present model to consider broader linguistic contexts and more varied syntactic configurations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also experimented with different initializations such as random values, all-ones, or an identity matrix with additional small random values on non-diagonal elements, but found the identity matrix to work best.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We found a random initialization of all entries to perform substantially worse.3 This is the number of dimensions in the pre-trained word embeddings fromMikolov et al. (2013b).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Available from https://drive.google.com/ file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/ edit?usp=sharing 5 This affects 54 triples in HeiPLAS-Dev and 44 triples in HeiPLAS-Test, which were removed from the evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The weighted vector addition scores shown inTable 2are based on optimized parameters as reported byMitchell and Lapata (2010): \u03b1=0.88 and \u03b2=0.12. By shifting the parameters further into the direction of the adjective (i.e., \u03b1=0.90; \u03b2=0.10), P@1 slightly increases to 0.34.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A score of 1 expresses low similarity between phrases, 7 indicates high similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We gratefully acknowledge feedback and comments by the anonymous EACL reviewers, which considerably helped to improve the paper. This work was supported by the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG), and by the German Federal Ministry of Education and Research (BMBF) in the KogniHome project.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Attributes in lexical acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Abdulrahman", |
|
"middle": [], |
|
"last": "Almuhareb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdulrahman Almuhareb. 2006. Attributes in lexical acquisition. Ph.D. thesis, University of Essex.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "From Adjective Glosses to Attribute Concepts: Learning Different Aspects That an Adjective Can Describe", |
|
"authors": [ |
|
{ |
|
"first": "Omid", |
|
"middle": [], |
|
"last": "Bakhshandeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 11th International Conference on Computational Semantics (IWCS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omid Bakhshandeh and James F. Allen. 2015. From Adjective Glosses to Attribute Concepts: Learning Different Aspects That an Adjective Can Describe. In Proceedings of the 11th International Conference on Computational Semantics (IWCS), pages 23-33, London, UK.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1183--1193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Frege in Space: A Program for Compositional Distributional Semantics. Linguistic Issues in Language Technology", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "241--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni, Raffaella Bernardi, and Roberto Zam- parelli. 2014a. Frege in Space: A Program for Compositional Distributional Semantics. Linguistic Issues in Language Technology, 9:241-346.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgiana", |
|
"middle": [], |
|
"last": "Dinu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Germ\u00e0n", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "238--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e0n Kruszewski. 2014b. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Kristina Toutanova and Hua Wu, editors, Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Frames, Concepts and Conceptual Fields", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Barsalou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Frames, Fields and Contrasts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence W. Barsalou. 1992. Frames, Concepts and Conceptual Fields. In A. Lehrer and E.F. Kittay, ed- itors, Frames, Fields and Contrasts, pages 21-74.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Words as Social Tools: An Embodied View on Abstract Concepts", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Borghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ferdinand", |
|
"middle": [], |
|
"last": "Binkofski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anna M. Borghi and Ferdinand Binkofski. 2014. Words as Social Tools: An Embodied View on Ab- stract Concepts. Springer Briefs in Cognition. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Generalisation of Lexical Functions for Composition in Distributional Semantics", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bride", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Van De Cruys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "281--291", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bride, Tim Van de Cruys, and Nicholas Asher. 2015. A Generalisation of Lexical Functions for Composition in Distributional Semantics. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 281- 291, Beijing, China, July. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Ontology Learning and Population from Text. Algorithms, Evaluation and Applications", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Cimiano. 2006. Ontology Learning and Popu- lation from Text. Algorithms, Evaluation and Appli- cations. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e9on", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Predicting the Compositionality of Nominal Compounds: Giving Word Embeddings a Hard Time", |
|
"authors": [ |
|
{ |
|
"first": "Silvio", |
|
"middle": [], |
|
"last": "Cordeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Ramisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Idiart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aline", |
|
"middle": [], |
|
"last": "Villavicencio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1986--1997", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silvio Cordeiro, Carlos Ramisch, Marco Idiart, and Aline Villavicencio. 2016. Predicting the Com- positionality of Nominal Compounds: Giving Word Embeddings a Hard Time. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1986-1997, Berlin, Germany, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Vector space models of word meaning and phrase meaning: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Language and Linguistics Compass", |
|
"volume": "6", |
|
"issue": "10", |
|
"pages": "635--653", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Erk. 2012. Vector space models of word mean- ing and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635-653.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "What do you know about an alligator when you know the company it keeps?", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Semantics & Pragmatics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Erk. 2016. What do you know about an alliga- tor when you know the company it keeps? Seman- tics & Pragmatics, 9:1-63.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "\u00dcber Sinn und Bedeutung. Zeitschrift f\u00fcr Philosophie und philosophische Kritik", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "100", |
|
"issue": "", |
|
"pages": "25--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gottlob Frege. 1892.\u00dcber Sinn und Bedeutung. Zeitschrift f\u00fcr Philosophie und philosophische Kri- tik, 100:25-50.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Experimental Support for a Categorical Compositional Distributional Model of Meaning", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1394--1404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental Support for a Categorical Composi- tional Distributional Model of Meaning. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1394-1404. Association for Computiational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Concrete Sentence Spaces for Compositional Distributional Models of Meaning", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Coecke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Pulman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computing Meaning", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "71--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Grefenstette, Mehrnoosh Sadrzadeh, Stephen Clark, Bob Coecke, and Stephen Pulman. 2014. Concrete Sentence Spaces for Compositional Dis- tributional Models of Meaning. In Harry Bunt, Jo- han Bos, and Stephen Pulman, editors, Computing Meaning, volume 4, pages 71-86. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Concepts, Attributes and Arbitrary Relations", |
|
"authors": [ |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Guarino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Data & Knowledge Engineering", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "249--261", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicola Guarino. 1992. Concepts, Attributes and Ar- bitrary Relations. Data & Knowledge Engineering, 8:249-261.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A regression model of adjective-noun compositionality in distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Emiliano", |
|
"middle": [], |
|
"last": "Guevara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Workshop on Geometrical Models of Natural Language Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional se- mantics. In Roberto Basili and Marco Pennac- chiotti, editors, Proceedings of the 2010 Workshop on Geometrical Models of Natural Language Se- mantics, pages 33-37, Uppsala, Sweden, July. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Distributional structure. Word", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zellig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1954, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A Structured Vector Space Model for Hidden Attribute Meaning in Adjective-Noun Phrases", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Hartung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "430--438", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Hartung and Anette Frank. 2010. A Struc- tured Vector Space Model for Hidden Attribute Meaning in Adjective-Noun Phrases. In Proceed- ings of the 23rd International Conference on Com- putational Linguistics (COLING), Beijing, China, pages 430-438.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Assessing interpretable, attribute-related meaning representations for adjective-noun phrases in a similarity prediction task", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Hartung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Hartung and Anette Frank. 2011a. Assessing interpretable, attribute-related meaning representa- tions for adjective-noun phrases in a similarity pre- diction task. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Lan- guage Semantics, pages 52-61, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Exploring Supervised LDA Models for Assigning Attributes to Adjective-Noun Phrases", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Hartung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "540--551", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Hartung and Anette Frank. 2011b. Exploring Supervised LDA Models for Assigning Attributes to Adjective-Noun Phrases. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 540-551, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Hartung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Hartung. 2015. Distributional Seman- tic Models of Attribute Meaning in Adjectives and Nouns. Ph.D. thesis, Heidelberg University.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Adam: A Method for Stochastic Optimization. International Conference on Learning Representations", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Improving distributional similarity with lessons learned from word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "211--225", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Understanding Semantics. Routledge", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "L\u00f6bner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian L\u00f6bner. 2013. Understanding Semantics. Routledge, 2nd edition.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Computational Linguistics and Deep Learning", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "701--707", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning. 2015. Computational Lin- guistics and Deep Learning. Computational Lin- guistics, 41:701-707.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceed- ings of NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of ICLR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Efficient estimation of word repre- sentations in vector space. In Proceedings of ICLR Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Wordnet: An electronic lexical database", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Miller and Christiane Fellbaum. 1998. Word- net: An electronic lexical database.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Composition in Distributional Models of Semantics", |
|
"authors": [ |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Cognitive Science", |
|
"volume": "34", |
|
"issue": "8", |
|
"pages": "1388--1429", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in Distributional Models of Semantics. Cognitive Science, 34(8):1388-1429.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Zero-shot learning with semantic output codes", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Palatucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dean", |
|
"middle": [], |
|
"last": "Pomerleau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Palatucci, Dean Pomerleau, Geoffrey Hinton, and Tom M. Mitchell. 2009. Zero-shot learn- ing with semantic output codes. In Proceedings of NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar, October. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Exploring the value space of attributes: Unsupervised bidirectional clustering of adjectives in German", |
|
"authors": [ |
|
{ |
|
"first": "Wiebke", |
|
"middle": [], |
|
"last": "Petersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Hellwig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wiebke Petersen and Oliver Hellwig. 2016. Exploring the value space of attributes: Unsupervised bidirec- tional clustering of adjectives in German. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 2839-2848, Osaka, Japan, De- cember. The COLING 2016 Organizing Committee. Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Ren\u00e9 Witte, Hamish Cunningham, Jon Patrick, Elena Beisswanger, Ekaterina Buyko, Udo Hahn, Karin Verspoor, and Anni R. Coden, editors, Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta, May. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "A word embedding approach to predicting the compositionality of multiword expressions", |
|
"authors": [ |
|
{ |
|
"first": "Bahar", |
|
"middle": [], |
|
"last": "Salehi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Cook", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "977--983", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the com- positionality of multiword expressions. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 977-983, Denver, Colorado, May-June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives", |
|
"authors": [ |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "499--505", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2016. Symmetric patterns and coordinations: Fast and en- hanced representations of verbs and adjectives. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 499-505, San Diego, California, June. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Sparse and low-rank tensor decomposition", |
|
"authors": [ |
|
{ |
|
"first": "Parikshit", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gongguo", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parikshit Shah, Nikhil Rao, and Gongguo Tang. 2015. Sparse and low-rank tensor decomposition. In Pro- ceedings of NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Mod- els for Semantic Compositionality Over a Senti- ment Treebank. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA, October. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "WebChild: Harvesting and Organizing Commonsense Knowledge from the Web", |
|
"authors": [ |
|
{ |
|
"first": "Niket", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerard", |
|
"middle": [], |
|
"last": "De Melo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 7th ACM International Conference on Web Search and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "523--532", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niket Tandon, Gerard de Melo, Fabian Suchanek, and Gerhard Weikum. 2014. WebChild: Harvesting and Organizing Commonsense Knowledge from the Web. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining, pages 523-532, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Semantic Vector Products: Some Initial Investigations", |
|
"authors": [ |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Widdows", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2nd Conference on Quantum Interaction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominic Widdows. 2008. Semantic Vector Products: Some Initial Investigations. In Proceedings of the 2nd Conference on Quantum Interaction, Oxford, UK.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Attribute selection performance of the full additive model after training on all attributes, specific subsets, and in zero-shot learning attributes, and (ii) its predictive capacity in a zeroshot learning scenario.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "ASTA-5 scores over different levels of human similarity ratings (cf. Experiment 4)", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "WETNESS), scentless wisp (SMELL), vehement defense (STRENGTH)", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Num. Attributes Train. Triples Num.</td><td>Example Phrases</td></tr><tr><td>Core</td><td>10</td><td>72</td><td>silvery hair (COLOR), huge wave (SIZE), longstanding conflict (DURATION)</td></tr><tr><td>Selected</td><td>23</td><td>153</td><td>sufficient food (QUANTITY), grave decision (IMPORTANCE), broad river (WIDTH)</td></tr><tr><td>Measurable</td><td>65</td><td>261</td><td>heavy load (WEIGHT), short hair (LENGTH), slow walker (SPEED)</td></tr><tr><td>Property</td><td>73</td><td>300</td><td>young people (AGE), high mountain (HEIGHT), straight line (SHAPE)</td></tr><tr><td>All</td><td>254</td><td>869</td><td>dry paint (</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"text": "Overview of subsets of attributes contained in HeiPLAS data, together with example phrases", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>Compositional Model</td><td colspan=\"2\">P@1 P@5</td></tr><tr><td/><td>Adjective</td><td>0.33</td><td>0.50</td></tr><tr><td/><td>Noun</td><td>0.03</td><td>0.10</td></tr><tr><td>predict models</td><td>Vector Addition (\u2295) Weighted Vector Addition Vector Multiplication ( ) Adj. Dilation (\u03bb = 2) Noun Dilation (\u03bb = 2) Full Add. Weighted Noun</td><td>0.24 0.33 0.00 0.06 0.33 0.33</td><td>0.45 0.51 0.02 0.18 0.51 0.54</td></tr><tr><td/><td>Full Add. Weighted Adjective</td><td>0.46</td><td>0.71</td></tr><tr><td/><td colspan=\"2\">Full Add. Weighted Adj. and Noun 0.56</td><td>0.75</td></tr><tr><td/><td>Trained Tensor Product (\u2297)</td><td>0.44</td><td>0.57</td></tr><tr><td>count</td><td>C-LDA</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>: Results of Experiment 3 (Spearman's \u03c1</td></tr><tr><td>between human judgments and model predictions)</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |