{
"paper_id": "Q14-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:21.844495Z"
},
"title": "Multi-Modal Models for Concrete and Abstract Concept Meaning",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition. Most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multimodal approach has only been established when evaluating on such concepts. We therefore investigate which concepts can be effectively learned by multi-modal models. We show that concreteness determines both which linguistic features are most informative and the impact of perceptual input in such models. We then introduce ridge regression as a means of propagating perceptual information from concrete nouns to more abstract concepts that is more robust than previous approaches. Finally, we present weighted gram matrix combination, a means of combining representations from distinct modalities that outperforms alternatives when both modalities are sufficiently rich.",
"pdf_parse": {
"paper_id": "Q14-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition. Most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multimodal approach has only been established when evaluating on such concepts. We therefore investigate which concepts can be effectively learned by multi-modal models. We show that concreteness determines both which linguistic features are most informative and the impact of perceptual input in such models. We then introduce ridge regression as a means of propagating perceptual information from concrete nouns to more abstract concepts that is more robust than previous approaches. Finally, we present weighted gram matrix combination, a means of combining representations from distinct modalities that outperforms alternatives when both modalities are sufficiently rich.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "What information is needed to learn the meaning of a word? Children learning words are exposed to a diverse mix of information sources. These include clues in the language itself, such as nearby words or speaker intention, but also what the child perceives about the world around it when the word is heard. Learning the meaning of words requires not only a sensitivity to both linguistic and perceptual input, but also the ability to process and combine information from these modalities in a productive way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many computational semantic models represent words as real-valued vectors, encoding their relative frequency of occurrence in particular forms and contexts in linguistic corpora (Sahlgren, 2006; Turney et al., 2010) . Motivated both by parallels with human language acquisition and by evidence that many word meanings are grounded in the perceptual system (Barsalou et al., 2003) , recent research has explored the integration into text-based models of input that approximates the visual or other sensory modalities (Silberer and Lapata, 2012; Bruni et al., 2014) . Such models can learn higher-quality semantic representations than conventional corpusonly models, as evidenced by a range of evaluations.",
"cite_spans": [
{
"start": 178,
"end": 194,
"text": "(Sahlgren, 2006;",
"ref_id": "BIBREF39"
},
{
"start": 195,
"end": 215,
"text": "Turney et al., 2010)",
"ref_id": "BIBREF43"
},
{
"start": 356,
"end": 379,
"text": "(Barsalou et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 516,
"end": 543,
"text": "(Silberer and Lapata, 2012;",
"ref_id": "BIBREF41"
},
{
"start": 544,
"end": 563,
"text": "Bruni et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the majority of perceptual input for the models in these studies corresponds directly to concrete noun concepts, such as chocolate or cheeseburger, and the superiority of the multi-modal over the corpus-only approach has only been established when evaluations include such concepts (Leong and Mihalcea, 2011; Bruni et al., 2012; Roller and Schulte im Walde, 2013; Silberer and Lapata, 2012) . It is thus unclear if the multi-modal approach is effective for more abstract words, such as guilt or obesity. Indeed, since empirical evidence indicates differences in the representational frameworks of both concrete and abstract concepts (Paivio, 1991; Hill et al., 2013) , and verb and noun concepts (Markman and Wisniewski, 1997) , perceptual information may not fulfill the same role in the representation of the various concept types. This potential challenge to the multi-modal approach is of particular practical importance since concrete nouns constitute only a small proportion of the open-class, meaning-bearing words in everyday language (Section 2).",
"cite_spans": [
{
"start": 291,
"end": 317,
"text": "(Leong and Mihalcea, 2011;",
"ref_id": "BIBREF27"
},
{
"start": 318,
"end": 337,
"text": "Bruni et al., 2012;",
"ref_id": "BIBREF4"
},
{
"start": 338,
"end": 372,
"text": "Roller and Schulte im Walde, 2013;",
"ref_id": "BIBREF36"
},
{
"start": 373,
"end": 399,
"text": "Silberer and Lapata, 2012)",
"ref_id": "BIBREF41"
},
{
"start": 642,
"end": 656,
"text": "(Paivio, 1991;",
"ref_id": "BIBREF33"
},
{
"start": 657,
"end": 675,
"text": "Hill et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 705,
"end": 735,
"text": "(Markman and Wisniewski, 1997)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In light of these considerations, this paper addresses three questions: (1) Which information sources (modalities) are important for acquiring concepts of different types? (2) Can perceptual input be propagated effectively from concrete to more abstract words? (3) What is the best way to combine information from the different sources?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We construct models that acquire semantic representations for four sets of concepts: concrete nouns, abstract nouns, concrete verbs and abstract verbs. The linguistic input to the models comes from the recently released Google Syntactic N-Grams Corpus (Goldberg and Orwant, 2013) , from which a selection of linguistic features are extracted. Perceptual input is approximated by data from the McRae et al. (2005) norms, which encode perceptual properties of concrete nouns, and the ESPGame dataset (Von Ahn and Dabbish, 2004) , which contains manually generated descriptions of 100,000 images.",
"cite_spans": [
{
"start": 252,
"end": 279,
"text": "(Goldberg and Orwant, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 393,
"end": 412,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF30"
},
{
"start": 498,
"end": 525,
"text": "(Von Ahn and Dabbish, 2004)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address (1) we extract representations for each concept type from combinations of information sources. We first focus on different classes of linguistic features, before extending our models to the multi-modal context. While linguistic information overall effectively reflects the meaning of all concept types, we show that features encoding syntactic patterns are only valuable for the acquisition of abstract concepts. On the other hand, perceptual information, whether directly encoded or propagated through the model, plays a more important role in the representation of concrete concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addressing (2), we propose ridge regression (Myers, 1990) as a means of propagating features from concrete nouns to more abstract concepts. The regularization term in ridge regression encourages solutions that generalize well across concept types. We show that ridge regression effectively propagates perceptual information to abstract nouns and concrete verbs, and is overall preferable to both linear regression and the method of Johns and Jones (2012) applied to a similar task by Silberer and Lapata (2012) . However, for all propagation methods, the impact of integrating perceptual information depends on the concreteness of the target concepts. Indeed, for abstract verbs, the most abstract concept type in our evaluations, perceptual input actually degrades representation quality. This highlights the need to consider the concreteness of the target domain when constructing multi-modal models.",
"cite_spans": [
{
"start": 487,
"end": 513,
"text": "Silberer and Lapata (2012)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address (3), we present various means of combining information from different modalities. We propose weighted gram matrix combination, a technique in which representations of distinct modalities are mapped to a space of common dimension where coordinates reflect proximity to other concepts. This transformation, which has been shown to enhance semantic representations in the context of verbclustering (Reichart and Korhonen, 2013) , reduces representation sparsity and facilitates a productbased combination that results in greater inter-modal dependency. Weighted gram matrix combination outperforms alternatives such as concatenation and Canonical Correlation Analysis (CCA) (Hardoon et al., 2004) when combining representations from two similarly rich information sources.",
"cite_spans": [
{
"start": 406,
"end": 435,
"text": "(Reichart and Korhonen, 2013)",
"ref_id": "BIBREF35"
},
{
"start": 682,
"end": 704,
"text": "(Hardoon et al., 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 3, we present experiments with linguistic features designed to address question (1). These analyses are extended to multi-modal models in Section 4, where we also address (2) and (3). We first discuss the relevance of concreteness and part-ofspeech (lexical function) to concept representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A large and growing body of psychological evidence indicates differences between abstract and concrete concepts. 1 It has been shown that concrete words are more easily learned, remembered and processed than abstract words (Paivio, 1991; Schwanenflugel and Shoben, 1983) , while neuroimaging studies demonstrate differences in brain activity when subjects are presented with stimuli corresponding to the two concept types (Binder et al., 2005) .",
"cite_spans": [
{
"start": 223,
"end": 237,
"text": "(Paivio, 1991;",
"ref_id": "BIBREF33"
},
{
"start": 238,
"end": 270,
"text": "Schwanenflugel and Shoben, 1983)",
"ref_id": "BIBREF40"
},
{
"start": 422,
"end": 443,
"text": "(Binder et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concreteness and Word Meaning",
"sec_num": "2"
},
{
"text": "The abstract/concrete distinction is important to computational semantics for various reasons. While many models construct representations of concrete words (Andrews et al., 2009; Landauer and Dumais, 1997) , abstract words are in fact far more common in everyday language. For instance, based on an analysis of those noun concepts in the University of South Florida dataset (USF) and their occurrence in the British National Corpus (BNC) (Leech et al., 1994) , 72% of noun tokens in corpora are rated by human Average Concreteness Rating Figure 1 : Boxplot of concreteness distributions for noun and verb concepts in the USF data, with selected example concepts. The bold vertical line is the mean, boxes extend from the first to the third quartile, and dots represent outliers.",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "(Andrews et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 180,
"end": 206,
"text": "Landauer and Dumais, 1997)",
"ref_id": "BIBREF25"
},
{
"start": 439,
"end": 459,
"text": "(Leech et al., 1994)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 539,
"end": 547,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Concreteness and Word Meaning",
"sec_num": "2"
},
{
"text": "judges as more abstract than the noun war, a concept that many would already consider quite abstract. 2 The recent interest in multi-modal semantics further motivates a principled modelling approach to lexical concreteness. Many multi-modal models implicitly distinguish concrete and abstract concepts since their perceptual input corresponds only to concrete words (Bruni et al., 2012; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013) . However, given that many abstract concepts express relations or modifications of concrete concepts (Gentner and Markman, 1997), it is reasonable to expect that perceptual information about concrete concepts could also enhance the quality of more abstract representations in an appropriately constructed model. Moreover, concreteness is closely related to more functional lexical distinctions, such as those between adjectives, nouns and verbs. An analysis of the USF dataset, which includes concreteness ratings for over 4,000 words collected from thousands of participants, indicates that on average verbs (mean concreteness, 3.64) are considered more abstract than nouns (mean concreteness, 4.91), an effect illustrated in Figure 1 . This connection between lexical function and concreteness suggests that a sensitivity to concreteness could improve models that already make principled distinctions between words based on their part-of-speech (POS) (Im Walde, 2006; Baroni and Zamparelli, 2010) .",
"cite_spans": [
{
"start": 102,
"end": 103,
"text": "2",
"ref_id": null
},
{
"start": 366,
"end": 386,
"text": "(Bruni et al., 2012;",
"ref_id": "BIBREF4"
},
{
"start": 387,
"end": 413,
"text": "Silberer and Lapata, 2012;",
"ref_id": "BIBREF41"
},
{
"start": 414,
"end": 448,
"text": "Roller and Schulte im Walde, 2013)",
"ref_id": "BIBREF36"
},
{
"start": 1419,
"end": 1447,
"text": "Baroni and Zamparelli, 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1176,
"end": 1184,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Concreteness and Word Meaning",
"sec_num": "2"
},
{
"text": "Although the focus of this paper is on multimodal models, few conventional semantic models make principled distinctions between concepts based on function or concreteness. Before turning to the multi-modal case, we thus investigate whether",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concreteness and Word Meaning",
"sec_num": "2"
},
{
"text": "It has long been known that aspects of word meaning can be inferred from nearby words in corpora. Approaches that exploit this fact are often called distributional models (Sahlgren, 2006; Turney et al., 2010) . We take a distributional approach to learning linguistic representations. The advantage of using distributional methods to learn representations from corpora versus approaches that rely on knowledge bases (Pedersen et al., 2004; Leong and Mihalcea, 2011) is that they are more scalable, easily applicable across languages and plausibly reflect the process of human word learning (Landauer and Dumais, 1997; Griffiths et al., 2007) . We group distributional features into three classes to test which forms of linguistic information are most pertinent to the abstract/concrete and verb/noun distinctions.",
"cite_spans": [
{
"start": 171,
"end": 187,
"text": "(Sahlgren, 2006;",
"ref_id": "BIBREF39"
},
{
"start": 188,
"end": 208,
"text": "Turney et al., 2010)",
"ref_id": "BIBREF43"
},
{
"start": 416,
"end": 439,
"text": "(Pedersen et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 440,
"end": 465,
"text": "Leong and Mihalcea, 2011)",
"ref_id": "BIBREF27"
},
{
"start": 590,
"end": 617,
"text": "(Landauer and Dumais, 1997;",
"ref_id": "BIBREF25"
},
{
"start": 618,
"end": 641,
"text": "Griffiths et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concreteness and Linguistic Features",
"sec_num": "3"
},
{
"text": "All features are extracted from The Google Syntactic N-grams Corpus. The dataset contains counted dependency-tree fragments for over 10bn words of the English Google Books Corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concreteness and Linguistic Features",
"sec_num": "3"
},
{
"text": "Lexical Features Our lexical features are the cooccurrence counts of a concept word with each of the other 2,529 concepts in the USF data. Cooccurrences are counted in a 5-word window, and, as elsewhere (Erk and Pad\u00f3, 2008) , weighted by pointwise mutual information (PMI) to control for the underlying frequency of both concept and word.",
"cite_spans": [
{
"start": 203,
"end": 223,
"text": "(Erk and Pad\u00f3, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Classes",
"sec_num": "3.1"
},
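{
"text": "A minimal Python sketch of the PMI weighting described above (an illustration, not code from the paper; helper names are hypothetical, and p(w, c) is estimated crudely from window counts):\nimport math\nfrom collections import Counter\n\ndef pmi_vectors(tokens, concepts, window=5):\n    # Count unigram frequencies and concept-concept co-occurrences in a +/- 5-word window.\n    freq = Counter(tokens)\n    total = len(tokens)\n    index = {c: i for i, c in enumerate(concepts)}\n    co = Counter()\n    for i, w in enumerate(tokens):\n        if w not in index:\n            continue\n        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):\n            if j != i and tokens[j] in index:\n                co[(w, tokens[j])] += 1\n    # PMI(w, c) = log(p(w, c) / (p(w) p(c))); negative values are clipped to zero.\n    vecs = {w: [0.0] * len(concepts) for w in concepts}\n    for (w, c), n in co.items():\n        pmi = math.log((n / total) / ((freq[w] / total) * (freq[c] / total)))\n        vecs[w][index[c]] = max(pmi, 0.0)\n    return vecs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Classes",
"sec_num": "3.1"
},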
{
"text": "Many words function as more than one POS, and this variation can be indicative of meaning (Manning, 2011 nouns, such as shiver or walk, often refer to processes rather than entities. To capture such effects, we count the frequency of occurrence with the POS categories ajdective, adverb, noun and verb.",
"cite_spans": [
{
"start": 90,
"end": 104,
"text": "(Manning, 2011",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "POS-tag Features",
"sec_num": null
},
{
"text": "Grammatical Features Grammatical role is a strong predictor of semantics (Gildea and Jurafsky, 2002) . For instance, the subject of transitive verbs is more likely to refer to an animate entity than a noun chosen at random. Syntactic context also predicts verb semantics (Kipper et al., 2008) . We thus count the frequency of nouns in a range of (nonlexicalized) syntactic contexts, and of verbs in one of the six most common subcategorization-frame classes as defined in Van de Cruys et al. (2012). These contexts are detailed in Table 1 .",
"cite_spans": [
{
"start": 73,
"end": 100,
"text": "(Gildea and Jurafsky, 2002)",
"ref_id": "BIBREF13"
},
{
"start": 271,
"end": 292,
"text": "(Kipper et al., 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 531,
"end": 538,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "POS-tag Features",
"sec_num": null
},
{
"text": "We create evaluation sets of abstract and concrete concepts, and introduce a complementary dichotomy between nouns and verbs, the two POS categories most fundamental to propositional meaning. To construct these sets, we extract nouns and verbs from word pairs in the USF data based on their majority POS-tag in the lemmatized BNC (Leech et al., 1994) , excluding any word not assigned to either of the POS categories in more than 70% of instances. From the resulting 2175 nouns and 354 verbs, the abstract-concrete distinction is drawn by ordering words according to concreteness and sampling at random from the first and fourth quartiles. Any concrete nouns not occurring in the McRae et al. 2005Property Norm dataset were also excluded. For each list of concepts L = concrete nouns, concrete verbs, abstract nouns, abstract verbs, together with lists all nouns and all verbs, a corresponding set of pairs {(w 1 , w 2 ) \u2208 U SF : w 1 , w 2 \u2208 L} is defined for evaluation. These details are summarized in Table 2 . Evaluation lists, sets of pairs and USF scores are downloadable from our website.",
"cite_spans": [
{
"start": 330,
"end": 350,
"text": "(Leech et al., 1994)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 1004,
"end": 1011,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Sets",
"sec_num": "3.2"
},
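{
"text": "A brief sketch of the quartile construction described above (an illustration under the paper's description; names and the sample size are hypothetical):\nimport random\n\ndef split_by_concreteness(words, concreteness, n, seed=0):\n    # Order words by concreteness rating; the abstract set is sampled at random\n    # from the first quartile and the concrete set from the fourth (Section 3.2).\n    ranked = sorted(words, key=lambda w: concreteness[w])\n    q = len(ranked) // 4\n    rng = random.Random(seed)\n    return rng.sample(ranked[:q], min(n, q)), rng.sample(ranked[-q:], min(n, q))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Sets",
"sec_num": "3.2"
},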
{
"text": "All models are evaluated by measuring correlations with the free-association scores in the USF dataset (Nelson et al., 2004) . This dataset contains the freeassociation strength of over 150,000 word pairs. 3 These data reflect the cognitive proximity of concepts and have been widely used in NLP as a goldstandard for computational models (Andrews et al., 2009; Feng and Lapata, 2010; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013) .",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Nelson et al., 2004)",
"ref_id": "BIBREF32"
},
{
"start": 339,
"end": 361,
"text": "(Andrews et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 362,
"end": 384,
"text": "Feng and Lapata, 2010;",
"ref_id": "BIBREF9"
},
{
"start": 385,
"end": 411,
"text": "Silberer and Lapata, 2012;",
"ref_id": "BIBREF41"
},
{
"start": 412,
"end": 446,
"text": "Roller and Schulte im Walde, 2013)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "3.3"
},
{
"text": "For evaluation pairs (c 1 , c 2 ) we calculate the cosine similarity between our learned feature representations for c 1 and c 2 , a standard measure of the proximity of two vectors (Turney et al., 2010), and follow previous studies (Leong and Mihalcea, 2011; Huang et al., 2012) in using Spearman's \u03c1 as a measure of correlation between these values and our goldstandard. 4 Table 3 : Spearman correlation \u03c1 of cosine similarity between vector representations derived from three feature classes with USF scores. * indicates statistically significant correlations (p < 0.05 ).",
"cite_spans": [
{
"start": 373,
"end": 374,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "3.3"
},
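{
"text": "The evaluation loop itself is straightforward; a sketch assuming SciPy, with hypothetical variable names:\nfrom scipy.spatial.distance import cosine\nfrom scipy.stats import spearmanr\n\ndef evaluate(pairs, vectors, gold):\n    # Correlate cosine similarity of learned vectors with USF association strength.\n    model_scores, usf_scores = [], []\n    for w1, w2 in pairs:\n        if w1 in vectors and w2 in vectors:\n            model_scores.append(1.0 - cosine(vectors[w1], vectors[w2]))\n            usf_scores.append(gold[(w1, w2)])\n    rho, p = spearmanr(model_scores, usf_scores)\n    return rho, p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "3.3"
},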
{
"text": "The performance of each feature class on the evaluation sets is detailed in Table 3 . When all linguistic features are included, performance is somewhat better on noun concepts (\u03c1 = 0.182) than verbs (\u03c1 = 0.172). However, while correlations are significant on concrete (\u03c1 = 0.181) and abstract nouns (\u03c1 = 0.247) and concrete verbs, the effect is not significant on abstract verbs (although it is on verbs overall). The highest correlations for the linguistic features together are on abstract nouns (\u03c1 = 0.247) and concrete verbs (\u03c1 = 0.267). Referring back to the continuum in Figure 1 , it is possible that there is an optimum concreteness level, exhibited by abstract nouns and concrete verbs, at which conceptual meaning is best captured by linguistic models.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 3",
"ref_id": null
},
{
"start": 578,
"end": 586,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "The results indicate that the three feature classes convey distinct information. It is perhaps unsurprising that lexical features produce the best performance in the majority of cases; the value of lexical co-occurrence statistics in conveying word meaning is expressed in the well known distributional hypothesis (Harris, 1954) . More interestingly, on abstract concepts the contribution of POS-tag (nouns, \u03c1 = 0.119; verbs, \u03c1 = 0.123 ) and grammatical features (nouns, \u03c1 = 0.121; verbs, \u03c1 = 0.114) is notably higher than on the corresponding concrete concepts. The importance of such features to modelling free-association between abstract concepts suggests that they may convey information about how concepts are (subjectively) organized and interrelated in the minds of language users, independent of their realisation in the physical world. Indeed, since abstract representations rely to a lesser extent than concrete representations on perceptual input (Section 4), it is perhaps unsurprising that more of their meaning is reflected in subtle linguistic patterns.",
"cite_spans": [
{
"start": 314,
"end": 328,
"text": "(Harris, 1954)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "The results in this section demonstrate that differeach representation, then concatenate and then renormalize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "ent information is required to learn representations for abstract and concrete concepts and for noun and verb concepts. In the next section, we investigate how perceptual information fits into this equation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "As noted in Section 2, there is experimental evidence that perceptual information plays a distinct role in the representation of different concept types. We explore whether this finding extends to computational models by integrating such information into our corpus-based approaches. We focus on two aspects of the integration process. Propagation: Can models infer useful information about abstract nouns and verbs from perceptual information corresponding to concrete nouns? And combination: How can linguistic and (propagated or actual) perceptual information be integrated into a single, multi-modal representation? We begin by introducing the two sources of perceptual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acquiring Multi-Modal Representations",
"sec_num": "4"
},
{
"text": "The McRae Dataset The McRae et al. 2005Property Norms dataset is commonly used as a perceptual information source in cognitively-motivated semantic models (Kelly et al., 2010; Roller and Schulte im Walde, 2013). The dataset contains properties of over 500 concrete noun concepts produced by 30 human annotators. The proportion of subjects producing each property gives a measure of the strength of that property for a given concept. We encode this data in vectors with coordinates for each of the 2,526 properties in the dataset. A concept representation contains (real-valued) feature strengths in places corresponding to the features of that concept and zeros elsewhere. Having defined the concrete noun evaluation set as the 303 concepts found in both the USF and McRae datasets, this information is available for all concrete nouns.",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Kelly et al., 2010;",
"ref_id": "BIBREF22"
},
{
"start": 176,
"end": 186,
"text": "Roller and",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perceptual Information Sources",
"sec_num": "4.1"
},
{
"text": "The ESP-Game Dataset To complement the cognitively-driven McRae data with a more explicitly visual information source, we also extract information from the ESP-Game dataset (Von Ahn and Dabbish, 2004) of 100,000 photographs, each annotated with a list of entities depicted in that image. This input enables connections to be made between concepts that co-occur in scenes, and thus might be experienced together by language learners at a given time. Because we want our models to reflect human concept learning in inferring conceptual knowledge from comparatively unstructured data, we use the ESP-Game dataset in preference to resources such as ImageNet (Deng et al., 2009) , in which the conceptual hierarchy is directly encoded by expert annotators. An additional motivation is that ESP-Game was produced by crowdsourcing a simple task with untrained annotators, and thus represents a more scalable class of data source. We represent the ESP-Game data in 100,000 dimensional vectors, with co-ordinates corresponding to each image in the dataset. A concept representation contains a 1 in any place that corresponds to an image in which the concept appears, and a 0 otherwise. Although it is possible to portray actions and processes in static images, and several of the ESP-Game images are annotated with verb concepts, for a cleaner analysis of the information propagation process we only include ESP input in our models for the concrete nouns in the evaluation set.",
"cite_spans": [
{
"start": 654,
"end": 673,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perceptual Information Sources",
"sec_num": "4.1"
},
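{
"text": "The ESP-Game encoding amounts to a binary concept-by-image incidence matrix; a minimal sketch (an illustration; names are hypothetical):\nimport numpy as np\n\ndef esp_vectors(image_tags, concepts):\n    # image_tags: one set of annotated entities per image.\n    # Row i holds a 1 wherever concept i appears in an image, and a 0 otherwise.\n    index = {c: i for i, c in enumerate(concepts)}\n    M = np.zeros((len(concepts), len(image_tags)), dtype=np.int8)\n    for j, tags in enumerate(image_tags):\n        for t in tags:\n            if t in index:\n                M[index[t], j] = 1\n    return M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perceptual Information Sources",
"sec_num": "4.1"
},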
{
"text": "The data encoding outlined above results in perceptual representations of dimension \u2248 100, 000, for which, on average, fewer than 0.5% of entries are non-zero 6 . In contrast, in our full linguistic representations of nouns (dimension \u2248 4, 000) and verbs (dimension \u2248 8, 000) (Section 3), an average of 24% of entries are non-zero. One of the challenges for the propagation and combination methods described in the following subsections is therefore to manage the differences in dimension and sparsity between linguistic and perceptual representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perceptual Information Sources",
"sec_num": "4.1"
},
{
"text": "Johns and Jones Silberer and Lapata (2012) apply a method designed by Johns and Jones (2012) to infer quasi-perceptual representations for a concept in the case that actual perceptual information is not available. Translating their approach to the present context, for verbs and abstract nouns we infer quasiperceptual representations based on the perceptual features of concrete nouns that are nearby in the semantic space defined by the linguistic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},
{
"text": "In the first step of their two-step method, for each abstract noun or verb k, a quasi-perceptual representation is computed as an average of the perceptual representations of the concrete nouns, weighted by the proximity between these nouns and k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},
{
"text": "k p = c\u2208C S(k l , c l ) \u03bb \u2022 c p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},
{
"text": "whereC is the set of concrete nouns, c p and k p are the perceptual representations for c and k respectively, and c l and k l the linguistic representations. The exponent parameter \u03bb reflects the learning rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},
{
"text": "Following Johns and Jones (2012), we define the proximity function S between noun concepts to be cosine similarity. However, because our verb and noun representations are of different dimension, we take verb-noun proximity to be the PMI between the two words in the corpus, with co-occurrences counted within a 5-word window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},
{
"text": "In step two, the initial quasi-perceptual representations are inferred for a second time, but with the weighted average calculated over the perceptual or initial quasi-perceptual representations of all other words, not just concrete nouns. As with Johns and Jones (2012), we set the learning rate parameter \u03bb to be 3 in the first step and 13 in the second.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},
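{
"text": "A sketch of step one of this method under the definitions above (an illustration assuming NumPy; for verb targets, S would instead be the PMI proximity just described):\nimport numpy as np\n\ndef cos(u, v):\n    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))\n\ndef propagate(k_l, concrete_l, concrete_p, lam=3.0):\n    # Quasi-perceptual vector for k: average of the concrete nouns' perceptual\n    # vectors, weighted by S(k_l, c_l)^lambda (lambda = 3 in step one; step two\n    # repeats this with lambda = 13 over all words' representations).\n    weights = np.array([cos(k_l, c) ** lam for c in concrete_l])\n    return weights @ np.asarray(concrete_p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Propagation",
"sec_num": "4.2"
},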
{
"text": "As an alternative propagation method we propose ridge regression (Myers, 1990) . Ridge regression is a variant of least squares regression in which a regularization term is added to the training objective to favor solutions with certain properties. Here we apply it to learn parameters for linear maps from linguistic representations of concrete nouns to features in their perceptual representations. For concepts with perceptual representations of dimension n p , we learn n p linear functions f i : R n l \u2192 R that map the linguistic representations (of dimension n l ) to a particular perceptual feature i. These functions are then applied together to map the linguistic representations of abstract nouns and verbs to full quasi-perceptual representations. 7 As our model is trained on concrete nouns but applied to other concept types, we do not wish the mapping to reflect the training data too faithfully. To mitigate against this we define our regularization term as the Euclidian l 2 norm of the inferred parameter vector. This term ensures that the regression favors lower coefficients and a smoother solution function, which should provide better generalization performance than simple linear regression. The objective for learning the f i is then to minimize",
"cite_spans": [
{
"start": 65,
"end": 78,
"text": "(Myers, 1990)",
"ref_id": "BIBREF31"
},
{
"start": 759,
"end": 760,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ridge Regression",
"sec_num": null
},
{
"text": "aX \u2212 Y i 2 2 + a 2 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ridge Regression",
"sec_num": null
},
{
"text": "where a is the vector of regression coefficients, X is a matrix of linguistic representations and Y i a vector of perceptual feature i for the set of concrete nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ridge Regression",
"sec_num": null
},
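{
"text": "Since the objective is standard ridge regression, the propagation step can be sketched with scikit-learn's multi-output Ridge (an illustration; the matrices are placeholders with shapes from Sections 3-4, and alpha is an assumed regularization strength, not a value reported in the paper):\nimport numpy as np\nfrom sklearn.linear_model import Ridge\n\n# X: linguistic vectors of the concrete nouns; Y: their perceptual vectors.\n# One regularized linear map per perceptual feature, fit jointly.\nX = np.random.rand(303, 4000)\nY = np.random.rand(303, 2526)\nmodel = Ridge(alpha=1.0).fit(X, Y)  # the l2 penalty favors a smoother solution\n\n# Map linguistic vectors of abstract words to quasi-perceptual vectors.\nX_abstract = np.random.rand(100, 4000)\nquasi_perceptual = model.predict(X_abstract)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ridge Regression",
"sec_num": null
},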
{
"text": "We now investigate ways in which the (quasi-) perceptual representations acquired via these methods can be combined with linguistic representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ridge Regression",
"sec_num": null
},
{
"text": "Canonical Correlation Analysis Canonical correlation analysis (CCA) (Hardoon et al., 2004) is an established statistical method for exploring relationships between two sets of random variables. The method determines a linear transformation of the space spanned by each of the sets of variables, such that the correlations between the sets of transformed variables is maximized. Silberer and Lapata (2012) apply CCA in the present context of information fusion, with one set of random variables corresponding to perceptual features and another corresponding to linguistic features. Applied in this way, CCA provides a mechanism for reducing the dimensionality of the linguistic and perceptual representations such that the important interactions between them are preserved. 8 The transformed linguistic and perceptual vectors are then concatenated. We follow Silberer and Lapata by applying a kernalized variant of CCA. 9",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "(Hardoon et al., 2004)",
"ref_id": "BIBREF16"
},
{
"start": 378,
"end": 404,
"text": "Silberer and Lapata (2012)",
"ref_id": "BIBREF41"
},
{
"start": 773,
"end": 774,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "7 Because the POS-tag and grammatical features are different for nouns and for verbs, we exclude them from our linguistic representations when implementing ridge regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "8 Dimensionality reduction is desirable in the present context because of the sparsity of our perceptual representations. 9 The KernelCCA package in Python: http://pythonhosted.org/apgl/KernelCCA.html",
"cite_spans": [
{
"start": 122,
"end": 123,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
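{
"text": "For reference, the fusion step can be sketched with scikit-learn's plain (non-kernelized) CCA; the paper itself follows Silberer and Lapata in using a kernelized variant via the KernelCCA package, so this is only an approximation, with placeholder matrices and an assumed n_components:\nimport numpy as np\nfrom sklearn.cross_decomposition import CCA\n\n# Row-aligned linguistic and perceptual matrices for the same concepts.\nling = np.random.rand(303, 500)\nperc = np.random.rand(303, 400)\ncca = CCA(n_components=50).fit(ling, perc)\nling_c, perc_c = cca.transform(ling, perc)\nfused = np.hstack([ling_c, perc_c])  # concatenate the transformed views",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},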
{
"text": "Weighted Gram Matrix Combination The method we propose as an alternative means of fusing linguistic and extra-linguistic information is weighted gram matrix combination, which derives from an information combination technique applied to verb clustering by Reichart and Korhonen (2013) . For a set of concepts C = {c 1 , . . . , c n } with representations {r 1 , . . . , r n }, the method involves creating an n \u00d7 n weighted gram matrix L in which",
"cite_spans": [
{
"start": 256,
"end": 284,
"text": "Reichart and Korhonen (2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "L ij = S(r i , r j ) \u2022 \u03c6(r i ) \u2022 \u03c6(r j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "Here, S is again a similarity function (we use cosine similarity), and \u03c6(r) is the quality score of r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "The quality scoring function \u03c6 can be any mapping R n \u2192 R that reflects the importance of a concept relative to other concepts in C. In the present context, we follow Reichart and Korhonen (2013) in defining a quality score \u03c6 as the average cosine similarity of a concept with all other concepts in C",
"cite_spans": [
{
"start": 167,
"end": 195,
"text": "Reichart and Korhonen (2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "\u03c6(r j ) = 1 n n i=1 S(r i , r j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "For c j \u2208 C, the matrix L then encodes a scalar projection of r j onto the other members r i\u2264n , weighted by their quality. Each word representation in the set is thus mapped into a new space of dimension n determined by the concepts in C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "Converting concept representations to weighted gram matrix form has several advantages in the present context. First, both when evaluating and applying semantic representations, we generally require models to determine relations between concepts relative to others. We might, for instance, require close associates of a given word, a selection of potential synonyms, or the two most similar search queries in a given set. This relative nature of semantics is reflected by projecting representations into a space defined by the set of concepts themselves, rather than low-level features. It is also captured by the quality weighting, which lends primacy to concept dimensions that are central to the space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "Second, mapping representations of different dimension into vector spaces of equal dimension results in dense representations of equal dimension for each modality. This naturally lends equal weighting or status to each modality and resolves any issues of representations sparsity. In addition, the dimension equality in particular enables a wider range of mathematical operations for combining information sources. Here, we follow Reichart and Korhonen (2013) in taking the product of the linguistic and perceptual weighted gram matrices L and P , producing a new matrix containing fused representations for each concept",
"cite_spans": [
{
"start": 431,
"end": 459,
"text": "Reichart and Korhonen (2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "M = LP P L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
{
"text": "By taking the composite product LP P L rather than LP or P L, M is symmetric and no ad hoc status is conferred to one modality over the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},
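{
"text": "Putting the pieces together, a compact sketch of weighted gram matrix combination (an illustration; random matrices stand in for real representations):\nimport numpy as np\n\ndef weighted_gram(R):\n    # R: n x d matrix for one modality. S holds pairwise cosine similarities;\n    # phi is the quality score (mean similarity to all other concepts), so\n    # L_ij = S(r_i, r_j) * phi(r_i) * phi(r_j).\n    Rn = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-12)\n    S = Rn @ Rn.T\n    phi = S.mean(axis=0)\n    return S * np.outer(phi, phi)\n\nL = weighted_gram(np.random.rand(303, 4000))  # linguistic modality\nP = weighted_gram(np.random.rand(303, 2526))  # perceptual modality\nM = L @ P @ P @ L  # symmetric composite; row i is the fused vector for concept i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Combination",
"sec_num": "4.3"
},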
{
"text": "The experiments in this section were designed to address the three questions specified in Section 1: (1) Which information sources are important for acquiring word concepts of different types? (2) Can perceptual information be propagated from concrete to abstract concepts? (3) What is the best way to combine the information from the different sources?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Question (1) To build on insights from Section 3, we first examined how perceptual input interacts with the three classes of linguistic features defined there. Figure 2 shows the additive difference in correlation between (i) models in which perceptual and particular linguistic features are concatenated and (ii) models based on just the linguistic features.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "For concrete nouns and concrete verbs, (actual or inferred) perceptual information was beneficial in almost all cases. The largest improvement for both concept types was over grammatical features, achieved by including only the McRae data. This signals from this perceptual input and the grammatical features clearly reflect complementary aspects of the meaning of these concepts. We hypothesize that grammatical features (and POS features, which also perform strongly in this combination) confer information to concrete representations about the function and mutual interaction of concepts (the most 'relational' aspects of their meaning (Gentner, 1978) ) which complements the more intrinsic properties conferred by perceptual features.",
"cite_spans": [
{
"start": 639,
"end": 654,
"text": "(Gentner, 1978)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "For abstract concepts, it is perhaps unsurprising that the overall contribution of perceptual information was smaller. Indeed, combining linguistic and perceptual information actually harmed performance on abstract verbs in all cases. For these concepts, the inferred perceptual features seem to obscure or contradict some of the information conveyed in the linguistic representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "While the McRae data was clearly the most valuable source of perceptual input for concrete nouns and concrete verbs, for abstract nouns the combination of ESP-Game and McRae data was most informative. Both inspection of the data and cognitive theories (Rosch et al., 1976) suggest that entities identified in scenes, as in the ESP-Game dataset, generally correspond to a particular (basic) level of the conceptual hierarchy. The ESP-Game data reflects relations between these basic-level concepts in the world, whereas the McRae data typically describes their (intrinsic) properties. Together, these sources seem to combine information on the properties of, and relations between, concepts in a way that particularly facilitates the learning of abstract nouns.",
"cite_spans": [
{
"start": 252,
"end": 272,
"text": "(Rosch et al., 1976)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Question 2The performance of different methods of information propagation and combination is presented in Table 4 . The underlying linguistic representations in this case contained all three distributional feature classes. For more robust conclusions, in addition to the USF gold-standard we also measured the correlation between model output and the WordNet path similarity of words in our evaluation pairs. The path similarity between words w 1 and w 2 is the shortest distance between synsets of w 1 and w 2 in the WordNet taxonomy (Fellbaum, 1999) , which correlates significantly with human judgements of concept similarity (Pedersen et al., 2004) . 10 The correlations with the USF data (left hand column, Table 4 ) of our linguistic-only models (\u03c1 = 0.094 \u2212 0.233) and best performing multi-modal models (on both concrete nouns, \u03c1 = 0.397, and more abstract concepts, \u03c1 = 0.095 \u2212 0.301) were higher than the best comparable models described elsewhere (Feng and Lapata, 2010; Silberer and Lapata, 2012; Silberer et al., 2013) . 11 This confirms 10 Other widely-used evaluation gold-standards, such as WordSim 353 and the MEN dataset, do not contain a sufficient number of abstract concepts for the current purpose.",
"cite_spans": [
{
"start": 535,
"end": 551,
"text": "(Fellbaum, 1999)",
"ref_id": null
},
{
"start": 629,
"end": 652,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF34"
},
{
"start": 655,
"end": 657,
"text": "10",
"ref_id": null
},
{
"start": 958,
"end": 981,
"text": "(Feng and Lapata, 2010;",
"ref_id": "BIBREF9"
},
{
"start": 982,
"end": 1008,
"text": "Silberer and Lapata, 2012;",
"ref_id": "BIBREF41"
},
{
"start": 1009,
"end": 1031,
"text": "Silberer et al., 2013)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 712,
"end": 719,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
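{
"text": "WordNet path similarity is available in NLTK; a small sketch (an illustration; NLTK scores a path as 1/(1 + shortest path length), and nltk.download('wordnet') may be required first):\nfrom nltk.corpus import wordnet as wn\n\ndef path_sim(w1, w2):\n    # Maximum path similarity over all synset pairs of the two words;\n    # cross-POS comparisons can return None, so treat them as 0.\n    scores = [s1.path_similarity(s2) or 0.0\n              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]\n    return max(scores, default=0.0)\n\nprint(path_sim('war', 'guilt'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},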
{
"text": "11 Feng and Lapata (2010) report \u03c1 = .08 for language-only both that the underlying linguistic space is of high quality and that the ESP and McRae perceptual input is similarly or more informative than the input applied in previous work. Consistent with previous studies, adding perceptual input improved the quality of concrete noun representations as measured against both USF and path similarity gold-standards. Further, effective information propagation was indeed possible for both abstract nouns (USF evaluation) and concrete verbs (both evaluations). Interestingly, however, this was not the case for abstract verbs, for which no mix of propagation and combination methods produced an improvement on the linguistic-only model on either evaluation set. Indeed, as shown in Figure 2 , no type of perceptual input generated an improvement in abstract verb representations, regardless of the underlying class of linguistic features.",
"cite_spans": [
{
"start": 3,
"end": 25,
"text": "Feng and Lapata (2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 779,
"end": 787,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "This result underlines the link between concreteness, cognition and perception proposed in the psychological literature. More practically, it shows that concreteness can determine if propagation of perceptual input will be effective and, if so, the potential degree of improvement over text-only models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Turning to means of propagation, both the Johns and Jones method and ridge regression outperformed the linear regression baseline on the majority of concept types in our evaluation. Across the five sets and ten evaluations on which propagation and .12 for multi-modal models evaluated on USF over concrete and abstract concepts. Silberer and Lapata (2012) report \u03c1 = .14 (language-only) and .35 (multi-modal) over concrete nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "takes place (All Nouns, Abstract Nouns, All Verbs, Abstract Verbs and Concrete Verbs), ridge regression performed more robustly, achieving the best performance on six evaluation sets compared to two for the Johns and Jones method. 12 Question (3) Weighted gram matrix multiplication (\u03c1 = 0.397 on USF and \u03c1 = 0.523 on path similarity) outperformed both simple vector concatenation (\u03c1 = 0.258 and \u03c1 = 0.442) and CCA (\u03c1 = 0.001 and \u03c1 = 0.067) on concrete nouns. In the case of both abstract nouns and concrete verbs, however, the most effective means of combining quasiperceptual information with linguistic representations was concatenation (abstract nouns, \u03c1 = 0.248 and \u03c1 = 0.343, concrete verbs, \u03c1 = 0.301 and \u03c1 = 0.484). One evident drawback of multiplicative methods such as weighted gram matrix combination is the greater inter-dependence of the information sources; a weak signal from one modality can undermine the contribution of the other modality. We hypothesize that this underlines the comparatively poor performance of the method on verbs and abstract nouns, as the perceptual input for concrete nouns is clearly a richer information source than the propagated features of more abstract concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Motivated by the inherent difference between abstract and concrete concepts and the observation that abstract words occur more frequently in language, in this paper we have addressed the question of whether multi-modal models can enhance semantic representations of both concept types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In Section 3, we demonstrated that different information sources are important for acquiring concrete and abstract noun and verb concepts. Within the linguistic modality, while lexical features are informative for all concept types, syntactic features are only significantly informative for abstract concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In contrast, in Section 4 we observed that perceptual input is a more valuable information source for concrete concepts than abstract concepts. Nevertheless, perceptual input can be effectively propagated from concrete nouns to enhance representations of both abstract nouns and concrete verbs. In-deed, conceptual concreteness appears to determine the degree to which perceptual input is beneficial, since representations of abstract verbs, the most abstract concepts in our experiments, were actually degraded by this additional information. One important contribution of this work is therefore an insight into when multi-modal models should or should not aim to combine and/or propagate perceptual input to ensure that optimal representations are learned. In this respect, our conclusions align with the findings of Kiela and Hill (2014) , who take an explicitly visual approach to resolving the same question.",
"cite_spans": [
{
"start": 819,
"end": 840,
"text": "Kiela and Hill (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Various methods for propagating and combining perceptual information with linguistic input were presented. We proposed ridge regression for inferring perceptual representations for abstract concepts, which proved more robust than alternatives across the range of concept types. This approach is particularly simple to implement, since it is based on an established statistical prodedure. In addition, we introduced weighted gram matrix combination for combining representations from distinct modalities of differing sparsity and dimension. This method produces the highest quality composite representations for concrete nouns, where both modalities represent high quality information sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Overall, our results demonstrate that the potential practical benefits of multi-modal models extend beyond concrete domains into a significant proportion of the lexical concepts found in language. In future work we aim to extend our experiments to concept types such as adjectives and adverbs, and to develop models that further improve the propagation and combination of extra-linguistic input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Moreover, while we cannot draw definitive conclusions about human language processing, the effectiveness of the methods presented in this paper offer tentative support for the idea that even abstract concepts are grounded in the perceptual system (Barsalou et al., 2003) . As such, it may be that, even in the more abstract cases of human communication, we find ways to see what people mean precisely by finding ways to see what they mean.",
"cite_spans": [
{
"start": 247,
"end": 270,
"text": "(Barsalou et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Here concreteness is understood intuitively, as per the psychological literature(Rosen, 2001;Gallese and Lakoff, 2005).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This sample covers 15.2% of all noun tokens in the BNC. these distinctions are pertinent to text-only models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Free-association strength is measured by presenting subjects with a cue word and asking them to produce the first word they can think of that is associated with that cue word.4 We consider Spearman's \u03c1, a non-parametric ranking correlation, to be more appropriate than Pearson's r for free association data, which is naturally skewed and non-continuous.5 When combining multiple representations we normalize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ESP-Game and McRae representations are of approximately equal sparsity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For these comparisons, the optimal combination method is selected in each case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank The Royal Society and St John's College for their support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Integrating experiential and distributional data to learn semantic representations. Psychological Review",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vinson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "116",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Andrews, Gabriella Vigliocco, and David Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological Re- view, 116(3):463.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empiri- cal Methods in Natural Language Processing, pages 1183-1193. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Grounding conceptual knowledge in modality-specific systems",
"authors": [
{
"first": "Lawrence",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
},
{
"first": "W",
"middle": [
"Kyle"
],
"last": "Simmons",
"suffix": ""
},
{
"first": "Aron",
"middle": [
"K"
],
"last": "Barbey",
"suffix": ""
},
{
"first": "Christine",
"middle": [
"D"
],
"last": "Wilson",
"suffix": ""
}
],
"year": 2003,
"venue": "Trends in cognitive sciences",
"volume": "7",
"issue": "2",
"pages": "84--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W Barsalou, W Kyle Simmons, Aron K Bar- bey, and Christine D Wilson. 2003. Grounding conceptual knowledge in modality-specific systems. Trends in cognitive sciences, 7(2):84-91.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distinct brain systems for processing concrete and abstract concepts",
"authors": [
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"F"
],
"last": "Westbury",
"suffix": ""
},
{
"first": "Kristen",
"middle": [
"A"
],
"last": "McKiernan",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"T"
],
"last": "Possing",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Medler",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Cognitive Neuroscience",
"volume": "17",
"issue": "6",
"pages": "905--917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey R Binder, Chris F Westbury, Kristen A McKier- nan, Edward T Possing, and David A Medler. 2005. Distinct brain systems for processing concrete and ab- stract concepts. Journal of Cognitive Neuroscience, 17(6):905-917.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distributional semantics in technicolor",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics: Long Papers-Volume 1, pages 136-145. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam",
"middle": [
"Khanh"
],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Arti- ficial Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Imagenet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hier- archical image database. In Computer Vision and Pat- tern Recognition, 2009. CVPR 2009, pages 248-255. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A structured vector space model for word meaning in context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "897--906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk and Sebastian Pad\u00f3. 2008. A structured vec- tor space model for word meaning in context. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 897-906. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Visual information in semantic representation",
"authors": [
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yansong Feng and Mirella Lapata. 2010. Visual infor- mation in semantic representation. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 91-99. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The brain's concepts: The role of the sensory-motor system in conceptual knowledge",
"authors": [
{
"first": "Vittorio",
"middle": [],
"last": "Gallese",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Lakoff",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive neuropsychology",
"volume": "22",
"issue": "3-4",
"pages": "455--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vittorio Gallese and George Lakoff. 2005. The brain's concepts: The role of the sensory-motor system in con- ceptual knowledge. Cognitive neuropsychology, 22(3- 4):455-479.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Structure mapping in analogy and similarity",
"authors": [
{
"first": "Dedre",
"middle": [],
"last": "Gentner",
"suffix": ""
},
{
"first": "Arthur",
"middle": [
"B"
],
"last": "Markman",
"suffix": ""
}
],
"year": 1997,
"venue": "American psychologist",
"volume": "52",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dedre Gentner and Arthur B Markman. 1997. Structure mapping in analogy and similarity. American psychol- ogist, 52(1):45.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On relational meaning: The acquisition of verb meaning",
"authors": [
{
"first": "Dedre",
"middle": [],
"last": "Gentner",
"suffix": ""
}
],
"year": 1978,
"venue": "Child development",
"volume": "",
"issue": "",
"pages": "988--998",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dedre Gentner. 1978. On relational meaning: The ac- quisition of verb meaning. Child development, pages 988-998.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic roles. Computational linguistics, 28(3):245-288.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A dataset of syntactic-ngrams over time from a very large corpus of english books",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Orwant",
"suffix": ""
}
],
"year": 2013,
"venue": "Second Joint Conference on Lexical and Computational Semantics, Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "241--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In Second Joint Conference on Lexical and Computational Semantics, Association for Com- putational Linguistics, pages 241-247. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Topics in semantic representation",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological review",
"volume": "114",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L Griffiths, Mark Steyvers, and Joshua B Tenen- baum. 2007. Topics in semantic representation. Psy- chological review, 114(2):211.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Canonical correlation analysis: An overview with application to learning methods",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Hardoon",
"suffix": ""
},
{
"first": "Sandor",
"middle": [],
"last": "Szedmak",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
}
],
"year": 2004,
"venue": "Neural Computation",
"volume": "16",
"issue": "12",
"pages": "2639--2664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R Hardoon, Sandor Szedmak, and John Shawe- Taylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neu- ral Computation, 16(12):2639-2664.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146-162.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A quantitative empirical analysis of the abstract/concrete distinction",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bentz",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Anna Korhonen, and Christian Bentz. 2013. A quantitative empirical analysis of the ab- stract/concrete distinction. Cognitive Science.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representa- tions via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics: Long Papers- Volume 1, pages 873-882. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Experiments on the automatic induction of german semantic verb classes",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "2",
"pages": "159--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte Im Walde. 2006. Experiments on the automatic induction of german semantic verb classes. Computational Linguistics, 32(2):159-194.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Perceptual inference through global lexical similarity",
"authors": [
{
"first": "Brendan",
"middle": [
"T"
],
"last": "Johns",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2012,
"venue": "Topics in Cognitive Science",
"volume": "4",
"issue": "1",
"pages": "103--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan T Johns and Michael N Jones. 2012. Perceptual inference through global lexical similarity. Topics in Cognitive Science, 4(1):103-120.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Acquiring human-like feature-based conceptual representations from corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Devereux",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics",
"volume": "",
"issue": "",
"pages": "61--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Kelly, Barry Devereux, and Anna Korhonen. 2010. Acquiring human-like feature-based conceptual repre- sentations from corpora. In Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolin- guistics, pages 61-69. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving multimodal representations using image dispersion: Why less is sometimes more",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL 2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and Felix Hill. 2014. Improving multi- modal representations using image dispersion: Why less is sometimes more. In Proceedings of ACL 2014, Baltimore. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A large-scale classification of english verbs. Language Resources and Evaluation",
"authors": [
{
"first": "Karin",
"middle": [],
"last": "Kipper",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Neville",
"middle": [],
"last": "Ryant",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "42",
"issue": "",
"pages": "21--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of english verbs. Language Resources and Evaluation, 42(1):21-40.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological review",
"volume": "104",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K Landauer and Susan T Dumais. 1997. A so- lution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, 104(2):211.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Claws4: the tagging of the British National Corpus",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Garside",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Leech, Roger Garside, and Michael Bryant. 1994. Claws4: the tagging of the British National Cor- pus. In Proceedings of the 15th conference on Compu- tational linguistics-Volume 1, pages 622-628. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Going beyond text: A hybrid image-text approach for measuring word relatedness",
"authors": [
{
"first": "Chee Wee",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "1403--1407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chee Wee Leong and Rada Mihalcea. 2011. Going be- yond text: A hybrid image-text approach for measur- ing word relatedness. In IJCNLP, pages 1403-1407.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Part-of-speech tagging from 97% to 100%: is it time for some linguistics?",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "171--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning. 2011. Part-of-speech tagging from 97% to 100%: is it time for some linguistics? In Computational Linguistics and Intelligent Text Pro- cessing, pages 171-189. Springer.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Similar and different: The differentiation of basiclevel categories",
"authors": [
{
"first": "Arthur",
"middle": [
"B"
],
"last": "Markman",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"J"
],
"last": "Wisniewski",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "23",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur B Markman and Edward J Wisniewski. 1997. Similar and different: The differentiation of basic- level categories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(1).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Semantic feature production norms for a large set of living and non-living things",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "McRae",
"suffix": ""
},
{
"first": "George",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "McNorgan",
"suffix": ""
}
],
"year": 2005,
"venue": "Behavior Research Methods",
"volume": "37",
"issue": "4",
"pages": "547--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken McRae, George S Cree, Mark S Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and non-living things. Behavior Research Methods, 37(4):547-559.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Classical and modern regression with applications",
"authors": [
{
"first": "Raymond",
"middle": [
"H"
],
"last": "Myers",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond H Myers. 1990. Classical and modern regres- sion with applications, volume 2. Duxbury Press Bel- mont, CA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The University of South Florida free association, rhyme, and word fragment norms",
"authors": [
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
},
{
"first": "Cathy",
"middle": [
"L"
],
"last": "McEvoy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"A"
],
"last": "Schreiber",
"suffix": ""
}
],
"year": 2004,
"venue": "Behavior Research Methods, Instruments, & Computers",
"volume": "36",
"issue": "3",
"pages": "402--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas L Nelson, Cathy L McEvoy, and Thomas A Schreiber. 2004. The University of South Florida free association, rhyme, and word fragment norms. Be- havior Research Methods, Instruments, & Computers, 36(3):402-407.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Dual coding theory: Retrospect and current status",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Paivio",
"suffix": ""
}
],
"year": 1991,
"venue": "Canadian Journal of Psychology/Revue Canadienne de Psychologie",
"volume": "45",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Paivio. 1991. Dual coding theory: Retrospect and current status. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 45(3):255.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Wordnet:: Similarity: measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Miche- lizzi. 2004. Wordnet:: Similarity: measuring the relat- edness of concepts. In Demonstration Papers at HLT- NAACL 2004, pages 38-41. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improved lexical acquisition through dpp-based verb clustering",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart and Anna Korhonen. 2013. Improved lexical acquisition through dpp-based verb clustering. In Proceedings of the Conference of the Association for Computational Linguistics (ACL). Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A multimodal LDA model integrating textual, cognitive and visual modalities",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1146--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal LDA model integrating textual, cognitive and visual modalities. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1146-1157, Seattle, Wash- ington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Basic objects in natural categories",
"authors": [
{
"first": "Eleanor",
"middle": [],
"last": "Rosch",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"B"
],
"last": "Mervis",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Penny",
"middle": [],
"last": "Boyes-Braem",
"suffix": ""
}
],
"year": 1976,
"venue": "Cognitive Psychology",
"volume": "8",
"issue": "3",
"pages": "382--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleanor Rosch, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes-Braem. 1976. Basic objects in natural categories. Cognitive Psychol- ogy, 8(3):382-439.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Nominalism, naturalism, epistemic relativism",
"authors": [
{
"first": "Gideon",
"middle": [],
"last": "Rosen",
"suffix": ""
}
],
"year": 2001,
"venue": "No\u00fbs",
"volume": "35",
"issue": "s15",
"pages": "69--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon Rosen. 2001. Nominalism, naturalism, epis- temic relativism. No\u00fbs, 35(s15):69-91.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in highdimensional vector spaces",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Sahlgren. 2006. The Word-Space Model: Us- ing distributional analysis to represent syntagmatic and paradigmatic relations between words in high- dimensional vector spaces. Ph.D. thesis, Stockholm.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Differential context effects in the comprehension of abstract and concrete verbal materials",
"authors": [
{
"first": "Paula",
"middle": [
"J"
],
"last": "Schwanenflugel",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"J"
],
"last": "Shoben",
"suffix": ""
}
],
"year": 1983,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula J Schwanenflugel and Edward J Shoben. 1983. Differential context effects in the comprehension of abstract and concrete verbal materials. Journal of Ex- perimental Psychology: Learning, Memory, and Cog- nition, 9(1):82.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Grounded models of semantic representation",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1423--1433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1423-1433. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Models of semantic representation with visual attributes",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Ferrari",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of semantic representation with visual attributes. In Proceedings of the 51th Annual Meet- ing of the Association for Computational Linguistics, Sofia, Bulgaria.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "1",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney, Patrick Pantel, et al. 2010. From fre- quency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37(1):141- 188.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Multiway tensor factorization for unsupervised lexical acquisition",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Van De Cruys",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING 2012: Technical Papers",
"volume": "",
"issue": "",
"pages": "2703--2720",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Van de Cruys, Laura Rimell, Thierry Poibeau, Anna Korhonen, et al. 2012. Multiway tensor factorization for unsupervised lexical acquisition. COLING 2012: Technical Papers, pages 2703-2720.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Labeling images with a computer game",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Von Ahn",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dabbish",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the SIGCHI conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "319--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Von Ahn and Laura Dabbish. 2004. Labeling im- ages with a computer game. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 319-326. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Additive change in Spearman's \u03c1 when representations acquired from particular classes of linguistic features are combined with (actual or inferred) perceptual representations. Perceptual representations are derived from either the McRae Dataset, the ESP-Game Dataset or both (concatenated). For concepts other than concrete nouns, perceptual information is propagated using the Johns and Jones (JJ) method, and combined with simple concatenation.",
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "). For example, deverbal Context Example indirect object gave it to the man Noun direct object gave the pie to him Concepts subject the man grinned in PP was in his mouth adject. modifier the portly man infinitive clause to eat is human transitive he bit the steak Verb intransitive he salivated Concepts distransitive put jam on the toast phrasal verb he gobbled it up infinitival comp. he wants to snooze clausal comp.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Grammatical features for noun/verb concepts",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Evaluation sets used throughout. All nouns and all verbs are the union of abstract and concrete subsets and mixed abstract-concrete or concrete-abstract pairs.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "All representations in this section are combined by concatenation, since the present focus is not on combination methods. 5Feature Type All Nouns Conc. Nouns Abs. Nouns All Verbs Conc. Verbs Abs.",
"num": null,
"content": "<table><tr><td>(1) Lexical (2) POS-tag (3) Grammatical (1)+(2)+(3)</td><td>0.168* 0.059* 0.078* 0.182 *</td><td>0.199* 0.012 0.027 0.181*</td><td>0.248* 0.119* 0.121* 0.247*</td><td>0.173* 0.052 0.009 0.172*</td><td>0.268* -0.074 -0.017 0.267*</td><td>Verbs 0.109 0.123 0.114 0.108</td></tr></table>",
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "Performance of different methods of information propagation (JJ = Johns and Jones, RR = ridge regression, LR = linear regression) and combination (Concat = concatenation, CCA = canonical correlation analysis, WGM = weighted gram matrix multiplication) across evaluation sets. Values are Spearman's \u03c1 correlation with USF scores (left hand side of columns) and WordNet path similarity (right hand side). For the LR baseline we only report the highest score across the three combination types. \u2020No propagation takes place for concrete nouns; this column reflects the performance of combination methods only.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}