|
{ |
|
"paper_id": "W13-0205", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:55:02.513425Z" |
|
}, |
|
"title": "Semantic Similarity Computation for Abstract and Concrete Nouns Using Network-based Distributional Semantic Models", |
|
"authors": [ |
|
{ |
|
"first": "Elias", |
|
"middle": [], |
|
"last": "Iosif", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Technical University of Crete", |
|
"location": { |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alexandros", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Technical University of Crete", |
|
"location": { |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Giannoudaki", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Technical University of Crete", |
|
"location": { |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kalliopi", |
|
"middle": [], |
|
"last": "Zervanou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Centre for Language Studies", |
|
"institution": "Radboud University", |
|
"location": { |
|
"settlement": "Nijmegen", |
|
"country": "The Netherlands" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Motivated by cognitive lexical models, network-based distributional semantic models (DSMs) were proposed in [Iosif and Potamianos (2013)] and were shown to achieve state-of-the-art performance on semantic similarity tasks. Based on evidence for cognitive organization of concepts based on degree of concreteness, we investigate the performance and organization of network DSMs for abstract vs. concrete nouns. Results show a \"concreteness effect\" for semantic similarity estimation. Network DSMs that implement the maximum sense similarity assumption perform best for concrete nouns, while attributional network DSMs perform best for abstract nouns. The performance of metrics is evaluated against human similarity ratings on an English and a Greek corpus.", |
|
"pdf_parse": { |
|
"paper_id": "W13-0205", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Motivated by cognitive lexical models, network-based distributional semantic models (DSMs) were proposed in [Iosif and Potamianos (2013)] and were shown to achieve state-of-the-art performance on semantic similarity tasks. Based on evidence for cognitive organization of concepts based on degree of concreteness, we investigate the performance and organization of network DSMs for abstract vs. concrete nouns. Results show a \"concreteness effect\" for semantic similarity estimation. Network DSMs that implement the maximum sense similarity assumption perform best for concrete nouns, while attributional network DSMs perform best for abstract nouns. The performance of metrics is evaluated against human similarity ratings on an English and a Greek corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Semantic similarity is the building block for numerous applications of natural language processing (NLP), such as grammar induction [Meng and Siu (2002) ] and affective text categorization [Malandrakis et al. (2011) ]. Distributional semantic models (DSMs) [ Baroni and Lenci (2010) ] are based on the distributional hypothesis of meaning [Harris (1954) ] assuming that semantic similarity between words is a function of the overlap of their linguistic contexts. DSMs are typically constructed from co-occurrence statistics of word tuples that are extracted from a text corpus or from data harvested from the web. A wide range of contextual features are also used by DSM exploiting lexical, syntactic, semantic, and pragmatic information. DSMs have been successfully applied to the problem of semantic similarity computation. Recently [Iosif and Potamianos (2013) ] proposed network-based DSMs motivated by the organization of words, attributes and concepts in human cognition. The proposed semantic networks can operate under either the attributional similarity or the maximum sense similarity assumptions of lexical semantics. According to attributional similarity [Turney (2006) ], semantic similarity between words is based on the commonality of their sense attributes. Following the maximum sense similarity hypothesis, the semantic similarity of two words can be estimated as the similarity of their two closest senses [Resnik (1995) ]. Network-based DSMs have been shown to achieve state-of-the-art performance for semantic similarity tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 152, |
|
"text": "[Meng and Siu (2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 215, |
|
"text": "[Malandrakis et al. (2011)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 258, |
|
"text": "[", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 282, |
|
"text": "Baroni and Lenci (2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 353, |
|
"text": "[Harris (1954)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 835, |
|
"end": 863, |
|
"text": "[Iosif and Potamianos (2013)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1167, |
|
"end": 1181, |
|
"text": "[Turney (2006)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1425, |
|
"end": 1439, |
|
"text": "[Resnik (1995)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Typically, the degree of semantic concreteness of a word is not taken into account in distributional models. However, evidence from neuro-and psycho-linguistics demonstrates significant differences in the cognitive organization of abstract and concrete nouns. For example, Kiehl et al. (1999) and Noppeney and Price (2004) show that concrete concepts are processed more efficiently than abstract ones (aka \"the concreteness effect\"), i.e., participants in lexical decision tasks recall concrete stimuli faster than abstract. According to dual code theory [Paivio (1971) ], the stored semantic information for concrete concepts is both verbal and visual, while for abstract concepts stored information is only verbal. Neuropsychological studies show that people with acquired dyslexia (deep dyslexia) face problems in reading abstract nouns aloud [Coltheart (2000) ], verifying that concrete and abstract concepts are stored in different regions of the human brain anatomy [Kiehl et al. (1999) ]. The reversal concreteness effect is also reported for people with semantic dementia with a striking impairment in semantic memory [Papagno et al. (2009) ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 292, |
|
"text": "Kiehl et al. (1999)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 322, |
|
"text": "Noppeney and Price (2004)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 569, |
|
"text": "[Paivio (1971)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 863, |
|
"text": "[Coltheart (2000)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 972, |
|
"end": 992, |
|
"text": "[Kiehl et al. (1999)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1126, |
|
"end": 1148, |
|
"text": "[Papagno et al. (2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Motivated by this evidence, we study the semantic network organization and performance of DSMs for estimating the semantic similarity of abstract vs. concrete nouns. Specifically, we investigate the validity of the maximum sense and attributional similarity assumptions in network-based DSMs for abstract and concrete nouns (for both English and Greek).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Semantic similarity metrics can be divided into two broad categories: (i) metrics that rely on knowledge resources, and (ii) corpus-based metrics. A representative example of the first category are metrics that exploit the WordNet ontology [Miller (1990) ]. Corpus-based metrics are formalized as DSM [Baroni and Lenci (2010) ] and are based on the distributional hypothesis of meaning [Harris (1954) ]. DSM can be categorized into unstructured (unsupervised) that employ a bag-of-words model [Agirre et al. (2009) ] and structured that rely on syntactic relationships between words [Pado and Lapata (2007) ]. Recently, motivated by the graph theory, several aspects of the human languages have been modeled using network-based methods. In [Mihalcea and Radev (2011) ], an overview of network-based approaches is presented for a number of NLP problems. Different types of language units can be regarded as vertices of such networks, spanning from single words to sentences. Typically, network edges represent the relations of such units capturing phenomena such as co-occurrence, syntactic dependencies, and lexical similarity. An example of a large co-occurrence network is presented in [Widdows and Dorow (2002) ] for the automatic creation of semantic classes. In [Iosif and Potamianos (2013) ], a new paradigm for implementing DSMs is proposed: a two tier system in which corpus statistics are parsimoniously encoded in a network, while the task of similarity computation is shifted (from corpus-based techniques) to operations over network neighborhoods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 254, |
|
"text": "[Miller (1990)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 325, |
|
"text": "[Baroni and Lenci (2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 400, |
|
"text": "[Harris (1954)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 514, |
|
"text": "[Agirre et al. (2009)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 606, |
|
"text": "[Pado and Lapata (2007)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 766, |
|
"text": "[Mihalcea and Radev (2011)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1188, |
|
"end": 1213, |
|
"text": "[Widdows and Dorow (2002)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1267, |
|
"end": 1295, |
|
"text": "[Iosif and Potamianos (2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Co-occurrence-based: The underlying assumption of co-occurrence-based metrics is that the co-existence of words in a specified contextual environment indicates semantic relatedness. In this work, we employ a widelyused co-occurrence-based metric, namely, Dice coefficient [Iosif and Potamianos (2010) ]. The Dice coefficient between words w i and w j is defined as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 300, |
|
"text": "[Iosif and Potamianos (2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus-Based Baseline Similarity Metrics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "D(w i , w j ) = 2f (wi,wj ) f (wi)+f (wj )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus-Based Baseline Similarity Metrics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": ", where f (.) denotes the frequency of word occurrence/co-occurrence. Here, the word co-occurrence is considered at the sentential level, while D can be also defined with respect to broader contextual environments, e.g., at the paragraph level [V\u00e9ronis (2004) ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 259, |
|
"text": "[V\u00e9ronis (2004)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus-Based Baseline Similarity Metrics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The fundamental assumption behind context-based metrics is that similarity of context implies similarity of meaning [Harris (1954) ]. A contextual window of size 2H + 1 words is centered on the word of interest w i and lexical features are extracted. For every instance of w i in the corpus the H words left and right of w i formulate a feature vector v i . For a given value of H the context-based semantic similarity between two words, w i and w j , is computed as the cosine of their feature vectors:", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 130, |
|
"text": "[Harris (1954)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-based:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Q H (w i , w j ) = vi.vj ||vi|| ||vj|| .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-based:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The elements of feature vectors can be weighted according various schemes [Iosif and Potamianos (2010) ], while, here we use a binary scheme.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 102, |
|
"text": "[Iosif and Potamianos (2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-based:", |
|
"sec_num": null |
|
}, |
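The two baseline metrics above (sentence-level Dice and the Q_H context cosine with binary features) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the toy tokenized corpus and helper names are assumptions:

```python
import math

def dice(corpus, wi, wj):
    # Dice coefficient: D = 2 f(wi, wj) / (f(wi) + f(wj)),
    # with (co-)occurrence counted at the sentence level.
    f_i = sum(1 for sent in corpus if wi in sent)
    f_j = sum(1 for sent in corpus if wj in sent)
    f_ij = sum(1 for sent in corpus if wi in sent and wj in sent)
    return 2.0 * f_ij / (f_i + f_j) if f_i + f_j else 0.0

def context_features(corpus, w, H=1):
    # Binary features: all tokens seen within H positions of w.
    feats = set()
    for sent in corpus:
        for k, tok in enumerate(sent):
            if tok == w:
                feats.update(sent[max(0, k - H):k] + sent[k + 1:k + 1 + H])
    feats.discard(w)
    return feats

def q_h(corpus, wi, wj, H=1):
    # Cosine of binary context vectors: |Vi & Vj| / sqrt(|Vi| * |Vj|).
    vi = context_features(corpus, wi, H)
    vj = context_features(corpus, wj, H)
    if not vi or not vj:
        return 0.0
    return len(vi & vj) / math.sqrt(len(vi) * len(vj))

corpus = [["the", "car", "moves"], ["the", "bus", "moves"], ["car", "and", "bus"]]
print(dice(corpus, "car", "bus"))  # 2*1/(2+2) = 0.5
print(q_h(corpus, "car", "bus"))
```

Note that with a binary weighting scheme the cosine reduces to normalized set overlap of the two context-feature sets.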
|
{ |
|
"text": "Here, we summarize the main ideas of network-based DSMs as proposed in [Iosif and Potamianos (2013) ]. The network is defined as an undirected (under a symmetric similarity metric) graph F = (V, E) whose the set of vertices V are all words in our lexicon L, and the set of edges E contains the links between the vertices. The links (edges) between words in the network are determined and weighted according to the pairwise semantic similarity of the vertices. The network is a parsimonious representation of corpus statistics as they pertain to the estimation of semantic similarities between word-pairs in the lexicon. In addition, the network can be used to discover relations that are not directly observable in the data; such relations emerge via the systematic covariation of similarity metrics. For each word (reference word) that is included in the lexicon, w i \u2208 L, we consider a sub-graph of F ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 99, |
|
"text": "[Iosif and Potamianos (2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network-based Distributional Semantic Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F i = (N i , E i ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network-based Distributional Semantic Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where the set of vertices N i includes in total n members of L, which are linked with w i via edges E i . The F i sub-graph is referred to as the semantic neighborhood of w i . The members of N i (neighbors of w i ) are selected according to a semantic similarity metric (co-occurrence-based D or context-based Q H defined in Section 3) with respect to w i , i.e., the n most similar words to w i are selected. Next, we present two semantic similarity metrics that utilize the notion of semantic neighborhood [Iosif and Potamianos (2013) ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 509, |
|
"end": 537, |
|
"text": "[Iosif and Potamianos (2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network-based Distributional Semantic Models", |
|
"sec_num": "4" |
|
}, |
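The construction of a semantic neighborhood N_i (the n lexicon words nearest to a reference word under a base metric) can be sketched as follows; `sim` stands in for either D or Q_H, and the toy similarity table is an assumption:

```python
def neighborhood(word, lexicon, sim, n):
    # N_i: the n words of the lexicon most similar to `word` under `sim`.
    candidates = [w for w in lexicon if w != word]
    return sorted(candidates, key=lambda w: sim(word, w), reverse=True)[:n]

# Hypothetical pairwise similarities standing in for D or Q_H scores.
scores = {("car", "bus"): 0.9, ("car", "tree"): 0.2, ("car", "idea"): 0.1}
sim = lambda a, b: scores.get((a, b), scores.get((b, a), 0.0))
print(neighborhood("car", ["car", "bus", "tree", "idea"], sim, n=2))  # ['bus', 'tree']
```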
|
{ |
|
"text": "This metric is based on the hypothesis that the similarity of two words, w i and w j , can be estimated by the maximum similarity of their respective sets of neighbors, defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Similarity of Neighborhoods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "M n (w i , w j ) = max{\u03b1 ij , \u03b1 ji }, where \u03b1 ij = max x \u2208 Nj S(w i , x), \u03b1 ji = max y \u2208 Ni S(w j , y).", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Maximum Similarity of Neighborhoods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03b1 ij (or \u03b1 ji ) denotes the maximum similarity between w i (or w j ) and the neighbors of w j (or w i ) that is computed according to a similarity metric S: in this work either D or Q H . N i and N j are the set of neighbors for w i and w j , respectively. The definition of M n is motivated by the maximum sense similarity assumption. Here the underlying assumption is that the most salient information in the neighbors of a word are semantic features denoting senses of this word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Similarity of Neighborhoods", |
|
"sec_num": "4.1" |
|
}, |
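Equation (1) can be sketched directly; `neighbors` maps each word to its precomputed neighborhood and `sim` is the base metric S. The toy scores and neighborhoods below are assumptions for illustration:

```python
def max_sense_similarity(wi, wj, neighbors, sim):
    # M_n(wi, wj) = max{a_ij, a_ji}, Eq. (1):
    # a_ij = max over x in N_j of S(wi, x); a_ji = max over y in N_i of S(wj, y).
    a_ij = max((sim(wi, x) for x in neighbors[wj]), default=0.0)
    a_ji = max((sim(wj, y) for y in neighbors[wi]), default=0.0)
    return max(a_ij, a_ji)

# Hypothetical base similarities S and neighborhoods.
scores = {("car", "bus"): 0.9, ("car", "vehicle"): 0.8,
          ("truck", "bus"): 0.7, ("truck", "vehicle"): 0.85}
sim = lambda a, b: scores.get((a, b), scores.get((b, a), 0.0))
neighbors = {"car": ["bus", "vehicle"], "truck": ["bus", "vehicle"]}
print(max_sense_similarity("car", "truck", neighbors, sim))  # 0.9
```

Taking the outer max over a_ij and a_ji keeps the metric symmetric even when the two neighborhoods differ.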
|
{ |
|
"text": "The similarity between w i and w j is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "R n (w i , w j ) = max{\u03b2 ij , \u03b2 ji }, where \u03b2 ij = \u03c1(C Ni i , C Ni j ), \u03b2 ji = \u03c1(C Nj i , C Nj j )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "C Ni i = (S(w i , x 1 ), S(w i , x 2 ), . . . , S(w i , x n ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ", and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "N i = {x 1 , x 2 , . . . , x n }. Note that C Ni j , C Nj i , and C Nj j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "are defined similarly as C Ni i . The \u03c1 function stands for the Pearson's correlation coefficient, N i is the set of neighbors of word w i , and S is a similarity metric (D or Q H ). Here, we aim to exploit the entire semantic neighborhoods for the computation of semantic similarity, as opposed to M n where a single neighbor is utilized. The motivation behind this metric is attributional similarity, i.e., we assume that semantic neighborhoods encode attributes (or features) of a word. Neighborhood correlation similarity in essence compares the distribution of semantic similarities of the two words on their semantic neighborhoods. The \u03c1 function incorporates the covariation of the similarities of w i and w j with respect to the members of their semantic neighborhoods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attributional Neighborhood Similarity", |
|
"sec_num": "4.2" |
|
}, |
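A sketch of Eq. (2): each \u03b2 correlates the two words' similarity profiles over one word's neighborhood. Pearson's \u03c1 is computed by hand to keep the sketch dependency-free; the toy data are assumptions:

```python
import math

def pearson(xs, ys):
    # Pearson's correlation coefficient between two equal-length profiles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def attributional_similarity(wi, wj, neighbors, sim):
    # R_n(wi, wj) = max{b_ij, b_ji}, Eq. (2):
    # b_ij correlates the similarity profiles of wi and wj over N_i,
    # b_ji correlates them over N_j.
    b_ij = pearson([sim(wi, x) for x in neighbors[wi]],
                   [sim(wj, x) for x in neighbors[wi]])
    b_ji = pearson([sim(wi, x) for x in neighbors[wj]],
                   [sim(wj, x) for x in neighbors[wj]])
    return max(b_ij, b_ji)
```

If the two words rank a shared neighborhood in the same relative order, \u03c1 approaches 1 even when the absolute similarity values differ, which is what makes the metric attributional rather than sense-based.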
|
{ |
|
"text": "Lexica and corpora creation: For English we used a lexicon consisting of 8, 752 English nouns taken from the SemCor3 1 corpus. In addition, this lexicon was translated into Greek using Google Translate 2 , while it was further augmented resulting into a set of 9, 324 entries. For each noun an individual query was formulated and the 1, 000 top ranked results (document snippets) were retrieved using the Yahoo! search engine 3 . A corpus was created for each language by aggregating the snippets for all nouns of the lexicon. Network creation: For each language the semantic neighborhoods of lexicon noun pairs were computed following the procedure described in Section 4 using either co-occurrence D or context-based Q H=1 metrics 4 . Network-based similarity computation: For each language, the semantic similarity between noun pairs was computed applying either the max-sense M n or the attributional R n network-based metric. The underlying semantic similarity metric (the S metric in (1) and (2)) can be either D or Q H . Given that for both neighborhood creation and network-based semantic similarity estimation we have the option of D or Q H , a total of four combinations emerge for this two-phase process: (i) D/D, i.e., use co-occurence metric D for both neighborhood selection and network-based similarity estimation, (ii) D/Q H , (iii) Q H /D, and (iv) Q H /Q H .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Procedure", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The performance of network-based similarity metrics was evaluated for the task of semantic similarity between nouns. The Pearson's correlation coefficient was used as evaluation metric to compare estimated similarities against the ground truth (human ratings). The following datasets were used: English (WS353): Subset of WS353 dataset [Finkelstein et al. (2002) ] consisting of 272 noun pairs (that are also included in the SemCor3 corpus).", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 362, |
|
"text": "[Finkelstein et al. (2002)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Datasets", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In total, 82 native speakers of modern Greek were asked to score the similarity of the noun pairs in a range from 0 (dissimilar) to 4 (similar). The resulting dataset consists of 99 nouns pairs (a subset of pairs translated from WS353) and is freely available 5 . Abstract vs. Concrete: From each of the above datasets two subsets of pairs were selected, where both nouns in the pair are either abstract or concrete, i.e., pairs consisting of one abstract and one concrete nouns were ruled out. More specifically, 74 abstract and 74 concrete noun pairs were selected from WS353, for a total of 148 pairs. Regarding GIP, 18 abstract and 18 concrete noun pairs were selected, for a total of 36 pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Greek (GIP):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The performance of the two proposed network-based metrics, M n and R n , for neighborhood size of 100, is presented in Table 1 Four combinations of the co-occurrence-based metric D and the context-based metric Q H were used for the definition of semantic neighborhoods and the computation of similarity scores. Baseline performance is also shown. similarity M n=100 metric, the use of the co-occurrence metric D for neighbor selection yields the best correlation performance for both languages. For the attributional similarity R n=100 metric, best performance is achieved when using the context-based metric D for the selection of neighbors in the network. As explained in [Iosif and Potamianos (2013) ], the neighborhoods selected by the D metrics tend to include words that denote word senses (yielding best results for similarity), while neighborhoods computed using the Q H metric are semantically broader including word attributes (yielding best results for attributional similarity). The network-based DSM results are also significantly higher compared to the baseline for both languages. The best results achieved by D/Q H for the M n=100 , and Q H /D for the R n=100 are consistent with the results reported in [Iosif and Potamianos (2013) ] for English. The best performing metric for English is M n=100 (max-sense) while for Greek R n=100 (attributional). Overall, utilizing network neighborhoods for estimating semantic similarity can achieve good performance 6 , and the type of metric (feature) used to select the neighborhood is a key performance factor. Next, we investigate the performance of the network metrics with respect to the neighborhood size n for the abstract and concrete noun pairs included in English and Greek datasets. The performance of the max-sense M n (D/Q H ) metric is shown in Fig. 1(a) ,(c) for the (subsets of) WS353 and GIP, respectively. The performance over the whole (abstract and concrete) dataset is shown with a solid line. 
Similarly the results for the attributional R n (Q H /D) metric are shown in Fig. 1(b) ,(d). The main conclusions for these experiments (for both languages) are: 1) The correlation performance for concrete noun pairs is higher than for abstract noun pairs. 2) For concrete nouns the max-sense M n metric achieves best performance, while for abstract nouns the attributional R n metric is the top performer. 3) For the R n network metric, very good performance is achieved for abstract noun pairs for a small neighborhood size n (around 10), while for concrete nouns larger neighborhoods are needed (up to 40 and 30 neighbors, for English and Greek, respectively). In order to further investigate the network organization for abstract vs. concrete nouns, we manually inspected the top twenty neighbors of 30 randomly selected nouns (15 abstract and 15 concrete) and classified each neighbor as either abstract or concrete. The distributions of abstract/concrete neighbors are shown in Table 2 as a function of neighbor selection metric (D vs. Q H ) and reference noun category. It is clear, that the neighborhoods of abstract nouns contain mostly abstract concepts, especially for the Q H neighbor selection metric (similarly the neighborhoods of concrete nouns contain mainly concrete concepts). The neighbors of concrete nouns mainly belong to the same semantic class (e.g., \"vehicle\", \"bus\" for \"car\") often corresponding to relevant senses. The neighbors of the abstract nouns have an attributive function, reflecting relative attributes and/or aspects of the referent nouns (e.g., \"religion\", \"justice\" for \"morality\").", |
|
"cite_spans": [ |
|
{ |
|
"start": 674, |
|
"end": 702, |
|
"text": "[Iosif and Potamianos (2013)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1220, |
|
"end": 1248, |
|
"text": "[Iosif and Potamianos (2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 126, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1816, |
|
"end": 1825, |
|
"text": "Fig. 1(a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2049, |
|
"end": 2058, |
|
"text": "Fig. 1(b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2956, |
|
"end": 2963, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We investigated the performance of network-based DSMs for semantic similarity estimation for abstract and concrete noun pairs of English and Greek. We observed a \"concreteness effect\", i.e., performance for concrete nouns was better than for abstract noun pairs. The assumption of maximum sense similarity as encoded by the M n metric consistently yielded higher performance for the case of concrete nouns, while the semantic similarity of abstract nouns was better estimated via the attributional similarity assumption as implemented by the R n metric. The results are consistent with the initial hypothesis that differences in cognitive organization may warrant different network organization in DSMs. In addition, abstract concepts were best modeled using an attributional network DSM with small semantic neighborhoods. This is a first step towards the better understanding of the network organization of DSMs for different categories of concepts. In terms of computation algorithms of semantic similarity, it might prove advantageous to define a metric that combines the maximum sense and attributional assumptions based on the semantic concreteness of the words under investigation. Further research on more data and languages is needed to verify the universality of the findings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "http://www.cse.unt.edu/\u02dcrada/downloads.html 2 http://translate.google.com/ 3 http://www.yahoo.com// 4 We have also experimented with other values of context window H not reported here for the sake of space. However, the highest performance was achieved for H = 1.5 http://www.telecom.tuc.gr/\u02dciosife/downloads.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The best correlation score for the WS353 dataset does not exceed the top performance (0.68) of unsupervised DSMs[Agirre et al. (2006)]. However, we have found that the proposed network metrics obtain state-of-the-art results for other standard datasets, e.g., 0.87 for[Rubenstein and Goodenough (1965)] and 0.91 for[Miller and Charles (1998)].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Elias Iosif, Alexandros Potamianos, and Maria Giannoudaki were partially funded by the PortDial project (\"Language Resources for Portable Multilingual Spoken Dialogue Systems\") supported by the EU Seventh Framework Programme (FP7), grant number 296170.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "9" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Alfonseca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kravalova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pasca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Soroa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agirre, E., E. Alfonseca, K. Hall, J. Kravalova, M. Pasca, and A. Soroa (2009). A study on similarity and re- latedness using distributional and wordnet-based approaches. In Proc. of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 19- 27.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Two graph-based algorithms for state-of-the-art WSD", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "De Lacalle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Soroa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "585--593", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agirre, E., D. Mart\u00ednez, O. L. de Lacalle, and A. Soroa (2006). Two graph-based algorithms for state-of-the-art WSD. In Proc. of Conference on Empirical Methods in Natural Language Processing, pp. 585-593.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Distributional memory: A general framework for corpus-based semantics", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lenci", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computational Linguistics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "673--721", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baroni, M. and A. Lenci (2010). Distributional memory: A general framework for corpus-based semantics. Computational Linguistics 36(4), 673-721.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Deep dyslexia and right-hemisphere reading", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Coltheart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Brain and Language", |
|
"volume": "71", |
|
"issue": "", |
|
"pages": "299--309", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Coltheart, M. (2000). Deep dyslexia and right-hemisphere reading. Brain and Language 71, 299-309.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Placing search in context: The concept revisited", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Finkelstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Rivlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Solan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Wolfman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ruppin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "20", |
|
"issue": "1", |
|
"pages": "116--131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin (2002). Placing search in context: The concept revisited. ACM Transactions on Information Systems 20(1), 116-131.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Distributional structure", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1954, |
|
"venue": "Word", |
|
"volume": "10", |
|
"issue": "23", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harris, Z. (1954). Distributional structure. Word 10(23), 146-162.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Unsupervised semantic similarity computation between terms using web documents", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Iosif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "22", |
|
"issue": "11", |
|
"pages": "1637--1647", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iosif, E. and A. Potamianos (2010). Unsupervised semantic similarity computation between terms using web documents. IEEE Transactions on Knowledge and Data Engineering 22(11), 1637-1647.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Similarity Computation Using Semantic Networks Created From Web-Harvested Data", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Iosif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Natural Language Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iosif, E. and A. Potamianos (2013). Similarity Computation Using Semantic Networks Created From Web-Harvested Data. Natural Language Engineering (submitted).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Neural pathways involved in the processing of concrete and abstract nouns", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Kiehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Liddle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mendrek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Forster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hare", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Human Brain Mapping", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "225--233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kiehl, K. A., P. F. Liddle, A. M. Smith, A. Mendrek, B. B. Forster, and R. D. Hare (1999). Neural pathways involved in the processing of concrete and abstract nouns. Human Brain Mapping 7, 225-233.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Kernel models for affective lexicon creation", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Malandrakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Iosif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proc. Interspeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2977--2980", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malandrakis, N., A. Potamianos, E. Iosif, and S. Narayanan (2011). Kernel models for affective lexicon creation. In Proc. Interspeech, pp. 2977-2980.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Semi-automatic acquisition of semantic structures for understanding domain-specific natural language queries", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K.-C", |
|
"middle": [], |
|
"last": "Siu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "14", |
|
"issue": "1", |
|
"pages": "172--181", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meng, H. and K.-C. Siu (2002). Semi-automatic acquisition of semantic structures for understanding domain-specific natural language queries. IEEE Transactions on Knowledge and Data Engineering 14(1), 172-181.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Graph-Based Natural Language Processing and Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihalcea, R. and D. Radev (2011). Graph-Based Natural Language Processing and Information Retrieval. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "WordNet: An on-line lexical database", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "International Journal of Lexicography", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "235--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miller, G. (1990). WordNet: An on-line lexical database. International Journal of Lexicography 3(4), 235-312.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Contextual correlates of semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Charles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Language and Cognitive Processes", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "1--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miller, G. and W. Charles (1991). Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1), 1-28.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Retrieval of abstract semantics", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Noppeney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Price", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "NeuroImage", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "164--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noppeney, U. and C. J. Price (2004). Retrieval of abstract semantics. NeuroImage 22, 164-170.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Dependency-based construction of semantic space models", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "161--199", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pado, S. and M. Lapata (2007). Dependency-based construction of semantic space models. Computational Linguistics 33(2), 161-199.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Imagery and Verbal Processes", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Paivio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paivio, A. (1971). Imagery and Verbal Processes. New York: Holt, Rinehart and Winston.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Reversed concreteness effect for nouns in a subject with semantic dementia", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Papagno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Capasso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Miceli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Neuropsychologia", |
|
"volume": "47", |
|
"issue": "4", |
|
"pages": "1138--1148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papagno, C., R. Capasso, and G. Miceli (2009). Reversed concreteness effect for nouns in a subject with semantic dementia. Neuropsychologia 47(4), 1138-1148.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Using information content to evaluate semantic similarity in a taxonomy", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. of International Joint Conference for Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "448--453", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Resnik, P. (1995). Using information content to evaluate semantic similarity in a taxonomy. In Proc. of International Joint Conference for Artificial Intelligence, pp. 448-453.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Contextual correlates of synonymy", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Rubenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Goodenough", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "Communications of the ACM", |
|
"volume": "8", |
|
"issue": "10", |
|
"pages": "627--633", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rubenstein, H. and J. B. Goodenough (1965). Contextual correlates of synonymy. Communications of the ACM 8(10), 627-633.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Similarity of semantic relations", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computational Linguistics", |
|
"volume": "32", |
|
"issue": "3", |
|
"pages": "379--416", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Turney, P. (2006). Similarity of semantic relations. Computational Linguistics 32(3), 379-416.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Hyperlex: Lexical cartography for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "V\u00e9ronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Computer Speech and Language", |
|
"volume": "18", |
|
"issue": "3", |
|
"pages": "223--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V\u00e9ronis, J. (2004). Hyperlex: Lexical cartography for information retrieval. Computer Speech and Language 18(3), 223-252.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A graph model for unsupervised lexical acquisition", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Widdows", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dorow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of the 19th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1093--1099", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Widdows, D. and B. Dorow (2002). A graph model for unsupervised lexical acquisition. In Proc. of the 19th International Conference on Computational Linguistics, pp. 1093-1099.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Correlation as a function of number of neighbors for network-based metrics. Max-sense M n (D/Q H ) for datasets: (a) English and (c) Greek. Attributional R n (Q H /D) for datasets: (b) English and (d) Greek." |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Language: Number of</td><td colspan=\"2\">Baseline</td><td colspan=\"4\">Network Neighbor selection / Similarity computation</td></tr><tr><td>dataset</td><td>pairs</td><td>D</td><td>Q H</td><td>metric</td><td colspan=\"3\">D/D D/Q H Q H /D</td><td>Q H /Q H</td></tr><tr><td>English:</td><td>272</td><td colspan=\"3\">0.22 0.30 M n=100</td><td>0.64</td><td>0.64</td><td>0.47</td><td>0.46</td></tr><tr><td>WS353</td><td/><td/><td/><td>R n=100</td><td>0.50</td><td>0.14</td><td>0.56</td><td>0.57</td></tr><tr><td>Greek:</td><td>99</td><td colspan=\"3\">0.25 0.13 M n=100</td><td>0.51</td><td>0.51</td><td>0.04</td><td>0.04</td></tr><tr><td>GIP</td><td/><td/><td/><td colspan=\"2\">R n=100 -0.11</td><td>0.03</td><td>0.66</td><td>0.11</td></tr><tr><td colspan=\"8\">Table 1: Pearson correlation with human ratings for neighborhood-based metrics for English and Greek datasets.</td></tr></table>", |
|
"text": "with respect to the English (WS353) and Greek (GIP) datasets. Baseline performance (i.e., no use of the network) is also shown for co-occurrence-based metric D and context-based metric Q H . For the max-sense", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Distribution of abstract vs. concrete nouns in (abstract/concrete noun) neighbourhoods.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |