{
"paper_id": "D14-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:54:18.539545Z"
},
"title": "Nothing like Good Old Frequency: Studying Context Filters for Distributional Thesauri",
"authors": [
{
"first": "Muntsa",
"middle": [],
"last": "Padr\u00f3",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Federal University of Rio Grande do",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Marco",
"middle": [],
"last": "Idiart",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Federal University of Rio Grande do",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Much attention has been given to the impact of informativeness and similarity measures on distributional thesauri. We investigate the effects of context filters on thesaurus quality and propose the use of cooccurrence frequency as a simple and inexpensive criterion. For evaluation, we measure thesaurus agreement with WordNet and performance in answering TOEFL-like questions. Results illustrate the sensitivity of distributional thesauri to filters.",
"pdf_parse": {
"paper_id": "D14-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "Much attention has been given to the impact of informativeness and similarity measures on distributional thesauri. We investigate the effects of context filters on thesaurus quality and propose the use of cooccurrence frequency as a simple and inexpensive criterion. For evaluation, we measure thesaurus agreement with WordNet and performance in answering TOEFL-like questions. Results illustrate the sensitivity of distributional thesauri to filters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Large-scale distributional thesauri created automatically from corpora (Grefenstette, 1994; Lin, 1998; Weeds et al., 2004; Ferret, 2012) are an inexpensive and fast alternative for representing semantic relatedness between words, when manually constructed resources like WordNet (Fellbaum, 1998) are unavailable or lack coverage. To construct a distributional thesaurus, the (collocational or syntactic) contexts in which a target word occurs are used as the basis for calculating its similarity with other words. That is, two words are similar if they share a large proportion of contexts.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Grefenstette, 1994;",
"ref_id": "BIBREF13"
},
{
"start": 92,
"end": 102,
"text": "Lin, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 103,
"end": 122,
"text": "Weeds et al., 2004;",
"ref_id": "BIBREF21"
},
{
"start": 123,
"end": 136,
"text": "Ferret, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 279,
"end": 295,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Much attention has been devoted to refining thesaurus quality, improving informativeness and similarity measures (Lin, 1998; Curran and Moens, 2002; Ferret, 2010) , identifying and demoting bad neighbors (Ferret, 2013) , or using more relevant contexts (Broda et al., 2009; Biemann and Riedl, 2013) . For the latter in particular, as words vary in their collocational tendencies, it is difficult to determine how informative a given context is. To remove uninformative and noisy contexts, filters have often been applied like pointwise mutual information (PMI), lexicographer's mutual information (LMI) (Biemann and Riedl, 2013) , t-score (Piasecki et al., 2007) and z-score (Broda et al., 2009) . However, the selection of a measure and of a threshold value for these filters is generally empirically determined. We argue that these filtering parameters have a great influence on the quality of the generated thesauri.",
"cite_spans": [
{
"start": 113,
"end": 124,
"text": "(Lin, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 125,
"end": 148,
"text": "Curran and Moens, 2002;",
"ref_id": "BIBREF5"
},
{
"start": 149,
"end": 162,
"text": "Ferret, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 204,
"end": 218,
"text": "(Ferret, 2013)",
"ref_id": "BIBREF11"
},
{
"start": 253,
"end": 273,
"text": "(Broda et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 274,
"end": 298,
"text": "Biemann and Riedl, 2013)",
"ref_id": "BIBREF1"
},
{
"start": 603,
"end": 628,
"text": "(Biemann and Riedl, 2013)",
"ref_id": "BIBREF1"
},
{
"start": 639,
"end": 662,
"text": "(Piasecki et al., 2007)",
"ref_id": "BIBREF18"
},
{
"start": 675,
"end": 695,
"text": "(Broda et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of this paper is to quantify the impact of context filters on distributional thesauri. We experiment with different filter methods and measures to assess context significance. We propose the use of simple cooccurrence frequency as a filter and show that it leads to better results than more expensive measures such as LMI or PMI. Thus we propose a cheap and effective way of filtering contexts while maintaining quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows: in \u00a72 we discuss evaluation of distributional thesauri. The methodology adopted in the work and the results are discussed in \u00a73 and \u00a74. We finish with some conclusions and discussion of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a nutshell, the standard approach to build a distributional thesaurus consists of: (i) the extraction of contexts for the target words from corpora, (ii) the application of an informativeness measure to represent these contexts and (iii) the application of a similarity measure to compare sets of contexts. The contexts in which a target word appears can be extracted in terms of a window of cooccurring (content) words surrounding the target (Freitag et al., 2005; Ferret, 2012; Erk and Pado, 2010) or in terms of the syntactic dependencies in which the target appears (Lin, 1998; McCarthy et al., 2003; Weeds et al., 2004) . The informativeness of each context is calculated using measures like PMI, and t-test while the similarity between contexts is calculated using measures like Lin's (1998) , cosine, Jensen-Shannon divergence, Dice or Jaccard.",
"cite_spans": [
{
"start": 446,
"end": 468,
"text": "(Freitag et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 469,
"end": 482,
"text": "Ferret, 2012;",
"ref_id": "BIBREF10"
},
{
"start": 483,
"end": 502,
"text": "Erk and Pado, 2010)",
"ref_id": "BIBREF6"
},
{
"start": 573,
"end": 584,
"text": "(Lin, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 585,
"end": 607,
"text": "McCarthy et al., 2003;",
"ref_id": "BIBREF17"
},
{
"start": 608,
"end": 627,
"text": "Weeds et al., 2004)",
"ref_id": "BIBREF21"
},
{
"start": 788,
"end": 800,
"text": "Lin's (1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
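{
"text": "To make steps (ii) and (iii) concrete, the following minimal Python sketch (ours, not code from any of the cited systems; the toy counts and names are invented for illustration) weights contexts by PMI and compares two target words with cosine similarity:\n\nimport math\nfrom collections import Counter\n\n# Toy (target, context) cooccurrence pairs extracted from a corpus.\npairs = [('eat', 'bread'), ('eat', 'apple'), ('eat', 'apple'),\n         ('drink', 'water'), ('drink', 'apple')]\n\npair_freq = Counter(pairs)\ntarget_freq = Counter(t for t, _ in pairs)\ncontext_freq = Counter(c for _, c in pairs)\nN = len(pairs)\n\ndef pmi(target, context):\n    # Pointwise mutual information of a (target, context) pair.\n    p_tc = pair_freq[(target, context)] / N\n    p_t = target_freq[target] / N\n    p_c = context_freq[context] / N\n    return math.log2(p_tc / (p_t * p_c))\n\ndef vector(target):\n    # Context vector of a target word, weighted by PMI.\n    return {c: pmi(t, c) for (t, c) in pair_freq if t == target}\n\ndef cosine(u, v):\n    # Cosine similarity of two sparse context vectors.\n    dot = sum(u[c] * v[c] for c in set(u) & set(v))\n    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))\n    return dot / (norm(u) * norm(v)) if u and v else 0.0\n\nprint(cosine(vector('eat'), vector('drink')))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},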
{
"text": "Evaluation of the quality of distributional thesauri is a well know problem in the area (Lin, 1998; Curran and Moens, 2002) . For instance, for intrinsic evaluation, the agreement between thesauri has been examined, looking at the average similarity of a word in the thesauri (Lin, 1998) , and at the overlap and rank agreement between the thesauri for target words like nouns (Weeds et al., 2004) . Although much attention has been given to the evaluation of various informativeness and similarity measures, a careful assessment of the effects of filtering on the resulting thesauri is also needed. For instance, Biemann and Riedl (2013) found that filtering a subset of contexts based on LMI increased the similarity of a thesaurus with WordNet. In this work, we compare the impact of using different types of filters in terms of thesaurus agreement with WordNet, focusing on a distributional thesaurus of English verbs. We also propose a frequency-based saliency measure to rank and filter contexts and compare it with PMI and LMI.",
"cite_spans": [
{
"start": 88,
"end": 99,
"text": "(Lin, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 100,
"end": 123,
"text": "Curran and Moens, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 276,
"end": 287,
"text": "(Lin, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 377,
"end": 397,
"text": "(Weeds et al., 2004)",
"ref_id": "BIBREF21"
},
{
"start": 614,
"end": 638,
"text": "Biemann and Riedl (2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Extrinsic evaluation of distributional thesauri has been carried out for tasks such as English lexical substitution (McCarthy and Navigli, 2009) , phrasal verb compositionality detection (McCarthy et al., 2003) and the WordNet-based synonymy test (WBST) (Freitag et al., 2005) . For comparative purposes in this work we adopt the latter.",
"cite_spans": [
{
"start": 116,
"end": 144,
"text": "(McCarthy and Navigli, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 187,
"end": 210,
"text": "(McCarthy et al., 2003)",
"ref_id": "BIBREF17"
},
{
"start": 254,
"end": 276,
"text": "(Freitag et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We focus on thesauri of English verbs constructed from the BNC (Burnard, 2007) 1 . Contexts are extracted from syntactic dependencies generated by RASP (Briscoe et al., 2006) , using nouns (heads of NPs) which have subject and direct object relations with the target verb. Thus, each target verb is represented by a set of triples containing (i) the verb itself, (ii) a context noun and (iii) a syntactic relation (object, subject). The thesauri were constructed using Lin's (1998) method. Lin's version of the distributional hypothesis states that two words (verbs v 1 and v 2 in our case) are similar if they share a large proportion of contexts weighted by their information content, assessed with PMI (Bansal et al., 2012; Turney, 2013) .",
"cite_spans": [
{
"start": 63,
"end": 78,
"text": "(Burnard, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 152,
"end": 174,
"text": "(Briscoe et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 469,
"end": 481,
"text": "Lin's (1998)",
"ref_id": "BIBREF15"
},
{
"start": 705,
"end": 726,
"text": "(Bansal et al., 2012;",
"ref_id": "BIBREF0"
},
{
"start": 727,
"end": 740,
"text": "Turney, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
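{
"text": "As a minimal sketch of Lin's measure (our rendering of the standard definition from Lin (1998), not the authors' code), the similarity of two verbs can be computed from their PMI-weighted context features, keeping only features with positive weight:\n\n# weights_v: dict mapping a context feature, e.g. the pair\n# ('obj', 'havoc'), to its PMI weight for a given verb.\ndef lin_similarity(weights_v1, weights_v2):\n    f1 = {f for f, w in weights_v1.items() if w > 0}\n    f2 = {f for f, w in weights_v2.items() if w > 0}\n    num = sum(weights_v1[f] + weights_v2[f] for f in f1 & f2)\n    den = sum(weights_v1[f] for f in f1) + sum(weights_v2[f] for f in f2)\n    return num / den if den else 0.0\n\n# Example: two verbs sharing one of their two contexts.\nprint(lin_similarity({('obj', 'havoc'): 2.0, ('subj', 'storm'): 1.0},\n                     {('obj', 'havoc'): 1.5, ('subj', 'war'): 0.5}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},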
{
"text": "In the literature, little attention is paid to context filters. To investigate their impact, we compare two kinds of filters, and before calculating similarity using Lin's measure, we apply them to remove potentially noisy triples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022 Threshold (th): we remove triples that occur less than a threshold th. Threshold values vary from 1 to 50 counts per triple. \u2022 Relevance (p): we keep only the top p most relevant contexts for each verb, were relevance is defined according to the following measures: (a) frequency, (b) PMI, and (c) LMI (Biemann and Riedl, 2013) . Values of p vary between 10 and 1000.",
"cite_spans": [
{
"start": 304,
"end": 329,
"text": "(Biemann and Riedl, 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
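{
"text": "The sketch below (ours; pmi() is assumed to be available, e.g. as defined in the earlier sketch, and LMI follows the usual definition LMI(v, c) = count(v, c) \u00d7 PMI(v, c)) implements both filters over a verb-to-context count table:\n\n# contexts: dict mapping each verb to a dict {context: count}.\ndef threshold_filter(contexts, th):\n    # Remove triples occurring fewer than th times.\n    return {v: {c: n for c, n in ctx.items() if n >= th}\n            for v, ctx in contexts.items()}\n\ndef top_p_filter(contexts, p, relevance):\n    # Keep only the p most relevant contexts of each verb.\n    return {v: dict(sorted(ctx.items(),\n                           key=lambda cn: relevance(v, cn[0], cn[1]),\n                           reverse=True)[:p])\n            for v, ctx in contexts.items()}\n\n# The three relevance measures compared in the paper:\nfreq_rel = lambda v, c, n: n\npmi_rel = lambda v, c, n: pmi(v, c)\nlmi_rel = lambda v, c, n: n * pmi(v, c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},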
{
"text": "In this work, we want to answer two questions: (a) Do more selective filters improve intrinsic evaluation of thesaurus? and (b) Do they also help in extrinsic evaluation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "For intrinsic evaluation, we determine agreement between a distributional thesaurus and Word-Net as the path similarities for the first k distributional neighbors of a verb. A single score is obtained by averaging the similarities of all verbs with their k first neighbors. The higher this score is, the closer the neighbors are to the target in WordNet, and the better the thesaurus. Several values of k were tested and the results showed exactly the same curve shapes for all values, with WordNet similarity decreasing linearly with k. For the remainder of the paper we adopt k = 10, as it is widely used in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
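{
"text": "A sketch of this intrinsic score using NLTK's WordNet interface (ours; the aggregation over synset pairs, here the maximum, is our assumption):\n\nfrom nltk.corpus import wordnet as wn\n\ndef wn_sim(verb1, verb2):\n    # Best path similarity over all pairs of verb synsets (0 if none).\n    sims = [s1.path_similarity(s2) or 0.0\n            for s1 in wn.synsets(verb1, pos=wn.VERB)\n            for s2 in wn.synsets(verb2, pos=wn.VERB)]\n    return max(sims, default=0.0)\n\ndef thesaurus_score(neighbors, k=10):\n    # neighbors: dict verb -> neighbors sorted by decreasing similarity.\n    scores = [sum(wn_sim(v, n) for n in ns[:k]) / k\n              for v, ns in neighbors.items()]\n    return sum(scores) / len(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},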
{
"text": "For extrinsic evaluation, we use the WBST set for verbs (Freitag et al., 2005) with 7,398 questions and an average polysemy of 10.4. The task consists of choosing the most suitable synonym for a word among a set of four options. The thesaurus is used to rank the candidate answers by similarity scores, and select the first one as the correct synonym. As discussed by Freitag et al. (2005) , the upper bound reached by English native speakers is 88.4% accuracy, and simple lower bounds are 25% (random choice) and 34.5% (always choosing the most frequent option). Figure 1 shows average WordNet similarities for thesauri built filtering by frequency threshold th and by p most frequent contexts. When using a threshold filter (Figure 1 left) , high values lead to better performance for midand low-frequency verbs. This is because, for high th values, there are few low and mid-frequency verbs left, since a verb that occurs less has less chances to be seen often in the same context. The similarity for verbs with no contexts over the frequency threshold cannot be assessed and as a consequence those verbs are not included in the final thesaurus. As Figure 2 shows, the number of verbs decreases much faster for low and mid frequency verbs when th increases. 3 For example, for th = 50, there are only 7 remaining lowfrequency verbs in the thesaurus and these tend to be idiosyncratic multiword expressions. One example is wreak, and the only triple containing this verb that appeared more than 50 times is wreak havoc (71 occurrences). The neighbors of this verb are cause and play, which yield a good similarity score in WordNet. Therefore, although higher thresholds result in higher similarities for low and mid-frequency verbs, this comes at a cost, as the number of verbs included in the thesaurus decreases considerably.",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "(Freitag et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 368,
"end": 389,
"text": "Freitag et al. (2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 564,
"end": 572,
"text": "Figure 1",
"ref_id": null
},
{
"start": 726,
"end": 741,
"text": "(Figure 1 left)",
"ref_id": null
},
{
"start": 1152,
"end": 1160,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
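{
"text": "A sketch of the WBST evaluation loop (ours; the treatment of unanswerable questions, counting them against recall but not precision, follows our reading of the results reported in Section 4):\n\ndef wbst_scores(questions, sim, known_verbs):\n    # questions: (target, candidates, gold) triples; sim: thesaurus\n    # similarity, e.g. lin_similarity over the filtered contexts.\n    answered = correct = 0\n    for target, candidates, gold in questions:\n        if target not in known_verbs:\n            continue  # no thesaurus entry: hurts recall, not precision\n        answered += 1\n        best = max(candidates, key=lambda c: sim(target, c))\n        correct += best == gold\n    precision = correct / answered if answered else 0.0\n    recall = correct / len(questions) if questions else 0.0\n    return precision, recall",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},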
{
"text": "(||v|| \u2265 500), mid-frequency (150 \u2264 ||v|| < 500) and lowfrequency (||v|| < 150).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "3 For p most salient contexts, the number of verbs does not vary and is the same shown in Figure 2 for th = 1 (no filter). As expected, the best performance is obtained for high-frequency verbs and no filter, since it results in more context information per verb. Increasing th decreases similarity due to the removal of some of these contexts. In average, higher th values lead to better overall similarity among the frequency ranges (from 0.148 with th = 1 to 0.164 with th = 50). The higher the threshold, the more high-frequency verbs will prevail in the thesauri, for which the WordNet path similarities are higher.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "On the other hand, when adopting a relevance filter of keeping the p most relevant contexts for each verb (Figure 1 right) , we obtain similar results, but more stable thesauri. The number of verbs remains constant, since we keep a fixed number of contexts for each verb and verbs are not removed when the threshold is modified. Word-Net similarity increases as more contexts are taken into account, for all frequency ranges. There is a maximum around p = 200, though larger values do not lead to a drastic drop in quality. This suggests that the noise introduced by low-frequency contexts is compensated by the increase of informativeness for other contexts. An ideal balance is reached by the lowest possible p that maintains high WordNet similarity, since the lower the p the faster the thesaurus construction. In terms of saliency measure, when keeping only the p most relevant contexts, sorting them with PMI leads to much worse results than LMI or frequency, as PMI gives too much weight to infrequent combinations. This is consistent with results of Biemann and Riedl (2013) . Regarding LMI versus frequency, the results using the latter are slightly better (or with no significant difference, depending on the frequency range). The advantage of using frequency instead of LMI is that it makes the process simpler and faster while leading to equal or better performance in all frequency ranges. Therefore for the extrinsic evaluation using WBST task, we use frequency to select the p most relevant contexts and then compute Lin's similarity using only those contexts. Figure 3 shows the performance of the thesauri in the WBST task in terms of precision, recall and F1. 4 For precision, the best filter is to remove con- 4 Filters based on LMI and PMI were also tested with the texts occurring less than th times, but, this also leads to poor recall, since many verbs are left out of the thesauri and their WSBT questions cannot be answered. On the other hand, keeping the most relevant p contexts leads to more stable results and when p is high (right plot), they are similar to those shown in the left plot of Figure 3 .",
"cite_spans": [
{
"start": 1057,
"end": 1081,
"text": "Biemann and Riedl (2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 106,
"end": 122,
"text": "(Figure 1 right)",
"ref_id": null
},
{
"start": 1575,
"end": 1583,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 2119,
"end": 2127,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The answer to our questions in Section 3 is yes, more selective filters improve intrinsic and extrinsic thesaurus quality. The use of both filtering methods results in thesauri in which the neighbors of target verbs are closer in WordNet and get better scores in TOEFL-like tests. However, the fact that filtering contexts with frequency under th removes verbs in the final thesaurus is a drawback, as highlighted in the extrinsic evaluation on the WBST task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.1"
},
{
"text": "Furthermore, we demonstrated that competitive results can be obtained keeping only the p most relevant contexts per verb. On the one hand, this method leads to much more stable thesauri, with the same verbs for all values of p. On the other hand, it is important to highlight that the best results to assess the relevance of the contexts are obtained using frequency while more sophisticated filters such as LMI do not improve thesaurus quality. Although an LMI filter is relatively fast compared to dimensionality reduction techniques such as singular value decomposition (Landauer and Dumais, 1997), it is still considerably more expensive than a simple frequency filter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.1"
},
{
"text": "In short, our experiments indicate that a reasonsame results as intrinsic evaluation: sorting contexts by frequency leads to better results. able trade-off between noise, coverage and computational efficiency is obtained for p = 200 most frequent contexts, as confirmed by intrinsic and extrinsic evaluation. Frequency threshold th is not recommended: it degrades recall because the contexts for many verbs are not frequent enough. This result is useful for extracting distributional thesauri from very large corpora like the UKWaC (Ferraresi et al., 2008) by proposing an alternative that minimizes the required computational resources while efficiently removing a significant amount of noise.",
"cite_spans": [
{
"start": 532,
"end": 556,
"text": "(Ferraresi et al., 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.1"
},
{
"text": "In this paper we addressed the impact of filters on the quality of distributional thesauri, evaluating a set of standard thesauri and different filtering methods. The results suggest that the use of filters and their parameters greatly affect the thesauri generated. We show that it is better to use a filter that selects the most relevant contexts for a verb than to simply remove rare contexts. Furthermore, the best performance was obtained with the simplest method: frequency was found to be a simple and inexpensive measure of context salience. This is especially important when dealing with large amounts of data, since computing LMI for all contexts would be computationally costly. With our proposal to keep just the p most frequent contexts per verb, a great deal of contexts are cheaply removed and thus the computational power required for assessing similarity is drastically reduced. As future work, we plan to use these filters to build thesauri from larger corpora. We would like to generalize our findings to other syntactic configurations (e.g. noun-adjective) as well as to other similarity and informativeness measures. For instance, ongoing experiments indicate that the same parameters apply when Lin's similarity is replaced by cosine. Finally, we would like to compare the proposed heuristics with more sophisticated filtering strategies like singular value decomposition (Landauer and Dumais, 1997) and non-negative matrix factorization (Van de Cruys, 2009) .",
"cite_spans": [
{
"start": 1460,
"end": 1480,
"text": "(Van de Cruys, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "Even though larger corpora are available, we use a traditional carefully constructed corpus with representative samples of written English to control the quality of the thesaurus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In order to study the influence of verb frequency on the results, we divide the verbs in three groups: high-frequency",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the support of projects CAPES/COFECUB 707/11, PNPD 2484/2009, FAPERGS-INRIA 1706-2551/13-7, CNPq 312184/2012-3, 551964/2011-1, 482520/2012-4 and 312077/2012-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised translation sense clustering",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "773--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, John DeNero, and Dekang Lin. 2012. Unsupervised translation sense clustering. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 773-782, Montr\u00e9al, Canada, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Text: Now in 2D! a framework for lexical expansion with contextual similarity",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Language Modelling",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Biemann and Martin Riedl. 2013. Text: Now in 2D! a framework for lexical expansion with con- textual similarity. Journal of Language Modelling, 1(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The second release of the RASP system",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Watson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the COLING/ACL 2006 Interactive Presentation Sessions",
"volume": "",
"issue": "",
"pages": "77--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In James Curran, editor, Proc. of the COLING/ACL 2006 In- teractive Presentation Sessions, pages 77-80, Sid- ney, Australia, Jul. ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Rank-based transformation in measuring semantic relatedness",
"authors": [
{
"first": "Bartosz",
"middle": [],
"last": "Broda",
"suffix": ""
},
{
"first": "Maciej",
"middle": [],
"last": "Piasecki",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 22nd Canadian Conference on Artificial Intelligence: Advances in Artificial Intelligence, Canadian AI '09",
"volume": "",
"issue": "",
"pages": "187--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bartosz Broda, Maciej Piasecki, and Stan Szpakow- icz. 2009. Rank-based transformation in mea- suring semantic relatedness. In Proceedings of the 22nd Canadian Conference on Artificial Intel- ligence: Advances in Artificial Intelligence, Cana- dian AI '09, pages 187-190, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "User Reference Guide for the British National Corpus",
"authors": [
{
"first": "Lou",
"middle": [],
"last": "Burnard",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lou Burnard. 2007. User Reference Guide for the British National Corpus. Technical report, Oxford University Computing Services, Feb.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improvements in automatic thesaurus extraction",
"authors": [
{
"first": "R",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Curran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc.of the ACL 2002 Workshop on Unsupervised Lexical Acquisition",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Curran and Marc Moens. 2002. Improve- ments in automatic thesaurus extraction. In Proc.of the ACL 2002 Workshop on Unsupervised Lexical Acquisition, pages 59-66, Philadelphia, Pennsylva- nia, USA. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exemplar-based models for word meaning in context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pado",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of the ACL 2010 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk and Sebastian Pado. 2010. Exemplar-based models for word meaning in context. In Proc. of the ACL 2010 Conference Short Papers, pages 92-97, Uppsala, Sweden, Jun. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Lexical Database (Language, Speech, and Communication)",
"authors": [],
"year": 1998,
"venue": "",
"volume": "423",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database (Language, Speech, and Communication). MIT Press, May. 423 p.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Introducing and evaluating UKWaC, a very large web-derived corpus of English",
"authors": [
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 4th Web as Corpus Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluat- ing UKWaC, a very large web-derived corpus of En- glish. In In Proceedings of the 4th Web as Corpus Workshop (WAC-4.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Testing semantic similarity measures for extracting synonyms from a corpus",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of the Seventh LREC (LREC 2010)",
"volume": "",
"issue": "",
"pages": "3338--3343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Ferret. 2010. Testing semantic similarity mea- sures for extracting synonyms from a corpus. In Proc. of the Seventh LREC (LREC 2010), pages 3338-3343, Valetta, Malta, May. ELRA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining bootstrapping and feature selection for improving a distributional thesaurus",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
}
],
"year": 2012,
"venue": "ECAI",
"volume": "",
"issue": "",
"pages": "336--341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Ferret. 2012. Combining bootstrapping and feature selection for improving a distributional the- saurus. In ECAI, pages 336-341.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying bad semantic neighbors for improving distributional thesauri",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the 51st ACL",
"volume": "1",
"issue": "",
"pages": "561--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Ferret. 2013. Identifying bad semantic neigh- bors for improving distributional thesauri. In Proc. of the 51st ACL (Volume 1: Long Papers), pages 561-571, Sofia, Bulgaria, Aug. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "New experiments in distributional representations of synonymy",
"authors": [
{
"first": "Dayne",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Blume",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Byrnes",
"suffix": ""
},
{
"first": "Edmond",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "Sadik",
"middle": [],
"last": "Kapadia",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Rohwer",
"suffix": ""
},
{
"first": "Zhiqiang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the Ninth CoNLL (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dayne Freitag, Matthias Blume, John Byrnes, Ed- mond Chow, Sadik Kapadia, Richard Rohwer, and Zhiqiang Wang. 2005. New experiments in distri- butional representations of synonymy. In Ido Dagan and Dan Gildea, editors, Proc. of the Ninth CoNLL (CoNLL-2005), pages 25-32, University of Michi- gan, MI, USA, Jun. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Explorations in Automatic Thesaurus Discovery",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Grefenstette. 1994. Explorations in Au- tomatic Thesaurus Discovery. Springer, Norwell, MA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A solution to platos problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological review",
"volume": "",
"issue": "",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K Landauer and Susan T. Dumais. 1997. A solution to platos problem: The latent semantic anal- ysis theory of acquisition, induction, and represen- tation of knowledge. Psychological review, pages 211-240.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic retrieval and clustering of similar words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the 36th ACL and 17th COLING",
"volume": "2",
"issue": "",
"pages": "768--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. Automatic retrieval and cluster- ing of similar words. In Proc. of the 36th ACL and 17th COLING, Volume 2, pages 768-774, Montreal, Quebec, Canada, Aug. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The english lexical substitution task. Language Resources and Evaluation",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "43",
"issue": "",
"pages": "139--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy and Roberto Navigli. 2009. The en- glish lexical substitution task. Language Resources and Evaluation, 43(2):139-159.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Detecting a continuum of compositionality in phrasal verbs",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the ACL Workshop on MWEs: Analysis, Acquisition and Treatment (MWE 2003)",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy, Bill Keller, and John Carroll. 2003. Detecting a continuum of compositionality in phrasal verbs. In Francis Bond, Anna Korhonen, Diana McCarthy, and Aline Villavicencio, editors, Proc. of the ACL Workshop on MWEs: Analysis, Ac- quisition and Treatment (MWE 2003), pages 73-80, Sapporo, Japan, Jul. ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic selection of heterogeneous syntactic features in semantic similarity of polish nouns",
"authors": [
{
"first": "Maciej",
"middle": [],
"last": "Piasecki",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
},
{
"first": "Bartosz",
"middle": [],
"last": "Broda",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th international conference on Text, speech and dialogue, TSD'07",
"volume": "",
"issue": "",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maciej Piasecki, Stanislaw Szpakowicz, and Bartosz Broda. 2007. Automatic selection of heterogeneous syntactic features in semantic similarity of polish nouns. In Proceedings of the 10th international conference on Text, speech and dialogue, TSD'07, pages 99-106, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributional semantics beyond words: Supervised learning of analogy and paraphrase",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "1",
"issue": "",
"pages": "353--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2013. Distributional semantics be- yond words: Supervised learning of analogy and paraphrase. 1:353-366.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A non-negative tensor factorization model for selectional preference induction",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Van De Cruys",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Geometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "83--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Van de Cruys. 2009. A non-negative tensor factor- ization model for selectional preference induction. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 83- 90, Athens, Greece, March. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the 20th COLING",
"volume": "",
"issue": "",
"pages": "1015--1021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proc. of the 20th COLING (COL- ING 2004), pages 1015-1021, Geneva, Switzerland, Aug. ICCL.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Number of verbs per frequency ranges when filtering by context frequency threshold th",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "WBST task scores filtering by frequency threshold th (left) and p most frequent contexts (right).",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table><tr><td/><td colspan=\"3\">Filtering triples with frequency under th</td><td/><td/><td colspan=\"3\">Keeping p most frequent triples per verb</td></tr><tr><td/><td>0.25</td><td/><td/><td/><td>0.25</td><td/><td/><td/></tr><tr><td/><td>0.2</td><td/><td/><td/><td>0.2</td><td/><td/><td/></tr><tr><td>WN similarity</td><td>0.1 0.15</td><td/><td/><td>WN similarity</td><td>0.1 0.15</td><td/><td/><td/></tr><tr><td/><td/><td colspan=\"2\">all verbs</td><td/><td/><td/><td/><td>all verbs</td></tr><tr><td/><td>0.05</td><td colspan=\"2\">high frequent verbs</td><td/><td>0.05</td><td/><td colspan=\"2\">high frequent verbs</td></tr><tr><td/><td/><td colspan=\"2\">mid frequent verbs</td><td/><td/><td/><td colspan=\"2\">mid frequent verbs</td></tr><tr><td/><td>0</td><td colspan=\"2\">low frequent verbs</td><td/><td>0</td><td/><td colspan=\"2\">low frequent verbs</td></tr><tr><td/><td>1</td><td>10</td><td/><td/><td>10</td><td/><td>100</td><td/><td>1000</td></tr><tr><td/><td/><td>th</td><td/><td/><td/><td/><td>p</td><td/></tr><tr><td colspan=\"2\">Figure 1: Filter</td><td colspan=\"2\">All verbs</td><td/><td/><td colspan=\"2\">Frequency range</td><td/></tr><tr><td/><td/><td/><td/><td>Low</td><td/><td>Mid</td><td/><td>High</td></tr><tr><td/><td>No filter</td><td>-</td><td>0.148</td><td>-</td><td>0.101</td><td>-</td><td>0.144</td><td>-</td><td>0.198</td></tr><tr><td/><td>Filter low freq. contexts</td><td>th = 50</td><td>0.164</td><td>th = 50</td><td>0.202</td><td>th = 50</td><td>0.154</td><td>th = 1</td><td>0.200</td></tr><tr><td/><td>Keep p contexts (freq.)</td><td>p = 200</td><td>0.158</td><td>p = 500</td><td>0.138</td><td>p = 200</td><td>0.149</td><td>p = 200</td><td>0.206</td></tr><tr><td/><td>Keep p contexts (PMI)</td><td colspan=\"8\">p = 1000 0.139 p = 1000 0.101 p = 1000 0.136 p = 1000 0.181</td></tr><tr><td/><td>Keep p contexts (LMI)</td><td>p = 200</td><td>0.155</td><td>p = 100</td><td>0.112</td><td>p = 200</td><td>0.147</td><td>p = 200</td><td>0.208</td></tr></table>",
"text": "summarizes the parametrization leading to the best WordNet similarity for each kind of filter. In all cases we show the results obtained for different frequency ranges 2 as well as the results when averaging over all verbs.WordNet path Similarity for different frequency ranges, k=10WordNet path Similarity for different frequency ranges, k=10 WordNet scores for verb frequency ranges, filtering by frequency threshold th (left) and p most frequent contexts (right).",
"type_str": "table",
"num": null,
"html": null
}
}
}
}