{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:33:21.581175Z"
},
"title": "Hypernym-LIBre: A free Web-based Corpus for Hypernym Detection",
"authors": [
{
"first": "Shaurya",
"middle": [],
"last": "Rawat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Madrid",
"location": {
"settlement": "Madrid",
"country": "Spain"
}
},
"email": "[email protected]."
},
{
"first": "Mariano",
"middle": [],
"last": "Rico",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Madrid",
"location": {
"settlement": "Madrid",
"country": "Spain"
}
},
"email": "mariano.rico@"
},
{
"first": "Oscar",
"middle": [],
"last": "Corcho",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Madrid",
"location": {
"settlement": "Madrid",
"country": "Spain"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a web-based corpus for hypernym detection which consists of 32 GB of high quality English paragraphs along with their part-of-speech tagged and dependency parsed versions. One of the main advantages of this corpus is that it is available under an open license while providing similar results for training and testing on state-of-the-art methods and techniques for detecting hypernyms, which makes it a good alternative to currently used corpora which are not available freely. The corpus has been created by cleaning and pre-processing the existing UMBC web-corpus and English Wikipedia.We detail existing methods for hypernym detection and analyze the state-of-the-art techniques using our corpus as a text source. We evaluate the corpus using 5 datasets and 4 models and compare them.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a web-based corpus for hypernym detection which consists of 32 GB of high quality English paragraphs along with their part-of-speech tagged and dependency parsed versions. One of the main advantages of this corpus is that it is available under an open license while providing similar results for training and testing on state-of-the-art methods and techniques for detecting hypernyms, which makes it a good alternative to currently used corpora which are not available freely. The corpus has been created by cleaning and pre-processing the existing UMBC web-corpus and English Wikipedia.We detail existing methods for hypernym detection and analyze the state-of-the-art techniques using our corpus as a text source. We evaluate the corpus using 5 datasets and 4 models and compare them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Hyponyms are terms whose semantic field lies within that of another term, which is called its Hypernym. They capture the 'is-a' or 'type-of' relationship between terms. It is also sometimes referred to as the umbrella term or the blanket term. For example: \"Spain is a country\". In this case, 'Spain' is an instance of the type 'country' and therefore 'country' is its hypernym. The relationship can also exist between classes. For example: \"car is a vehicle\". Here, both 'car' and 'vehicle' are classes as there can be multiple types of both and this is an example of class-class relationship in hypernymy. Terms that have the same hypernym are called co-hyponyms. For example: Spain and France are co-hyponyms as they have the same hypernym, country. The earliest attempts at detecting hypernym pairs from text started with the introduction of Hearst patterns (Hearst, 1992) . This approach attempted to extract the hypernyms from the text using lexico-synctactic patterns that could capture the contexts in which hyponym-hypernym pairs occur in text. These patterns take advantage of noun phrases in a given corpus. Even though Hearst patterns may capture the hyponym-hypernym pairs from the corpus, they suffer from sparsity, that is, if the pairs do not follow the exact pattern that is used, then no relation is picked up.",
"cite_spans": [
{
"start": 862,
"end": 876,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Recent works are now moving to the distributional methods for hypernym detection which are based on the DIH (distributional inclusion hypotheses) (Geffet and Dagan, 2005) , which states that the contexts in which a narrower term like 'Spain' occurs should be a subset of the contexts in which the broader term 'country' occurs. The measures in this space follow on from the creation of distributional semantic spaces and then use inclusion (Weeds et al., 2004) or non-inclusion (Lenci and Benotto, 2012) measures to detect if the hypernym relation holds. There is an alternative to the inclusion hypotheses, called the informativeness hypotheses, which uses entropy instead of inclusion contexts. This has been covered in Santus et al. (2014) and furthered in Shwartz et al. (2016b) . Along with distributional approaches, there are some machine learning based approaches that introduce the idea of using dependency paths as features for known hypernym pairs (Snow et al., 2005) and further work branching out from this using satellite links (Sheena et al., 2016) . Both referenced works train a classifier to predict whether the relation holds between two terms. There has also been work in the field of using distributional semantic spaces called embeddings (Mikolov et al., 2013; Pennington et al., 2014) to train classifiers for predicting hypernymy. Recent works on hypernym detection have used Wikipedia derived corpora (Shwartz et al., 2016a) or Gigaword (Graff, David, and Christopher Cieri, 2011) concatenated with Wikipedia (Roller et al., 2018) . Evaluation of extractions from these corpora has been done using 5 datasets which will be covered later in this paper (Section 3.2.3). Being consistent with Roller et al. (2018) and Shwartz et al. (2016b) , average precision is used as a metric to evaluate extractions and predict hypernymy between pairs in all datasets. In this paper, we first describe the two corpora from which our corpus is derived. We also detail the various approaches to hypernym detection and our methodology in extracting candidate pairs from the corpus. Finally, we describe the evaluation datasets used and compare our results to the current state-of-the-art (Roller et al., 2018) . We propose a free corpus along with its POS-tagged and dependency parsed versions that produces similar results on 5 tests and 4 methods. This is the main contribution of the paper 1 along with the relevant code for implementation 2 , and the hyponym-hypernym pairs extracted.",
"cite_spans": [
{
"start": 146,
"end": 170,
"text": "(Geffet and Dagan, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 440,
"end": 460,
"text": "(Weeds et al., 2004)",
"ref_id": "BIBREF24"
},
{
"start": 478,
"end": 503,
"text": "(Lenci and Benotto, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 722,
"end": 742,
"text": "Santus et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 760,
"end": 782,
"text": "Shwartz et al. (2016b)",
"ref_id": "BIBREF20"
},
{
"start": 959,
"end": 978,
"text": "(Snow et al., 2005)",
"ref_id": "BIBREF21"
},
{
"start": 1042,
"end": 1063,
"text": "(Sheena et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 1260,
"end": 1282,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 1283,
"end": 1307,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 1426,
"end": 1449,
"text": "(Shwartz et al., 2016a)",
"ref_id": "BIBREF19"
},
{
"start": 1462,
"end": 1505,
"text": "(Graff, David, and Christopher Cieri, 2011)",
"ref_id": "BIBREF27"
},
{
"start": 1534,
"end": 1555,
"text": "(Roller et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 1715,
"end": 1735,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 1740,
"end": 1762,
"text": "Shwartz et al. (2016b)",
"ref_id": "BIBREF20"
},
{
"start": 2196,
"end": 2217,
"text": "(Roller et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Our corpus has been created as a concatenation of two webbased sources that are provided not only in their raw format but also POS-tagged with dependency path annotations using spacy (Honnibal and Montani, 2017) . Both sources are described in the following sections.",
"cite_spans": [
{
"start": 183,
"end": 211,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Description",
"sec_num": "2."
},
{
"text": "The UMBC 3 corpus (Han et al., 2013) is based on the Stanford WebBase crawl from February 2007 and contains 100 million web pages from 50,000 websites. Duplicated paragraphs and non-English words as well as strange characters were removed from the text to get 3 billion good quality English words. The corpus can be downloaded freely as a 13GB tar file which when uncompressed, comes to around 48GB of text + part-of-speech tagged files. There are 408 files which contain English text in the paragraph format and 408 files that are the same paragraphs but part-of-speech tagged.",
"cite_spans": [
{
"start": 18,
"end": 36,
"text": "(Han et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UMBC Web Corpus",
"sec_num": "2.1."
},
{
"text": "The English Wikipedia corpus is a widely used corpus in the field of Computational Linguistics and Natural Language Processing. It provides data for various fields of research as a one-stop online free encyclopedia. It also provides various APIs for extracting specific information and the entire Wikipedia in downloadable format 4 either in XML or SQL for directly integrating into a database for further analyses. Wikipedia as a corpus is especially useful in the field of Hypernym detection because it covers a variety of topics which can be extracted as candidate pairs for satisfying the relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Corpus",
"sec_num": "2.2."
},
{
"text": "Parsing of our corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging and Dependency",
"sec_num": "2.3."
},
{
"text": "The UMBC corpus comes with 408 files of POS-tagged version of the their text counterparts which is almost 30GB. According to Han et al. (2013) , the corpus was POS-tagged using the Stanford POS Tagger (Toutanova and Manning, 2000) . As the POS-tagged version of UMBC is quite dated and we needed to POS-tag Wikipedia as well to extract noun-phrases from the corpora, we used the multi-task CNN(Convolutional Neural Network) from spacy (Honnibal and Montani, 2017) for the concatenation of both. Although we do not use dependency parsing in our models or experiments, it is useful for implementing some distributional models as listed in Shwartz et al. (2016b) . We therefore, provide the dependency parsed version of the corpus as well for aiding future research in this field. This has also been performed using the dependency parser available in spacy.",
"cite_spans": [
{
"start": 125,
"end": 142,
"text": "Han et al. (2013)",
"ref_id": "BIBREF7"
},
{
"start": 201,
"end": 230,
"text": "(Toutanova and Manning, 2000)",
"ref_id": "BIBREF22"
},
{
"start": 637,
"end": 659,
"text": "Shwartz et al. (2016b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging and Dependency",
"sec_num": "2.3."
},
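{
"text": "To illustrate this step, here is a minimal sketch (ours, based on the public spaCy API rather than the authors' exact script; the model name 'en_core_web_sm' is an assumption) of tagging and dependency-parsing a sentence:\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')  # assumed English pipeline with tagger and parser\ndoc = nlp('Countries such as Spain, France and Germany signed the treaty.')\nfor token in doc:\n    # token.tag_ is the fine-grained POS tag, token.dep_ the dependency label,\n    # token.head is the governing token in the parse tree\n    print(token.text, token.tag_, token.dep_, token.head.text)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging and Dependency",
"sec_num": "2.3."
},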
{
"text": "We analyze the state-of-the-art pattern-based methods for hypernym detection from Roller et al. (2018) and our evaluation shows that the results using our corpus are similar to the results from the alternate paid corpus mentioned before.",
"cite_spans": [
{
"start": 82,
"end": 102,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Detection",
"sec_num": "3."
},
{
"text": "There are 3 main groups of approaches for hypernym detection that we enlist below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches for Hypernym Detection",
"sec_num": "3.1."
},
{
"text": "Pattern-based methods are the current state-of-the-art in Hypernym detection (Roller et al., 2018) . These methods use lexico-syntactic patterns (LSPs) to extract hypernym pairs based on their linguistic structure. The most popular patterns were proposed by Hearst (1992) , as shown in Table 1 , where N P stands for noun phrases. Apart from the regular Hearst Patterns, more patterns can be used to extract hypernym-hyponyms from a corpus.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Roller et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 258,
"end": 271,
"text": "Hearst (1992)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 286,
"end": 293,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": "3.1.1."
},
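{
"text": "To make the matching concrete, here is a minimal sketch (ours, not the paper's released implementation) of Pattern #1 as a regular expression; it assumes noun phrases have already been collapsed into single NP_ tokens, which is a simplification:\n\nimport re\n\nsentence = 'NP_countries such as NP_spain , NP_france and NP_germany .'\npattern = re.compile(r'(NP_[a-z]+) such as ((?:NP_[a-z]+ ?,? ?(?:and|or)? ?)+)')\nm = pattern.search(sentence)\nif m:\n    hypernym = m.group(1)\n    hyponyms = re.findall(r'NP_[a-z]+', m.group(2))\n    # each extracted hyponym is paired with the extracted hypernym\n    pairs = [(hypo, hypernym) for hypo in hyponyms]\n    print(pairs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": "3.1.1."
},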
{
"text": "These methods involve the formation of distributional semantic spaces or DSMs to capture the contexts in which a word occurs. It is closely linked to how word embeddings like Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) are formed. A vector space is created based on these contexts, and can be used to determine whether two words hold the hypernym relation.Vector spaces can be created using window-based approaches(taking a fixed window around the target word) or dependency-tree based(taking the parent and sister node of the target word in a dependency tree). For example, Let us consider a sentence: \"Trade laws in Uganda are similar to those in South Africa.\" In this sentence, if we do not know what Uganda is, looking at the contexts surrounding this word and projecting it into a vector space of similar contexts, we can infer that it must be a country. A common method for checking for similarity in distributional spaces is Cosine Similarity (Dillon, 1983) . After the creation of such a distributional semantic space, various measures can be applied for hypernymy detection. All measures are variants of the DIH (Distributional Inclusion Hypothesis) (Geffet and Dagan, 2005) , which states that a narrower term's contexts will always be a subset of the broader term's contexts. For example: The context in which a narrower term like dog appears will be always be a subset of the contexts of a broader term like animal. All DIH measures are defined for large, sparse and positively valued distributional spaces. There are 3 main variants based on this:",
"cite_spans": [
{
"start": 184,
"end": 206,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 974,
"end": 988,
"text": "(Dillon, 1983)",
"ref_id": "BIBREF5"
},
{
"start": 1183,
"end": 1207,
"text": "(Geffet and Dagan, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "\u2022 WeedsPrec (Weeds et al., 2004) which captures contexts of x that are included in the set of a broader term's contexts like y",
"cite_spans": [
{
"start": 12,
"end": 32,
"text": "(Weeds et al., 2004)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W eedsP rec(x, y) = n i=1 x i * 1 yi > 0 n i=1 x i",
"eq_num": "(1)"
}
],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "\u2022 invCL (Lenci and Benotto, 2012) which uses distributional inclusion as well as distributional exclusion of the contexts of the two words. It uses the inclusion variant from Clarke (2009) and adds a non-inclusion element to it.",
"cite_spans": [
{
"start": 8,
"end": 33,
"text": "(Lenci and Benotto, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 175,
"end": 188,
"text": "Clarke (2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "CL(x, y) = n i=1 min(x i , y i i=1 nx i (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Hearst Patterns",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Pattern #1 N P 0 such as N P 1 , N P 2 . . . , (and | or) N P n Example: \" Countries such as Spain, France and Germany.\" Extracts: N P 0 : Countries (hypernym), N P 1 : Spain (hyponym), N P 2 : France (hyponym)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Pattern #2 such N P 0 as {N P 1 , } * {(or|and)} N P n Example: \"such flowers as Hibiscus and Rose.\" Extracts: N P 0 : Flowers (hypernym), N P 1 : Hibiscus (hyponym) and N P 2 : Rose (hyponym)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Pattern #3 N P 0 {, N P 1 } * {, } or other N P 2 Example: \"Enid Blyton, Mario Puzo or other authors.\" Extracts: N P 0 : Enid Blyton (hyponym), N P 1 : Mario Puzo (hyponym), N P 2 : authors (hypernym)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Pattern #4 N P 0 {, N P 1 } * {, } and other N P 2 Example: \"Socrates, Plato and other philosophers.\" Extracts: N P 0 : Socrates (hyponym), N P 1 : Plato (hyponym), N P 2 : philosophers (hypernym)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Pattern #5 N P 0 {, } including {N P 1 , } * {or|and}N P 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Example: \"Fishes including Dolphins and Rays.\" Extracts: N P 0 : Fishes (hypernym), N P 1 : Dolphins (hyponym), N P 2 : Rays (hyponym)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Pattern #6 N P 0 {, } especially {N P 1 , } * {or|and}N P 2 Example: \"East European countries especially Bosnia and Hungary.\" Extracts: N P 0 : East European countries (hypernym), N P 1 : Bosnia (hyponym), N P 2 : Hungary (hyponym) Hearst(1992) .",
"cite_spans": [
{
"start": 232,
"end": 244,
"text": "Hearst(1992)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "invCL(x, y) = CL(x, y) * (1 \u2212 CL(y, x) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "\u2022 SLQS (Santus et al., 2014; Shwartz et al., 2016b) which is based on the alternate informativeness hypothesis. It depends on the median entropy of a term's top N contexts. Here N becomes the hyperparameter for the model.",
"cite_spans": [
{
"start": 7,
"end": 28,
"text": "(Santus et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 29,
"end": 51,
"text": "Shwartz et al., 2016b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E x = median N i=1 [H(c i )]",
"eq_num": "(4)"
}
],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": ", where H is the Shannon entropy. Then SLQS model is defined as the ratio of its application on both the terms in the pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "SLQS(x, y) = 1 \u2212 E x E y",
"eq_num": "(5)"
}
],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
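{
"text": "The three measures can be written down in a few lines. Below is a minimal numpy sketch (ours; dense vectors are used for clarity, whereas real distributional spaces are large and sparse), where x and y are non-negative context-count vectors and, for SLQS, the median entropies are assumed to be precomputed:\n\nimport numpy as np\n\ndef weeds_prec(x, y):\n    # share of x's context mass that falls on contexts also observed with y\n    return np.sum(x * (y > 0)) / np.sum(x)\n\ndef cl(x, y):\n    # Clarke (2009) degree of inclusion of x's contexts in y's contexts\n    return np.sum(np.minimum(x, y)) / np.sum(x)\n\ndef inv_cl(x, y):\n    # inclusion of x in y combined with non-inclusion of y in x\n    return cl(x, y) * (1.0 - cl(y, x))\n\ndef slqs(e_x, e_y):\n    # e_x, e_y: median Shannon entropy of the top-N contexts of each term\n    return 1.0 - e_x / e_y\n\nx = np.array([3.0, 1.0, 0.0, 2.0])  # hypothetical context counts for a narrow term\ny = np.array([5.0, 2.0, 4.0, 1.0])  # hypothetical context counts for a broad term\nprint(weeds_prec(x, y), inv_cl(x, y), slqs(2.1, 3.4))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},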
{
"text": "3.1.3. Machine Learning Based Approaches Supervised learning methods have been used to classify whether two words hold the hypernymy relation or not. Methods such as in Snow et al. (2005) and Sheena et al. (2016) , create a training set with dependency paths between known hypernym-hyponym pairs as the features and the target as a binary variable whether that dependency path leads to a hypernymy relation or not. This task then becomes a binary classification task and can be used as a Hypernym classifier between a pair of words, given the dependency path that links them. There has been recent progress in using neural networks and spherical embeddings (Wang et al., 2019) and in combining pattern-based approaches with nearest-neighbor candidate pairs (Held and Habash, 2019) . However, these have not been considered in this study and are beyond the scope of this paper.",
"cite_spans": [
{
"start": 169,
"end": 187,
"text": "Snow et al. (2005)",
"ref_id": "BIBREF21"
},
{
"start": 192,
"end": 212,
"text": "Sheena et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 657,
"end": 676,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 757,
"end": 780,
"text": "(Held and Habash, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Distributional Methods",
"sec_num": "3.1.2."
},
{
"text": "Roller et al. (2018) conclude that pattern-based methods outperform distributional methods for Hypernym detection. In order to validate extractions, a corpus is required to match the patterns and obtain candidate hypernymhyponym pairs. The dataset used in Roller et al. (2018) consisted of the concatenation of the Gigaword (Graff, David, and Christopher Cieri, 2011) and the Wikipedia corpus. However, Gigaword is a paid corpus and requires fees for access. We used an alternate corpus derived from the concatenation of the UMBC and the Wikipedia corpus. A relevant result is that using our free corpus, we were able to achieve similar state-of-the-art results for all the datasets the extractions were validated on.",
"cite_spans": [
{
"start": 256,
"end": 276,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 324,
"end": 367,
"text": "(Graff, David, and Christopher Cieri, 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Pattern-based Methodology for Hypernym Detection",
"sec_num": "3.2."
},
{
"text": "We now outline our methodology for obtaining these results using Pattern-based methods for Hypernym detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Pattern-based Methodology for Hypernym Detection",
"sec_num": "3.2."
},
{
"text": "Pairs were extracted from the UMBC+Wikipedia corpus as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},
{
"text": "1. Convert the Hearst Patterns shown in Table 1 into regular expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},
{
"text": "2. Pre-process and clean the corpus by removing special characters like #,$, HTML tags etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},
{
"text": "3. Split the corpus into sentences and tokenize each sentence into words such that we get a list of sentences where each sentence is a list of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},
{
"text": "4. Part-of-speech tag the words in each sentence with the Perceptron Tagger 5 6 .",
"cite_spans": [
{
"start": 76,
"end": 79,
"text": "5 6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},
{
"text": "5. Extract noun phrases from the text. Sequential noun phrases are combined into one with a single 'NP ' header 6. Match Hearst Patterns with the text and extract hyponym-hypernym pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},
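{
"text": "A simplified sketch (ours; the released implementation may differ, and the merging of sequential noun phrases is omitted for brevity) of the noun-phrase collapsing in steps 3-5, using spaCy's noun_chunks:\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')  # assumed English pipeline\ndoc = nlp('Countries such as Spain, France and Germany joined the treaty.')\nchunk_starts = {chunk.start: chunk for chunk in doc.noun_chunks}\ntokens = []\ni = 0\nwhile i < len(doc):\n    if i in chunk_starts:\n        chunk = chunk_starts[i]\n        # collapse the whole noun phrase into a single NP_ token\n        tokens.append('NP_' + '_'.join(t.lower_ for t in chunk))\n        i = chunk.end\n    else:\n        tokens.append(doc[i].text)\n        i += 1\nprint(' '.join(tokens))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pairs from the Corpus",
"sec_num": "3.2.1."
},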
{
"text": "After extracting the pairs from the corpus, we compress them to get each unique pair and the frequency of extraction from the total extractions. This creates a counts table where we have the pair extracted alongside the number of times (frequency) of occurrence. From these pairs and counts, we create a sparse cooccurrence matrix of all the words in the vocabulary where the rows are the hyponyms from each pair and the columns are the hypernyms. The value of each cell in the matrix is the number of extractions of that particular hyponymhypernym pair or the frequency. The Raw Count Matrix is created by dividing each value in the matrix by the total number of extractions to get the raw probability of extracting that particular pair as a valid hyponym-hypernym pair. Let \u03c1 denote the set of extractions from corpus \u03c4 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c1 = {(x, y)} n i=1",
"eq_num": "(6)"
}
],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "Let w(x, y) denote the count of how often (x,y) has been extracted using our patterns from the corpus and the total number of extractions W be denoted as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W = (x,y)\u2208\u03c1 w(x, y)",
"eq_num": "(7)"
}
],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "In order to predict the hypernymy relation using this raw count matrix, we will use the probability of extraction of the pair as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "p(x, y) = w(x, y) W (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
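{
"text": "A minimal sketch (ours) of this raw-count step, where 'extractions' stands for the list of (hyponym, hypernym) pairs matched in the corpus:\n\nfrom collections import Counter\n\nextractions = [('spain', 'country'), ('france', 'country'), ('spain', 'country'), ('rose', 'flower')]\nw = Counter(extractions)  # w(x, y): how often each unique pair was extracted\nW = sum(w.values())       # total number of extractions\np = {pair: count / W for pair, count in w.items()}  # p(x, y) = w(x, y) / W\nprint(p[('spain', 'country')])  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},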
{
"text": "This is detailed in Algorithm 1. However, raw count probabilities for predicting this relation suffers from word occurrence inconsistencies. For example, (humans, mammals) are more likely to be extracted from the corpus than (human, vertebrates), but both are true for hypernymy as humans are both mammals and vertebrates. To deal with this, Roller et al. (2018) also used PPMI (Positive Pointwise Mutual Information) which is the mathematical translation of how likely are two words to occur together than occur independent of each other. We only take positive examples in this case as hypernymy is an asymmetric relationship. Although similarity is one of its properties, for example: blue is a color but the reverse is not true. As defined in Roller et al. (2018) ,",
"cite_spans": [
{
"start": 342,
"end": 362,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 746,
"end": 766,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "p \u2212 (x) = (x,y)\u2208\u03c1 w(x, y) W (9) p + (x) = (y,x)\u2208\u03c1 w(y, x) W (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "where, p \u2212 (x) and p + (x) are the probability that x occurs as a hyponym and hypernym respectively. Then the PPMI for the extracted pair (x,y) can be computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ppmi(x, y) = max(0, log p(x, y) p \u2212 (x)p + (y) )",
"eq_num": "(11)"
}
],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
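{
"text": "Continuing the raw-count sketch above (our illustration of equations 9-11, not the authors' exact code), the PPMI values can be derived from the pair probabilities p:\n\nimport math\nfrom collections import defaultdict\n\np_hypo = defaultdict(float)   # p-(x): probability that x occurs as a hyponym\np_hyper = defaultdict(float)  # p+(y): probability that y occurs as a hypernym\nfor (x, y), pxy in p.items():\n    p_hypo[x] += pxy\n    p_hyper[y] += pxy\n\nppmi = {(x, y): max(0.0, math.log(pxy / (p_hypo[x] * p_hyper[y])))\n        for (x, y), pxy in p.items()}\nprint(ppmi[('spain', 'country')])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},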
{
"text": "The PPMI matrix is implemented on the raw count matrix as show in Algorithm 2. While this can deal with skewed word occurrence probabilities, we still cannot handle out-of-vocabulary or unseen pairs. Therefore, we compute low-rank embeddings of the PPMI and the raw count matrix so that we can generalize to unseen or new pairs. Towards this, we use SVD or Singular Value decomposition, which is a kind of matrix factorization and reduces the matrix on the basis of the hyperparameter k which captures the number of singular values to retain and truncates all the rest. This leads to similar words having similar representations. Given, Let SVD of matrix M,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M = U V T (12) Then, Truncated SVD of M, T runc.SV D = u T x r v y",
"eq_num": "(13)"
}
],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "in which all but the r largest singular values are set to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
{
"text": "In the experiments, we consider the SVD of both the raw count as well as the PPMI matrix. Implementation and procedure are detailed in Algorithm 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},
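{
"text": "A minimal sketch (ours) of the truncated-SVD step using scipy; 'ppmi_matrix' stands for the sparse hyponym-by-hypernym PPMI (or raw count) matrix:\n\nimport numpy as np\nfrom scipy.sparse import csr_matrix\nfrom scipy.sparse.linalg import svds\n\nppmi_matrix = csr_matrix(np.random.rand(6, 5))  # toy matrix: rows = hyponyms, columns = hypernyms\nk = 2                              # rank hyperparameter (the paper uses k=100)\nU, S, Vt = svds(ppmi_matrix, k=k)  # keep only the k largest singular values\n# low-rank hypernymy score for the pair (row i, column j): u_i . diag(S) . v_j\nscore = U[0] @ np.diag(S) @ Vt[:, 1]\nprint(score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix Operations on the Extractions",
"sec_num": "3.2.2."
},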
{
"text": "The 5 datasets used in the evaluation of our pattern-based methods are consistent with Roller et al. (2018) and Shwartz et al. (2016b) . Below we outline and detail the 5 datasets used:",
"cite_spans": [
{
"start": 87,
"end": 107,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 112,
"end": 134,
"text": "Shwartz et al. (2016b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "i=1 (x,y) -hyponym,hypernym pairs 2: w(x, y) \u2190 f req(x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "frequency of extraction 3: W \u2190 (x,y)\u2208p w(x, y) total extractions 4: for i := 1 \u2192 n do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "P (x i , y i ) \u2190 w(xi,yi) W 6: end for Algorithm 2 PMI (Pointwise Mutual Information) on Raw Count Matrix 1: p \u2212 (x) \u2190 row x prob(x as hyponym) 2: p + (x) \u2190 col x prob(x as hypernym) 3: p(x, y) \u2190 w(x,y) W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "from Algorithm 1 4: for i := 1 \u2192 n do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "P M I(x i , y i ) \u2190 log p(xi,yi) p \u2212 (xi)\u2022p + (yi)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "6: end for 1. BLESS (Baroni and Lenci, 2011) This dataset contains hypernymy annotations for around 200 nouns. It contains pairs for other relations like meronymy and co-hyponymy as well. We label the hypernym pairs as true and all other relations as false. It contains 14,542 total pairs with 1,337 positive examples.",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "(Baroni and Lenci, 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "2. LEDS (Baroni et al., 2012) This dataset consists of 2,770 nouns and comes balanced with randomly shuffled positive as well as negative pairs.",
"cite_spans": [
{
"start": 8,
"end": 29,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "3. EVAL (Santus et al., 2015) This dataset contains 7,378 pairs in a mixture of hypernym ,antonym and synonym pairs. We only mark the hypernym pairs as true and all other relations as false.",
"cite_spans": [
{
"start": 8,
"end": 29,
"text": "(Santus et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "4. SHWARTZ (Shwartz et al., 2016a) This is the largest dataset used. We took a subset containing 52,578 pairs (Roller et al., 2018) .",
"cite_spans": [
{
"start": 11,
"end": 34,
"text": "(Shwartz et al., 2016a)",
"ref_id": "BIBREF19"
},
{
"start": 110,
"end": 131,
"text": "(Roller et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Datasets",
"sec_num": "3.2.3."
},
{
"text": "A dataset of 1,668 subset of the BLESS dataset containing negative pairs from other close relations to confirm the validity of our predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WBLESS (Weeds et al., 2014)",
"sec_num": "5."
},
{
"text": "Average precision is used as metric to score all the models in this paper to be consistent with Roller et al. (2018) and Shwartz et al. (2016b) .",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 121,
"end": 143,
"text": "Shwartz et al. (2016b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WBLESS (Weeds et al., 2014)",
"sec_num": "5."
},
{
"text": "The pairs were extracted, processed and evaluated on a server with 8 Intel Xeon cores and 64 GB of RAM . None of the models have a hyper-parameter except for SVD based models, for which we selected k=100 for all. We also performed experiments with various other values of k={10,20,50,100,1000} but they have been omitted from the results for the sake of brevity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup and Hardware",
"sec_num": "3.2.4."
},
{
"text": "Our evaluation shows that the results using our corpus are similar to the results from the alternate paid corpus mentioned before. Prior to evaluating, we trim all the extrac-tions from our corpus that are less than 2 as it helps control the sparsity of our extractions. Truncated SVD on the PPMI models achieve highest scores overall. This is due to its matrix completion properties as similar words have similar representations. There are some slight variations in the results which stem from the difference in the corpus used and/or the pre-processing methodologies. However, these slight variations are not unidirectional as we perform slightly better in some datasets and slight worse in others. Overall, the results are similar as can be seen in Table 2 . The metric used here is average precision. It summarizes the precision-recall curve with the weighted mean of precision at each threshold, with the increase in recall from the previous threshold used as the weight.",
"cite_spans": [],
"ref_spans": [
{
"start": 752,
"end": 759,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Comparison",
"sec_num": "3.2.5."
},
{
"text": "AP (AverageP recision) = n (R n \u2212 R n\u22121 )P n (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Comparison",
"sec_num": "3.2.5."
},
{
"text": ", where R n and P n are recall and precision at the n th threshold. The comparison is as detailed below (the darker bars with suffix ' sota' represent the results from Roller et al. (2018) and the lighter bars with suffix ' libre' represent our results):",
"cite_spans": [
{
"start": 168,
"end": 188,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Comparison",
"sec_num": "3.2.5."
},
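{
"text": "For reference, a minimal sketch (ours) of the average precision computation using scikit-learn; 'scores' stands for a model's hypernymy scores (for example the PPMI value or the truncated-SVD score) for each candidate pair in a dataset:\n\nfrom sklearn.metrics import average_precision_score\n\ny_true = [1, 0, 1, 0, 0, 1]                # gold labels: does the hypernymy relation hold?\nscores = [0.9, 0.1, 0.7, 0.4, 0.05, 0.65]  # model scores for the same pairs\nprint(average_precision_score(y_true, scores))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Comparison",
"sec_num": "3.2.5."
},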
{
"text": "On the BLESS dataset, we perform similar to Roller et al. (2018) . Here, SVD applied on the PPMI matrix achieves an Average Precision score of 0.71 as compared to 0.76. (as shown in Figure 1) 2. LEDS Dataset Similarly in LEDS, some of our models outperform Roller et al. (2018) and achieve exact scores on the highest performing SVD on PPMI matrix model. LEDS contains noun pairs which are discriminative and hence we get high scores overall. (as shown in Figure 2) 3. EVAL Dataset This dataset has some out-of-vocabulary words with Algorithm 3 SVD (Singular Value Decomposition) on Raw Count and PPMI Matrix 1: C \u2190 raw count matrix/P P M I matrix from Algorithm 1/2 2: k \u2190 100 hyperparameter k 3: respect to our corpus from which we extracted our pairs and most of the pairs are verb or adjective pairs. Since our patterns extract noun pairs from the corpus, the score gets penalized by these pairs. Here we achieve 0.42 AP on the SVD PPMI model as compared to 0.48. (as shown in Figure 3 ",
"cite_spans": [
{
"start": 44,
"end": 64,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 257,
"end": 277,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 182,
"end": 191,
"text": "Figure 1)",
"ref_id": "FIGREF0"
},
{
"start": 456,
"end": 465,
"text": "Figure 2)",
"ref_id": "FIGREF1"
},
{
"start": 981,
"end": 989,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "BLESS Dataset",
"sec_num": "1."
},
{
"text": "r \u2190 rank(C) 4: SV D(C) \u2190 U \u2022 \u2022V T 5: k \u2282 truncated SVD by selecting k=100 singular values 6: C k \u2190 U \u2022 k \u2022V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLESS Dataset",
"sec_num": "1."
},
{
"text": "We have created a new corpus that can be used by those working in methods and techniques for hypernym detection. Our evaluation shows that we get similar results when we apply state-of-the-art methods to it, hence showing that the corpus can be used for the same purpose as it has been done with previous corpora in the state-of-the-art, with the benefit of using a corpus that is available under an open license. In order to show that the usage of this corpus does not have a negative impact in comparison with the usage of previous ones, we also show how we applied all the patternbased methods described in Roller et al. (2018) with our new corpus achieving similar results. As future work, we plan to improve existing pattern-based methods using better or more patterns and generalization techniques. We also plan on testing the combination of distributional and pattern-based approaches.",
"cite_spans": [
{
"start": 610,
"end": 630,
"text": "Roller et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4."
},
{
"text": "DOI: 10.5281/zenodo.3662204 2 https://github.com/abyssnlp/Hypernym-LIBre",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://ebiquity.umbc.edu/blogger/2013/05/01/umbcwebbase-corpus-of-3b-english-words/ 4 https://dumps.wikimedia.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nltk.org/ modules/nltk/tag/perceptron.html6 Please note that while running the experiment, we POS tagged the corpus using the Perceptron Tagger. As the spacy tagger has been shown to be perform better, we POS tagged Hypernym-LIBre with it before releasing it as a language resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithm 1 Raw Count Matrix from Hearst Patterns 1: p \u2190 (x, y) n",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Algorithm 1 Raw Count Matrix from Hearst Patterns 1: p \u2190 (x, y) n",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "How we blessed distributional semantic evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M. and Lenci, A. (2011). How we blessed distribu- tional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Lan- guage Semantics, pages 1-10. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "N.-Q",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M., Bernardi, R., Do, N.-Q., and Shan, C.-c. (2012). Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23-32. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Context-theoretic semantics for natural language: an overview",
"authors": [
{
"first": "D",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the workshop on geometrical models of natural language semantics",
"volume": "",
"issue": "",
"pages": "112--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarke, D. (2009). Context-theoretic semantics for natural language: an overview. In Proceedings of the workshop on geometrical models of natural language semantics, pages 112-119.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction to modern information retrieval: G. salton and m. mcgill. mcgraw-hill",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dillon",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dillon, M. (1983). Introduction to modern information re- trieval: G. salton and m. mcgill. mcgraw-hill, new york (1983). xv+ 448 pp., 32.95 isbn 0-07-054484-0.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The distributional inclusion hypotheses and lexical entailment",
"authors": [
{
"first": "M",
"middle": [],
"last": "Geffet",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geffet, M. and Dagan, I. (2005). The distributional inclu- sion hypotheses and lexical entailment. In Proceedings of the 43rd Annual Meeting on Association for Computa- tional Linguistics, pages 107-114. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Umbc ebiquity-core: Semantic textual similarity systems",
"authors": [
{
"first": "L",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Kashyap",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weese",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity",
"volume": "1",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han, L., Kashyap, A. L., Finin, T., Mayfield, J., and Weese, J. (2013). Umbc ebiquity-core: Semantic textual sim- ilarity systems. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 1: Pro- ceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 44-52.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. A. (1992). Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th con- ference on Computational linguistics-Volume 2, pages 539-545. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The effectiveness of simple hybrid systems for hypernym discovery",
"authors": [
{
"first": "W",
"middle": [],
"last": "Held",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3362--3367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Held, W. and Habash, N. (2019). The effectiveness of sim- ple hybrid systems for hypernym discovery. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3362-3367.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Honnibal, M. and Montani, I. (2017). spacy 2: Natural language understanding with bloom embeddings, convo- lutional neural networks and incremental parsing. To ap- pear, 7(1).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying hypernyms in distributional semantic spaces",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Benotto",
"suffix": ""
}
],
"year": 2012,
"venue": "* SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "75--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenci, A. and Benotto, G. (2012). Identifying hypernyms in distributional semantic spaces. In * SEM 2012: The First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main confer- ence and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evalu- ation (SemEval 2012), pages 75-79.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111- 3119.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Pro- ceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hearst patterns revisited: Automatic hypernym detection from large text corpora",
"authors": [
{
"first": "S",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nickel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.03191"
]
},
"num": null,
"urls": [],
"raw_text": "Roller, S., Kiela, D., and Nickel, M. (2018). Hearst patterns revisited: Automatic hypernym detection from large text corpora. arXiv preprint arXiv:1806.03191.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Chasing hypernyms in vector spaces with entropy",
"authors": [
{
"first": "E",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "S",
"middle": [
"S"
],
"last": "Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "38--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santus, E., Lenci, A., Lu, Q., and Im Walde, S. S. (2014). Chasing hypernyms in vector spaces with entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguis- tics, volume 2: Short Papers, pages 38-42.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evalution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models",
"authors": [],
"year": null,
"venue": "Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications",
"volume": "",
"issue": "",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evalution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models. In Pro- ceedings of the 4th Workshop on Linked Data in Linguis- tics: Resources and Applications, pages 64-69.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic extraction of hypernym & meronym relations in english sentences using dependency parser",
"authors": [
{
"first": "N",
"middle": [],
"last": "Sheena",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Jasmine",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Procedia Computer Science",
"volume": "93",
"issue": "",
"pages": "539--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheena, N., Jasmine, S. M., and Joseph, S. (2016). Au- tomatic extraction of hypernym & meronym relations in english sentences using dependency parser. Procedia Computer Science, 93:539-546.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving hypernymy detection with an integrated path-based and distributional method",
"authors": [
{
"first": "V",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06076"
]
},
"num": null,
"urls": [],
"raw_text": "Shwartz, V., Goldberg, Y., and Dagan, I. (2016a). Improving hypernymy detection with an integrated path-based and distributional method. arXiv preprint arXiv:1603.06076.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection",
"authors": [
{
"first": "V",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.04460"
]
},
"num": null,
"urls": [],
"raw_text": "Shwartz, V., Santus, E., and Schlechtweg, D. (2016b). Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. arXiv preprint arXiv:1612.04460.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning syntactic patterns for automatic hypernym discovery",
"authors": [
{
"first": "R",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1297--1304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Snow, R., Jurafsky, D., and Ng, A. Y. (2005). Learn- ing syntactic patterns for automatic hypernym discovery. In Advances in neural information processing systems, pages 1297-1304.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Enriching the knowledge sources used in a maximum entropy part-ofspeech tagger",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 Joint SIG-DAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "13",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, K. and Manning, C. D. (2000). Enriching the knowledge sources used in a maximum entropy part-of- speech tagger. In Proceedings of the 2000 Joint SIG- DAT conference on Empirical methods in natural lan- guage processing and very large corpora: held in con- junction with the 38th Annual Meeting of the Association for Computational Linguistics-Volume 13, pages 63-70. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Spherere: Distinguishing lexical relations with hyperspherical relation embeddings",
"authors": [
{
"first": "C",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1727--1737",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, C., He, X., and Zhou, A. (2019). Spherere: Dis- tinguishing lexical relations with hyperspherical relation embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1727-1737.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "J",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, page 1015. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weeds, J., Weir, D., and McCarthy, D. (2004). Characteris- ing measures of lexical distributional similarity. In Pro- ceedings of the 20th international conference on Compu- tational Linguistics, page 1015. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning to distinguish hypernyms and cohyponyms",
"authors": [
{
"first": "J",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2249--2259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weeds, J., Clarke, D., Reffin, J., Weir, D., and Keller, B. (2014). Learning to distinguish hypernyms and co- hyponyms. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2249-2259. Dublin City Uni- versity and Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "English Gigaword. Linguistic Data Consortium, 5.0",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "953--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graff, David, and Christopher Cieri. (2011). English Gi- gaword. Linguistic Data Consortium, 5.0, ISLRN 953- 543-425-922-6.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Pattern based methods on BLESS dataset",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Pattern based methods on LEDS dataset",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Pattern based methods on EVAL datasetThis dataset is the largest dataset and it also has some Pattern based methods on SHWARTZ dataset very low frequency words which are not picked up by our Hearst Pattern based models and hence the overall score is low for all models. Here, we are at level with the state-of-the-art scores. (as shown inFigure 4) 5. WBLESS Dataset This dataset scores very high on AP across all the Pattern based methods on WBLESS dataset models. Here, the SVD applied to PPMI matrix model achieves 0.95 AP compared to 0.96. (as shown in Figure 5)",
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table/>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "T final matrix to use for predictions",
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Result Comparison</td><td/><td/><td/></tr><tr><td>Datasets</td><td/><td/><td/><td/><td>Models</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Raw Count Model</td><td colspan=\"2\">PPMI Model</td><td colspan=\"4\">SVD Raw Count Model SVD PPMI Model</td></tr><tr><td/><td>SOTA</td><td>LIBre</td><td colspan=\"3\">SOTA LIBre SOTA</td><td>LIBre</td><td>SOTA</td><td>LIBre</td></tr><tr><td>BLESS</td><td>0.49</td><td>0.47</td><td>0.45</td><td>0.42</td><td>0.66</td><td>0.64</td><td>0.76</td><td>0.71</td></tr><tr><td>LEDS</td><td>0.71</td><td>0.73</td><td>0.7</td><td>0.73</td><td>0.81</td><td>0.82</td><td>0.84</td><td>0.84</td></tr><tr><td>EVAL</td><td>0.38</td><td>0.35</td><td>0.36</td><td>0.32</td><td>0.45</td><td>0.42</td><td>0.48</td><td>0.42</td></tr><tr><td>SHWARTZ</td><td>0.29</td><td>0.36</td><td>0.28</td><td>0.33</td><td>0.41</td><td>0.53</td><td>0.44</td><td>0.47</td></tr><tr><td>WBLESS</td><td>0.74</td><td>0.74</td><td>0.72</td><td>0.73</td><td>0.91</td><td>0.93</td><td>0.96</td><td>0.95</td></tr></table>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Result Comparison between extractions from state-of-the-art corpus and Hypernym-LIBre.",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">BLESS dataset Results</td></tr><tr><td/><td>Raw Count Model</td><td/><td>0.49 0.47</td></tr><tr><td>Models</td><td>PPMI Model SVD Raw Count Model</td><td/><td>0.45 0.42</td><td>0.66 0.64</td></tr><tr><td/><td>SVD PPMI Model</td><td/><td/><td>0.76 0.71</td></tr><tr><td/><td>0.0</td><td>0.2</td><td colspan=\"2\">0.4 Scores (Average Precision) 0.6 BLESS_sota BLESS_libre 0.8</td><td>1.0</td></tr></table>"
}
}
}
}