|
{ |
|
"paper_id": "U18-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:12:07.718183Z" |
|
}, |
|
"title": "Cluster Labeling by Word Embeddings and WordNet's Hypernymy", |
|
"authors": [ |
|
{ |
|
"first": "Hanieh", |
|
"middle": [], |
|
"last": "Poostchi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Technology Sydney Capital Markets CRC", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Piccardi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Technology Sydney", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Cluster labeling is the assignment of representative labels to clusters of documents or words. Once assigned, the labels can play an important role in applications such as navigation, search and document classification. However, finding appropriately descriptive labels is still a challenging task. In this paper, we propose various approaches for assigning labels to word clusters by leveraging word embeddings and the synonymy and hypernymy relations in the WordNet lexical ontology. Experiments carried out using the WebAP document dataset have shown that one of the approaches stand out in the comparison and is capable of selecting labels that are reasonably aligned with those chosen by a pool of four human annotators.", |
|
"pdf_parse": { |
|
"paper_id": "U18-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Cluster labeling is the assignment of representative labels to clusters of documents or words. Once assigned, the labels can play an important role in applications such as navigation, search and document classification. However, finding appropriately descriptive labels is still a challenging task. In this paper, we propose various approaches for assigning labels to word clusters by leveraging word embeddings and the synonymy and hypernymy relations in the WordNet lexical ontology. Experiments carried out using the WebAP document dataset have shown that one of the approaches stand out in the comparison and is capable of selecting labels that are reasonably aligned with those chosen by a pool of four human annotators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Document collections are often organized into clusters of either documents or words to facilitate applications such as navigation, search and classification. The organization can prove more useful if its clusters are characterized by sets of representative labels. The task of assigning a set of labels to each individual cluster in a document organization is known as cluster labeling (Wang et al., 2014) and it can provide a useful description of the collection in addition to fundamental support for navigation and search.", |
|
"cite_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 405, |
|
"text": "(Wang et al., 2014)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Related Work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Manning et al. (2008) , cluster labeling approaches have been subdivided into i) differential cluster labeling and ii) cluster-internal labeling. The former selects cluster labels by comparing the distribution of terms in one cluster with those of the other clusters while the latter selects labels that are solely based on each cluster indi-vidually. Cluster-internal labeling approaches include computing the clusters' centroids and using them as labels, or using lists of terms with highest frequencies in the clusters. However, all these approaches can only select cluster labels from the terms and phrases that explicitly appear in the documents, possibly failing to provide an appropriate level of abstraction or description (Lau et al., 2011) . As an example, a word cluster containing words dog and wolf should not be labeled with either word, but as canids. For this reason, in this paper we explore several approaches for labeling word clusters obtained from a document collection by leveraging the synonymy and hypernymy relations in the WordNet taxonomy (Miller, 1995) , together with word embeddings (Mikolov et al., 2013; Pennington et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 24, |
|
"text": "Manning et al. (2008)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 752, |
|
"text": "(Lau et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1069, |
|
"end": 1083, |
|
"text": "(Miller, 1995)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1116, |
|
"end": 1138, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1139, |
|
"end": 1163, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Related Work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A hypernymy relation represents an asymmetric relation between a class and each of its instances. A hypernym (e.g., vertebrate) has a broader context than its hyponyms (bird, fishes, reptiles etc). Conversely, the contextual properties of the hyponyms are usually a subset of those of their hypernym(s). Hypernymy has been used extensively in natural language processing, including in recent works such as Yu et al. (2015) and HyperVec (Nguyen et al., 2017) that have proposed learning word embeddings that reflect the hypernymy relation. Based on this, we have decided to make use of available hypernym-hyponym data to propose an approach for labeling clusters of keywords by a representative selection of their hypernyms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 422, |
|
"text": "Yu et al. (2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 457, |
|
"text": "(Nguyen et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Related Work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the proposed approach, we first extract a set of keywords from the original document collection. We then apply a step of hierarchical clustering on the keywords to partition them into a hierarchy of clusters. To this aim, we represent each keyword as a real-valued vector using pre-trained word embeddings (Pennington et al., 2014) and repeatedly apply a standard clustering algorithm. For labeling the clusters, we first look up all the synonyms of the keywords and, in turn, their hypernyms in the WordNet hierarchy. We then encode the hypernyms as word embeddings and use various approaches to select them based on their distance from the clusters' centers. The experimental results over a benchmark document collection have shown that such a distance-based selection is reasonably aligned with the hypernyms selected by four, independent human annotators. As a side result, we show that the employed word embeddings spontaneously contain the hypernymy relation, offering a plausible justification for the effectiveness of the proposed method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 334, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Related Work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The proposed pipeline of processing steps is shown in Figure 1 . First, keywords are extracted from each document in turn and accumulated in an overall set of unique keywords. After mapping such keywords to pre-trained word embeddings, hierarchical clustering is applied in a top-down manner. The leaves of the constructed tree are considered as the clusters to be labeled. Finally, each cluster is labeled automatically by leveraging a combination of WordNet's hypernyms and synsets and word embeddings. The following subsections present each step in greater detail.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 62, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Proposed Pipeline", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For the keyword extraction, we have used the rapid automatic keyword extraction (RAKE) of Rose et al. (2010) . This method extracts keywords (i.e., single words or very short word sequences) from a given document collection and its main steps can be summarized as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 108, |
|
"text": "Rose et al. (2010)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. Split a document into sentences using a predefined set of sentence delimiters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "2. Split sentences into sequences of contiguous words at phrase delimiters to build the candidate set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "3. Collect the set of unique words (W ) that appear in the candidate set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "4. Compute the word co-occurrence matrix X |W |\u00d7|W | for W .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "5. Calculate word score score(w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "= deg(w)/f req(w), where deg(w) = i\u2208{1,...,|W |} X[w, i] and f req(w) = i\u2208{1,...,|W |} (X[w, i] = 0)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": ". 6. Score each candidate keyword as the sum of its member word scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "7. Select the top T scoring candidates as keywords for the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Alternatively, RAKE can use other combinations of deg(w) and f req(w) as the word scoring function. The keywords extracted from all the documents are accumulated into a set, C, ensuring uniqueness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword Extraction", |
|
"sec_num": "2.1" |
|
}, |
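To make steps 4-6 concrete, the following is a minimal, self-contained Python sketch of RAKE's word and candidate scoring. It is our simplified reading of Rose et al. (2010), not the reference implementation: the stop-word set and delimiters are toy values, and duplicate candidates simply collapse in the score dictionary.

```python
import re
from collections import defaultdict

# Toy stand-ins for RAKE's delimiter tables; the reference
# implementation uses a full stop-word list and punctuation sets.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for"}

def candidate_phrases(text):
    """Steps 1-2: split into sentences, then into runs of contiguous
    non-stop words, which form the candidate keywords."""
    phrases = []
    for sentence in re.split(r"[.!?;:]", text.lower()):
        run = []
        for w in re.findall(r"[a-z0-9']+", sentence):
            if w in STOPWORDS:
                if run:
                    phrases.append(run)
                run = []
            else:
                run.append(w)
        if run:
            phrases.append(run)
    return phrases

def rake_keywords(text, top_t=None):
    """Steps 3-7: score words by deg(w)/freq(w), then score each
    candidate as the sum of its member word scores."""
    phrases = candidate_phrases(text)
    deg, freq = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1           # number of occurrences of w
            deg[w] += len(phrase)  # co-occurrences of w within its phrase
    word_score = {w: deg[w] / freq[w] for w in freq}
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored.items(), key=lambda kv: -kv[1])[:top_t]

print(rake_keywords("Cluster labeling is the assignment of "
                    "representative labels to clusters of documents.", top_t=3))
```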
|
{ |
|
"text": "A top-down approach is used to hierarchically cluster the keywords in C. First, each component word of each keyword is mapped onto a numerical vector using pre-trained GloVe50d 1 word embeddings (Pennington et al., 2014) ; missing words are mapped to zero vectors. Then, each keyword k is represented with the average vector \u2212 \u2192 k of its component words. Then, we start from set C as the root of the tree and follow a branch-and-bound approach, where each tree node is clustered into c clusters using the k-means algorithm (Hartigan and Wong, 1979) . A node is marked as a leaf if it contains less than n keywords or it belongs to level d, the tree's depth limit. The leaf nodes are the clusters to be named with a set of verbal terms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 220, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 548, |
|
"text": "(Hartigan and Wong, 1979)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Clustering of Keywords", |
|
"sec_num": "2.2" |
|
}, |
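The clustering step can be sketched as follows. The snippet assumes an `embeddings` dict mapping words to 50-dimensional numpy vectors (e.g., parsed from the GloVe file in footnote 1); the helper names and the recursive structure are illustrative, while the clustering itself uses scikit-learn's standard KMeans.

```python
import numpy as np
from sklearn.cluster import KMeans

def keyword_vector(keyword, embeddings, dim=50):
    """Represent a keyword as the average embedding of its component
    words; words missing from the vocabulary count as zero vectors."""
    vecs = [embeddings.get(w, np.zeros(dim)) for w in keyword.split()]
    return np.mean(vecs, axis=0)

def split_node(keywords, vectors, c=8, n=100, d=4, depth=0):
    """Recursively split a node into c clusters with k-means. A node is
    a leaf if it holds fewer than n keywords or lies at depth d; the
    returned leaves are the clusters to be labeled."""
    if len(keywords) < n or depth == d:
        return [keywords]
    X = np.stack([vectors[k] for k in keywords])
    assignment = KMeans(n_clusters=c, n_init=10, random_state=0).fit_predict(X)
    leaves = []
    for j in range(c):
        child = [k for k, a in zip(keywords, assignment) if a == j]
        if child:
            leaves.extend(split_node(child, vectors, c, n, d, depth + 1))
    return leaves
```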
|
{ |
|
"text": "As discussed in Section 1, we aim to label each cluster with descriptive terms. The labels should be more general than the cluster's members to abstract the nature of the cluster. To this end, we leverage the hypernym-hyponym correspondences in the lexical ontology. First, for each cluster, we create a large set, L, of candidate labels by including the hypernyms 2 of the component words, expanded by their synonyms, of all the keywords. The synonyms are retrieved from the WordNet's sets of synonyms, called synsets. Then, we apply the four following approaches to select l labels from set L:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 FreqKey: Choose the l most frequent hypernyms of the l most frequent keywords.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 CentKey: Choose the l most central hypernyms of the l most central keywords.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 FreqHyp: Choose the l most frequent hypernyms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 CentHyp: Choose the l most central hypernyms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Approaches FreqKey and FreqHyp are based on frequencies in the collection. For performance evaluation, we sort their selected labels in descending frequency order. In CentKey and Cen-tHyp, the centrality is computed with respect to the cluster's center in the embedding space as the average vector of all its keywords", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2212 \u2192 K = 1 |K| k\u2208K \u2212 \u2192 k .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The distance between hypernym h and the cluster's center is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "d( \u2212 \u2192 h , \u2212 \u2192 K ) = || \u2212 \u2192 h \u2212 \u2212 \u2192 K ||, where \u2212 \u2192 h", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "is the average vector of the hypernym's component words. The labels selected by these two approaches are sorted in ascending distance order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cluster Labeling", |
|
"sec_num": "2.3" |
|
}, |
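A sketch of the candidate-set construction and of the CentHyp selection follows. It uses NLTK's WordNet interface restricted to noun senses (see footnote 2); the exact expansion order (hypernyms expanded by the synonyms in their synsets) is our reading of the text, and `embed` is assumed to be a callable that averages the embeddings of a label's component words, such as the keyword_vector helper above.

```python
import numpy as np
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def candidate_labels(keywords):
    """Build the candidate set L: for every component word of every
    keyword, collect the hypernyms of its noun synsets, expanded by
    the synonyms (lemma names) in each hypernym synset."""
    labels = set()
    for keyword in keywords:
        for word in keyword.split():
            for synset in wn.synsets(word, pos=wn.NOUN):
                for hyper in synset.hypernyms():
                    labels.update(name.replace("_", " ")
                                  for name in hyper.lemma_names())
    return labels

def cent_hyp(keywords, labels, vectors, embed, l=10):
    """CentHyp: rank candidate hypernyms by Euclidean distance to the
    cluster center and return the l closest, in ascending order."""
    center = np.mean([vectors[k] for k in keywords], axis=0)
    dist = {h: np.linalg.norm(embed(h) - center) for h in labels}
    return sorted(dist, key=dist.get)[:l]
```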
|
{ |
|
"text": "For the experiments, we have used the WebAP dataset 3 (Keikha et al., 2014) as the document collection. This dataset contains 6, 399 documents of diverse nature with a total of 1, 959, 777 sentences. For the RAKE software 4 , the hyper-parameters are the minimum number of characters of each keyword, the maximum number of words of each keyword, and the minimum number of times each keyword appears in the text, and they have been left to their default values of 5, 3, and 4, respectively. Likewise, parameter T has been set to its default value of one third of the words in the cooccurrence matrix. For the hierarchical clustering, we have used c = 8, n = 100 and d = 4 based on our own subjective assessment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(Keikha et al., 2014)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "3" |
|
}, |
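For reference, keyword extraction with the stated hyper-parameters would look roughly like the snippet below. The constructor signature and the SmartStoplist.txt stop-word file reflect our understanding of the linked aneesha/RAKE repository and should be checked against its README.

```python
import rake  # module from https://github.com/aneesha/RAKE

# To our understanding of the repository's interface, the Rake
# constructor takes a stop-word list plus the three hyper-parameters
# named above: min characters per keyword (5), max words per
# keyword (3), and min occurrences in the text (4).
rake_object = rake.Rake("SmartStoplist.txt", 5, 3, 4)

with open("document.txt") as f:
    keywords = rake_object.run(f.read())  # [(keyword, score), ...]

print(keywords[:10])
```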
|
{ |
|
"text": "For the evaluation, eight clusters (one from each sub-tree) were chosen to be labeled manually by four, independent human annotators. For this purpose, for each cluster, we provided the list of its keywords, K, and the candidate labels, L, to the annotators, and asked them to select the best l = 10 terms from L to describe the cluster. Initially, 3 https://ciir.cs.umass.edu/downloads/ WebAP/ 4 https://github.com/aneesha/RAKE Figure 2 : Precision at k (P @k) for k = 1, . . . , 10 averaged over the eight chosen clusters for the compared approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 350, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 437, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Annotation and Evaluation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "we had considered asking the annotators to also select representative labels from K, but a preliminary analysis showed that they were unsuitable to describe the cluster as a whole (Table 1 shows an example). Although the annotators were asked to provide their selection as a ranked list, we did not make use of their ranking order in the evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 188, |
|
"text": "(Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Annotation and Evaluation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To evaluate the prediction accuracy, for each cluster we have considered the union of the lists provided by the human annotators as the ground truth (since |L| was typically in the order of 150 \u2212 200, the intersection of the lists was often empty or minimal). As performance figure, we have decided to report the well-known precision at k (P @k) for values of k between one and ten. We have not used the recall since the ground truth had size 40 in most cases while the prediction's size was kept to l = 10 in all cases, resulting in a highest possible recall of 0.25. Figure 2 compares the average P @k for k = 1, . . . , 10 for the four proposed approaches. The two approaches based on minimum distance to the cluster center (CentKey and CentHyp) have outperformed the other two approaches based on frequencies (FreqKey and Fre-qHyp) for all values of k. This shows that the word embedding space is in good correspondence with the human judgement. Moreover, approach Cen-tHyp has outperformed all other approaches for all values of k, showing that the hypernyms' centrality in the cluster is the key property for their effective selection.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 577, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Annotation and Evaluation", |
|
"sec_num": "3.1" |
|
}, |
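The evaluation metric is straightforward to reproduce: the per-cluster ground truth is the union of the four annotators' lists, and P@k is the fraction of a method's top-k labels that fall in that union. A minimal sketch under those stated assumptions:

```python
def precision_at_k(predicted, annotator_lists, k):
    """P@k: fraction of the top-k predicted labels that appear in the
    union of the annotators' selections (the per-cluster ground truth)."""
    ground_truth = set().union(*annotator_lists)
    return sum(1 for label in predicted[:k] if label in ground_truth) / k

def average_p_at_k(predictions, annotations, k):
    """Average P@k over clusters, as reported in Figure 2."""
    scores = [precision_at_k(p, a, k) for p, a in zip(predictions, annotations)]
    return sum(scores) / len(scores)
```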
|
{ |
|
"text": "Hypernyms are more general terms than the corresponding keywords, thus we expect them to be in larger mutual distance in the word embedding Keywords website www, clearinghouse, nih website, bulletin, websites, hotline, kbr publications, pfm file, syst publication, gov web site, dhhs publication, beta site, lexis nexis document, private http, national register bulletin, daily routines, data custodian, information, serc newsletter, certified mail, informational guide, dot complaint database, coverage edit followup, local update, mass mailing, ahrq web site, homepage, journal messenger, npl site, pdf private, htm centers, org website, web site address, telephone directory, service records, page layout program, service invocation, newsletter, card reader, advisory workgroup, library boards, full text online, usg publication, webpage, bulletin boards, fbis online, teleconference info, journal url, insert libraries, headquarters files, volunteer website http, bibliographic records, vch publishers, ptd web site, tsbp newsletter, electronic bulletin boards, email addresses, ecommerce, traveler, api service, intranet, website http, newsletter nps files, mail advertisement transmitted, subscribe, nna program, npci website, bulletin board, fais information, archiving, page attachment, nondriver id, mail etiquette, ip address, national directory, web page, pdq editorial boards, aml sites, dhs site, ptd website, directory ers web site, forums, digest, beta site management, directories, ccir papers, ieee press, fips publication, org web site, clearinghouse database, monterey database, hotlines, dslip description info, danish desk files, sos web site, bna program, newsletters, inspections portal page, letterhead, app ropri, image file directory, website, electronic mail notes, web site http, customized template page, mail addresses, health http, internet questionnaire assistance, electronic bulletin board, eos directly addresses, templates directory, beta site testers, informational, dataplot auxiliary directory, coverage edit, quarterly newsletter, distributed, reader, records service, web pages. space. To explore their distribution, we have used two-dimensional multidimensional scaling (MDS) visualizations (Borg and Groenen, 2005) of selected clusters. For each cluster, the keywords set K, the hypernyms set L, and the cluster's center have all been aggregated as a single set before applying MDS. An examples is shown in Figure 3 . As can be seen, the hypernyms (blue dots) nicely distribute as a circular crown, external and concentric to the keywords (black dots), showing that the hypernymy relation corresponds empirically to a radial expansion away from the cluster's center. This likely stems from the embedding space's requirement to simultaneously enforce meaningful distances between the different keywords, the keywords and the corresponding hypernyms, and between the hypernyms themselves. The hypernyms selected by the annotators (green and magenta dots) are among the closest to the cluster's center, and thus those selected by CentHyp (red and magenta dots) have the best correspondence (magenta dots alone) among the explored approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2233, |
|
"end": 2257, |
|
"text": "(Borg and Groenen, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2450, |
|
"end": 2458, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visualization of Keywords and Hypernyms", |
|
"sec_num": "3.2" |
|
}, |
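The visualization can be reproduced along the following lines with scikit-learn's MDS and matplotlib; variable names, colors and styling are illustrative rather than those used to produce Figure 3.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

def plot_cluster(keyword_vecs, hypernym_vecs):
    """Project keywords, hypernyms and the cluster center to 2-D with
    MDS and plot them in the style of Figure 3."""
    center = np.mean(keyword_vecs, axis=0, keepdims=True)
    stacked = np.vstack([keyword_vecs, hypernym_vecs, center])
    coords = MDS(n_components=2, random_state=0).fit_transform(stacked)
    nk = len(keyword_vecs)
    plt.scatter(coords[:nk, 0], coords[:nk, 1], c="black", s=10, label="keywords")
    plt.scatter(coords[nk:-1, 0], coords[nk:-1, 1], c="blue", s=10, label="hypernyms")
    plt.scatter(coords[-1, 0], coords[-1, 1], c="turquoise", marker="*", s=200,
                label="cluster center")
    plt.legend()
    plt.show()
```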
|
{ |
|
"text": "As a detailed example, Table 1 lists all the keywords of a sample cluster and the hypernyms selected by the four human annotators and CentHyp. Some of the hypernyms selected by more than one annotator (e.g., \"electronic communication\", \"web page\" and \"computer file\") have also been successfully identified by CentHyp. On the other hand, CentHyp has selected at least two terms (\"commercial enterprise\" and \"reference book') that are unrelated to the cluster. Qualitatively, we deem the automated annotation as noticeably inferior to the human annotations, yet usable wherever manual annotation is infeasible or impractical.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 30, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Detailed Example", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "This paper has explored various approaches for labeling keyword clusters based on the hypernyms from the WordNet lexical ontology. The proposed approaches map both the keywords and their hypernyms to a word embedding space and leverage the notion of centrality in the cluster. Experiments carried out using the WebAP dataset have shown that one of the approaches (CentHyp) has outperformed all the others in terms of precision at k for all values of k, and it has provided labels which are reasonably aligned with those of a pool of annotators. We plan to test the usefulness of the labels for tasks of search expansion in the near future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "http://nlp.stanford.edu/data/ wordvecs/glove.6B.zip 2 Nouns only (not verbs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research has been funded by the Capital Markets Cooperative Research Centre in Australia and supported by Semantic Sciences Pty Ltd.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Modern Multidimensional Scaling: Theory and Applications", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Borg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"J F" |
|
], |
|
"last": "Groenen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Borg and P. J. F. Groenen. 2005. Modern Mul- tidimensional Scaling: Theory and Applications.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A K-Means Clustering Algorithm", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hartigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "JSTOR: Applied Statistics", |
|
"volume": "28", |
|
"issue": "1", |
|
"pages": "100--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J A. Hartigan and M. A. Wong. 1979. A K-Means Clustering Algorithm. JSTOR: Applied Statistics 28(1):100-108.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Retrieving Passages and Finding Answers", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Keikha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sanderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Australasian Document Computing Symposium (ADCS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Keikha, J. H. Park, W. B. Croft, and M. Sanderson. 2014. Retrieving Passages and Finding Answers. In Proceedings of the 2014 Australasian Document Computing Symposium (ADCS). pages 81-84.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic Labelling of Topic Models", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Grieser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL)", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1536--1545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. H. Lau, K. Grieser, D. Newman, and T. Baldwin. 2011. Automatic Labelling of Topic Models. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (ACL). volume 1, pages 1536- 1545.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Introduction to Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. D. Manning, P. Raghavan, and H. Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Distributed Representations of Words and Phrases and their Compositionality", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS), vol- ume 2, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "WordNet: A Lexical Database for English", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of the ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM 38(11):39- 41.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Hierarchical Embeddings for Hypernymy Detection and Directionality", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "K\u00f6eper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Schulte Im Walde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Vu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "233--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. A. Nguyen, M. K\u00f6eper, S. Schulte im Walde, and N. T. Vu. 2017. Hierarchical Embeddings for Hy- pernymy Detection and Directionality. In Proceed- ings of the 2017 Empirical Methods in Natural Lan- guage Processing (EMNLP). pages 233-243.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "GloVe: Global Vectors for Word Representation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Empirical Methods in Natural Language Processing", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Empirical Methods in Nat- ural Language Processing (EMNLP). volume 14, pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Automatic Keyword Extraction from Individual Documents", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Engel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Cramer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Cowley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Text Mining. Applications and Theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Rose, D. Engel, N. Cramer, and W. Cowley. 2010. Automatic Keyword Extraction from Individ- ual Documents. In Text Mining. Applications and Theory, Wiley-Blackwell, chapter 1, pages 1-20.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A Hierarchical Dirichlet Model for Taxonomy Expansion for Search Engines", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 23rd International Conference on World Wide Web (WWW)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "961--970", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Wang, C. Kang, Y. Chang, and J. Han. 2014. A Hi- erarchical Dirichlet Model for Taxonomy Expansion for Search Engines. In Proceedings of the 23rd In- ternational Conference on World Wide Web (WWW). pages 961-970.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning Term Embeddings for Hypernymy Identification", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1390--1397", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Z. Yu, H. Wang, X. Lin, and M. Wang. 2015. Learning Term Embeddings for Hypernymy Identification. In Proceedings of the 24th International Joint Confer- ence on Artificial Intelligence (IJCAI). pages 1390- 1397.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "The proposed cluster labeling pipeline." |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Two-dimensional visualization of an example cluster (this figure should be viewed in color). The black and blue dots are the cluster's keywords and the keywords' hypernyms, respectively. The green dots are the hypernyms selected by the human annotators, the red dots are the hypernyms selected by CentHyp, and their intersection is recolored in magenta. The cluster's center is the turquoise star." |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "electronic communication, informing, press, medium, document, electronic equipment, computer network, transmission, record CentHyp electronic communication, information measure, text file, web page, informing, print media, web site, computer file, commercial enterprise, reference book", |
|
"content": "<table><tr><td>Annotator 1</td><td>electronic communication, computer network, web page, web site, mail, text file, computer file, protocol, software, electronic equipment</td></tr><tr><td>Annotator 2</td><td>computer network, telecommunication, computer, mail, web page, information, news, press, code, software</td></tr><tr><td>Annotator 3</td><td>news, informing, medium, web page, computer file, written record, document, press, article, essay</td></tr><tr><td>Annotator 4</td><td>communication,</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "An example cluster. The hypernyms selected by CentHyp and by at least one annotator are shown in boldface.", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |