{
"paper_id": "S12-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:34.843604Z"
},
"title": "Sentence Clustering via Projection over Term Clusters",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Kotlerman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Maya",
"middle": [],
"last": "Gorodetsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NICE Systems Ltd",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ezra",
"middle": [],
"last": "Daya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NICE Systems Ltd",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two real-life industrial datasets.",
"pdf_parse": {
"paper_id": "S12-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two real-life industrial datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Clustering is a popular technique for unsupervised text analysis, often used in industrial settings to explore the content of large collections of sentences. Yet, as may be seen from the results of our research, widespread clustering techniques, which cluster sentences directly, result in rather moderate performance when applied to short sentences, which are common in informal media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present and evaluate a novel sentence clustering scheme based on projecting sentences over term clusters. Section 2 briefly overviews common sentence clustering approaches. Our suggested clustering scheme is presented in Section 3. Section 4 describes an implementation of the scheme for a particular industrial task, followed by evaluation results in Section 5. Section 6 lists directions for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentence clustering aims at grouping sentences with similar meanings into clusters. Commonly, vector similarity measures, such as cosine, are used to define the level of similarity over bag-of-words encoding of the sentences. Then, standard clustering algorithms can be applied to group sentences into clusters (see Steinbach et al. (2000) for an overview).",
"cite_spans": [
{
"start": 316,
"end": 339,
"text": "Steinbach et al. (2000)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The most common practice is representing the sentences as vectors in term space and applying the K-means clustering algorithm (Shen et al. (2011) ; Pasquier (2010); Wang et al. (2009) ; Nomoto and Matsumoto (2001) ; Boros et al. (2001) ). An alternative approach involves partitioning a sentence connectivity graph by means of a graph clustering algorithm (Erkan and Radev (2004) ; Zha (2002) ).",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Shen et al. (2011)",
"ref_id": "BIBREF18"
},
{
"start": 165,
"end": 183,
"text": "Wang et al. (2009)",
"ref_id": "BIBREF22"
},
{
"start": 186,
"end": 213,
"text": "Nomoto and Matsumoto (2001)",
"ref_id": "BIBREF14"
},
{
"start": 216,
"end": 235,
"text": "Boros et al. (2001)",
"ref_id": "BIBREF1"
},
{
"start": 356,
"end": 379,
"text": "(Erkan and Radev (2004)",
"ref_id": "BIBREF5"
},
{
"start": 382,
"end": 392,
"text": "Zha (2002)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The main challenge for any sentence clustering approach is language variability, where the same meaning can be phrased in various ways. The shorter the sentences are, the less effective exact matching of their terms becomes. Compare the following newspaper sentence \"The bank is phasing out the EZ Checking package, with no monthly fee charged for balances over $1,500, and is instead offering customers its Basic Banking account, which carries a fee\" with two tweets regarding the same event: \"Whats wrong.. charging $$ for checking a/c\" and \"Now they want a monthly fee!\". Though each of the tweets can be found similar to the long sentence by exact term matching, the two tweets themselves do not share a single term. Yet, knowing that the words fee and charge are semantically related would allow discovering the similarity between the two tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "External resources can be utilized to provide such kind of knowledge, by which sentence representation can be enriched. Traditionally, WordNet (Fellbaum, 1998) has been used for this purpose (Shehata (2009); Chen et al. (2003) ; Hotho et al. (2003) ; Hatzivassiloglou et al. (2001) ). Yet, other resources of semantically-related terms can be beneficial, such as WordNet::Similarity (Pedersen et al., 2004) , statistical resources like that of Lin (1998) or DIRECT (Kotlerman et al., 2010) , thesauri, Wikipedia (Hu et al., 2009) , ontologies (Suchanek et al., 2007) etc.",
"cite_spans": [
{
"start": 208,
"end": 226,
"text": "Chen et al. (2003)",
"ref_id": "BIBREF2"
},
{
"start": 229,
"end": 248,
"text": "Hotho et al. (2003)",
"ref_id": "BIBREF9"
},
{
"start": 251,
"end": 281,
"text": "Hatzivassiloglou et al. (2001)",
"ref_id": "BIBREF8"
},
{
"start": 383,
"end": 406,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF16"
},
{
"start": 444,
"end": 454,
"text": "Lin (1998)",
"ref_id": "BIBREF12"
},
{
"start": 465,
"end": 489,
"text": "(Kotlerman et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 512,
"end": 529,
"text": "(Hu et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 543,
"end": 566,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "This section presents a generic sentence clustering scheme, which involves two consecutive steps: (1) generating relevant term clusters based on lexical semantic relatedness and (2) projecting the sentence set over these term clusters. Below we describe each of the two steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Clustering via Term Clusters",
"sec_num": "3"
},
{
"text": "In order to obtain term clusters, a term connectivity graph is constructed for the given sentence set and is clustered as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Obtaining Term Clusters",
"sec_num": "3.1"
},
{
"text": "1. Initially, create an undirected graph with sentence-set terms as nodes, and use lexical resources to extract semantically-related terms for each node. 2. Augment the graph nodes with the extracted terms and connect semantically-related nodes with edges. Then, partition the graph into term clusters through a graph clustering algorithm. Extracting and filtering related terms. In Section 2 we listed a number of lexical resources providing pairs of semantically-related terms. Within the suggested scheme, any combination of resources may be utilized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Obtaining Term Clusters",
"sec_num": "3.1"
},
{
"text": "Often resources contain terms that are semantically related only in certain contexts. E.g., the words visa and passport are semantically related when talking about tourism, but cannot be considered related in the banking domain, where visa usually occurs in its credit card sense. In order to discard irrelevant terms, filtering procedures can be employed. E.g., a simple filtering procedure applicable in most cases of sentence clustering in a specific domain would discard candidate related terms that do not occur sufficiently frequently in a target-domain corpus. In the example above, this procedure would prevent the insertion of passport as a term related to visa when considering the banking domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Obtaining Term Clusters",
"sec_num": "3.1"
},
{
"text": "Clustering the graph nodes. Once the term graph is constructed, a graph clustering algorithm is applied, resulting in a partition of the graph nodes (terms) into clusters. The choice of a particular algorithm is a parameter of the scheme. Many clustering algorithms consider the graph's edge weights. To exploit this, different edge weights can be assigned, reflecting both the level of confidence that the two terms are indeed validly related and the reliability of the resource that suggested the corresponding edge (e.g. WordNet synonyms are commonly considered more reliable than statistical thesauri).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Obtaining Term Clusters",
"sec_num": "3.1"
},
{
"text": "To obtain sentence clusters, the given sentence set has to be projected in some manner over the term clusters obtained in Step 1. Our projection procedure resembles unsupervised text categorization (Gliozzo et al., 2005) , with categories represented by term clusters that are not predefined but rather emerge from the analyzed data: 1. Represent term clusters and sentences as vectors in term space and calculate the similarity of each sentence with each of the term clusters. 2. Assign each sentence to the best-scoring term cluster. (We focus on hard clustering, but the procedure can be adapted for soft clustering). Various metrics for feature weighting and vector comparison may be chosen. The top terms of term-cluster vectors can be regarded as labels for the corresponding sentence clusters.",
"cite_spans": [
{
"start": 198,
"end": 220,
"text": "(Gliozzo et al., 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Projecting Sentences to Term Clusters",
"sec_num": "3.2"
},
{
"text": "Thus each sentence cluster corresponds to a single coherent cluster of related terms. This contrasts with common clustering methods, where if sentence A shares a term with B, and B shares another term with C, then A and C might appear in the same cluster even if they have no related terms in common. This behavior turns out to be harmful for short sentences, where each incidental term is influential. Our scheme ensures that each cluster contains only sentences related to the underlying term cluster, resulting in more coherent clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Projecting Sentences to Term Clusters",
"sec_num": "3.2"
},
{
"text": "In industry there is a prominent need to obtain business insights from customer interactions in contact centers or social media. Though the number of key sentences to analyze is often relatively small, such as a couple of hundred, manually analyzing just a handful of clusters is much preferable to reading through the sentences one by one. This section describes our implementation of the scheme described in Section 3 for the task of clustering customer interactions, as well as the data used for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application: Clustering Customer Interactions",
"sec_num": "4"
},
{
"text": "Results and analysis are presented in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application: Clustering Customer Interactions",
"sec_num": "4"
},
{
"text": "We apply our clustering approach over two real-life datasets. The first one consists of 155 sentences containing reasons of account cancelation, retrieved from automatic transcripts of contact center interactions of an Internet Service Provider (ISP). The second one contains 194 sentences crawled from Twitter, expressing reasons for customer dissatisfaction with a certain banking company. The sentences in both datasets were gathered automatically by a rule-based extraction algorithm. Each dataset is accompanied by a small corpus of call transcripts or tweets from the corresponding domain. 1 The goal of clustering these sentences is to identify the prominent reasons of cancelation and dissatisfaction. To obtain the gold-standard (GS) annotation, sentences were manually grouped into clusters according to the reasons stated in them. Table 1 presents examples of sentences from the ISP dataset. The sentences are short, with only one or two words expressing the actual reason stated in them. We see that exact term matching is not sufficient to group the related sentences. Moreover, traditional clustering algorithms are likely to mix related and unrelated sentences, due to matching nonessential terms (e.g. husband or summer). We note that such short and noisy sentences are common in informal media, which has become a highly important channel of information in industry.",
"cite_spans": [
{
"start": 596,
"end": 597,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 842,
"end": 849,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "Our proposed sentence clustering scheme presented in Section 3 includes a number of choices. Below we describe the choices we made in our current implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation of the Clustering Scheme",
"sec_num": "4.2"
},
{
"text": "Input sentences were tokenized, lemmatized and cleaned from stopwords in order to extract content-word terms. (Table 1: Example sentences expressing 3 reasons for cancelation: the customer (1) does not use the service, (2) acquired a computer, (3) cannot afford the service. Sample rows: he hasn't been using it all summer long it's been sitting idle for about it almost a year I'm getting married my husband has a computer yeah I bought a new laptop this summer so when I said faces my husband got laid off from work well I'm them going through financial difficulties.) Candidate semantically-related terms",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implementation of the Clustering Scheme",
"sec_num": "4.2"
},
{
"text": "were extracted for each of the terms, using WordNet synonyms and derivations, as well as DIRECT 2, a directional statistical resource learnt from a news corpus. Candidate terms that did not appear in the accompanying domain corpus were filtered out as described in Section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation of the Clustering Scheme",
"sec_num": "4.2"
},
{
"text": "Edges in the term graph were weighted with the number of resources supporting the corresponding edge. To cluster the graph we used the Chinese Whispers clustering tool 3 (Biemann, 2006), whose algorithm does not require pre-setting the desired number of clusters and is reported to outperform other algorithms for several NLP tasks.",
"cite_spans": [
{
"start": 170,
"end": 185,
"text": "(Biemann, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation of the Clustering Scheme",
"sec_num": "4.2"
},
{
"text": "To generate the projection, sentences were represented as vectors of terms weighted by their frequency in each sentence. Terms of the term-cluster vectors were weighted by the number of sentences in which they occur. Similarity scores were calculated using the cosine measure. Clusters were labeled with the top terms appearing both in the underlying term cluster and in the cluster's sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation of the Clustering Scheme",
"sec_num": "4.2"
},
{
"text": "In this section we present the results of evaluating our projection approach, compared to the common K-means clustering method 4 applied to: (A) Standard bag-of-words representation of sentences;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "(B) Bag-of-words representation, where sentence's words are augmented with semantically-related terms (following the common scheme of prior work, see Section 2). We use the same set of related terms as is used by our method. (C) Representation of sentences in term-cluster space, using the term clusters generated by our method as vector features. A feature is activated in a sentence vector if it contains a term from the corresponding term cluster. Table 2 shows the results in terms of Purity, Recall (R), Precision (P) and F1 (see \"Evaluation of clustering\", Manning et al. (2008) ). Projection significantly 5 outperforms all baselines for both datasets. For completeness we experimented with applying Chinese Whispers clustering to sentence connectivity graphs, but the results were inferior to K-means. Table 3 presents sample sentences from clusters produced by projection and K-means for illustration. Our initial analysis showed that our approach indeed produces more homogeneous clusters than the baseline methods, as conjectured in Section 3.2. We consider it advantageous, since it is easier for a human to merge clusters than to reveal sub-clusters. E.g., a GS cluster of 20 sentences referring to fees and charges is covered by three projection clusters labeled fee, charge and interest rate, with 9, 8 and 2 sentences respectively. On the other hand, K-means C method places 11 out of the 20 sentences in a messy cluster of 57 sentences (see Table 3), scattering the remaining 9 sentences over 7 other clusters.",
"cite_spans": [
{
"start": 563,
"end": 584,
"text": "Manning et al. (2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 2",
"ref_id": null
},
{
"start": 810,
"end": 817,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1458,
"end": 1465,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "In our current implementation fee, charge and interest rate were not detected as semantically similar by the lexical resources we used and thus were not grouped in one term cluster. However, adding more resources may introduce additional noise. Such dependency on coverage and accuracy of resources is apparently a limitation of our approach. Yet, as our experiments indicate, using only two generic resources already yielded valuable results. 5 p=0.001 according to McNemar test (Dietterich, 1998).",
"cite_spans": [
{
"start": 480,
"end": 498,
"text": "(Dietterich, 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "We presented a novel sentence clustering scheme and evaluated its implementation, showing significantly superior performance over common sentence clustering techniques. We plan to further explore the suggested scheme by utilizing additional lexical resources and clustering algorithms. We also plan to compare our approach with co-clustering methods used in document clustering (Xu et al. (2003) , Dhillon (2001), Slonim and Tishby (2000) ).",
"cite_spans": [
{
"start": 378,
"end": 395,
"text": "(Xu et al. (2003)",
"ref_id": "BIBREF23"
},
{
"start": 414,
"end": 438,
"text": "Slonim and Tishby (2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The bank dataset with the output of the tested methods will be made publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available for download at www.cs.biu.ac.il/nlp/downloads/DIRECT.html. For each term we extract from the resource the top-5 related terms. 3 Available at http://wortschatz.informatik.uni-leipzig.de/~cbiemann/software/CW.html 4 We use the Weka (Hall et al., 2009) implementation. Due to space limitations and for more meaningful comparison we report here one value of K, which is equal to the number of clusters returned by projection (60 for the ISP and 65 for the bank dataset). For K = 20, 40 and 70 the performance was similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Chinese whispers -an efficient graph clustering algorithm and its application to natural language processing problems",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of TextGraphs: the Second Workshop on Graph Based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Biemann. 2006. Chinese whispers -an efficient graph clustering algorithm and its application to nat- ural language processing problems. In Proceedings of TextGraphs: the Second Workshop on Graph Based Methods for Natural Language Processing, pages 73- 80, New York City, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A clustering based approach to creating multi-document summaries",
"authors": [
{
"first": "Endre",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"B"
],
"last": "Kantor",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Neu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endre Boros, Paul B. Kantor, and David J. Neu. 2001. A clustering based approach to creating multi-document summaries.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Clustering and visualization in a multi-lingual multi-document summarization system",
"authors": [
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "June-Jei",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Tsei-Chun",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 25th European conference on IR research, ECIR'03",
"volume": "",
"issue": "",
"pages": "266--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsin-Hsi Chen, June-Jei Kuo, and Tsei-Chun Su. 2003. Clustering and visualization in a multi-lingual multi-document summarization system. In Proceed- ings of the 25th European conference on IR re- search, ECIR'03, pages 266-280, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Co-clustering documents and words using bipartite spectral graph partitioning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Inderjit",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dhillon",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '01",
"volume": "",
"issue": "",
"pages": "269--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjit S. Dhillon. 2001. Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the seventh ACM SIGKDD interna- tional conference on Knowledge discovery and data mining, KDD '01, pages 269-274, New York, NY, USA. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Approximate statistical tests for comparing supervised classification learning algorithms",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lexrank: graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dragomir",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "J. Artif. Int. Res",
"volume": "22",
"issue": "1",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R. Radev. 2004. Lexrank: graph-based lexical centrality as salience in text sum- marization. J. Artif. Int. Res., 22(1):457-479, Decem- ber.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "WordNet -An Electronic Lexical Database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum. 1998. WordNet -An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Investigating unsupervised learning for text categorization bootstrapping",
"authors": [
{
"first": "Alfio",
"middle": [],
"last": "Massimiliano Gliozzo",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "SIGKDD Explor. Newsl",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfio Massimiliano Gliozzo, Carlo Strapparava, and Ido Dagan. 2005. Investigating unsupervised learning for text categorization bootstrapping. In HLT/EMNLP. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: an update. SIGKDD Explor. Newsl., 11(1):10-18.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Simfinder: A flexible clustering tool for summarization",
"authors": [
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"L"
],
"last": "Klavans",
"suffix": ""
},
{
"first": "Melissa",
"middle": [
"L"
],
"last": "Holcombe",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Min Yen Kan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the NAACL Workshop on Automatic Summarization",
"volume": "",
"issue": "",
"pages": "41--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasileios Hatzivassiloglou, Judith L. Klavans, Melissa L. Holcombe, Regina Barzilay, Min yen Kan, and Kath- leen R. McKeown. 2001. Simfinder: A flexible clus- tering tool for summarization. In In Proceedings of the NAACL Workshop on Automatic Summarization, pages 41-49.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wordnet improves text document clustering",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hotho",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stumme",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Semantic Web Workshop of the 26th Annual International ACM SI-GIR Conference on Research and Development in Informaion Retrieval (SIGIR 2003)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Hotho, S. Staab, and G. Stumme. 2003. Word- net improves text document clustering. In Ying Ding, Keith van Rijsbergen, Iadh Ounis, and Joe- mon Jose, editors, Proceedings of the Semantic Web Workshop of the 26th Annual International ACM SI- GIR Conference on Research and Development in In- formaion Retrieval (SIGIR 2003), August 1, 2003, Toronto Canada. Published Online at http://de. scientificcommons.org/608322.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploiting wikipedia as external knowledge for document clustering",
"authors": [
{
"first": "Xiaohua",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Caimei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "E",
"middle": [
"K"
],
"last": "Park",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '09",
"volume": "",
"issue": "",
"pages": "389--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaohua Hu, Xiaodan Zhang, Caimei Lu, E. K. Park, and Xiaohua Zhou. 2009. Exploiting wikipedia as exter- nal knowledge for document clustering. In Proceed- ings of the 15th ACM SIGKDD international confer- ence on Knowledge discovery and data mining, KDD '09, pages 389-396, New York, NY, USA. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Directional distributional similarity for lexical inference",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Kotlerman",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Maayan",
"middle": [],
"last": "Zhitomirsky-Geffet",
"suffix": ""
}
],
"year": 2010,
"venue": "JNLE",
"volume": "16",
"issue": "",
"pages": "359--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. JNLE, 16:359-389.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic retrieval and clustering of similar words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th international conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "768--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th interna- tional conference on Computational linguistics -Vol- ume 2, COLING '98, pages 768-774, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hin- rich Sch\u00fctze. 2008. Introduction to Information Re- trieval. Cambridge University Press, Cambridge, Juli.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A new approach to unsupervised text summarization",
"authors": [
{
"first": "Tadashi",
"middle": [],
"last": "Nomoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '01",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadashi Nomoto and Yuji Matsumoto. 2001. A new ap- proach to unsupervised text summarization. In Pro- ceedings of the 24th annual international ACM SIGIR conference on Research and development in informa- tion retrieval, SIGIR '01, pages 26-34, New York, NY, USA. ACM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Single document keyphrase extraction using sentence clustering and latent dirichlet allocation",
"authors": [
{
"first": "Claude",
"middle": [],
"last": "Pasquier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10",
"volume": "5",
"issue": "",
"pages": "154--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude Pasquier. 2010. Task 5: Single document keyphrase extraction using sentence clustering and latent dirichlet allocation. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10, pages 154-157, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Wordnet::similarity: measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Demonstration Papers at HLT-NAACL 2004, HLT-NAACL-Demonstrations '04",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet::similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, HLT-NAACL-Demonstrations '04, pages 38-41, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A wordnet-based semantic model for enhancing text clustering",
"authors": [
{
"first": "Shady",
"middle": [],
"last": "Shehata",
"suffix": ""
}
],
"year": 2009,
"venue": "Data Mining Workshops, International Conference on",
"volume": "0",
"issue": "",
"pages": "477--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shady Shehata. 2009. A wordnet-based semantic model for enhancing text clustering. Data Mining Workshops, International Conference on, 0:477-482.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Integrating clustering and multi-document summarization by bi-mixture probabilistic latent semantic analysis (plsa) with sentence bases",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"H Q"
],
"last": "Ding",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Shen, Tao Li, and Chris H. Q. Ding. 2011. Integrating clustering and multi-document summarization by bi-mixture probabilistic latent semantic analysis (plsa) with sentence bases. In AAAI.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Document clustering using word clusters via the information bottleneck method",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '00",
"volume": "",
"issue": "",
"pages": "208--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Slonim and Naftali Tishby. 2000. Document clustering using word clusters via the information bottleneck method. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '00, pages 208-215, New York, NY, USA. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A comparison of document clustering techniques",
"authors": [
{
"first": "M",
"middle": [],
"last": "Steinbach",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Karypis",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2000,
"venue": "KDD Workshop on Text Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Steinbach, G. Karypis, and V. Kumar. 2000. A comparison of document clustering techniques. KDD Workshop on Text Mining.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Yago: A large ontology from wikipedia and wordnet",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A large ontology from wikipedia and wordnet.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-document summarization using sentence-based topic models",
"authors": [
{
"first": "Dingding",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shenghuo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yihong",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACLShort '09",
"volume": "",
"issue": "",
"pages": "297--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. 2009. Multi-document summarization using sentence-based topic models. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACLShort '09, pages 297-300, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Document clustering based on non-negative matrix factorization",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yihong",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '03",
"volume": "",
"issue": "",
"pages": "267--273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Xin Liu, and Yihong Gong. 2003. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '03, pages 267-273, New York, NY, USA. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering",
"authors": [
{
"first": "Hongyuan",
"middle": [],
"last": "Zha",
"suffix": ""
}
],
"year": 2002,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyuan Zha. 2002. Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering. In SIGIR, pages 113-120.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "a. Projection\ncredit card, card, mastercard, visa (38 sentences)\nXXX has the worst credit cards ever\nXXX MasterCard is the worst credit card I've ever had\nntuc do not accept XXX visa now I have to redraw $150...\nXXX card declined again, $40 dinner in SF...\nb. K-means C\nfee, charge (57 sentences)\nXXX playing games wit my interest\narguing w incompetent pol at XXX damansara perdana\nXXX's upper management are a bunch of rude pricks\nXXX are ninjas at catching fraudulent charges.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"text": "Excerpt from resulting clusterings for the bank dataset. Bank name is substituted with XXX. Cluster labels are given in italics. Two most frequent terms are assigned as cluster labels for K-means C.",
"num": null,
"type_str": "table"
}
}
}
}