{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:25.387090Z"
},
"title": "Learn The Big Picture: Representation Learning for Clustering",
"authors": [
{
"first": "Sumanta",
"middle": [],
"last": "Kashyapi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of New Hampshire",
"location": {}
},
"email": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dietz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of New Hampshire",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Existing supervised models for text clustering find it difficult to directly optimize for clustering results. This is because clustering is a discrete process and it is difficult to estimate meaningful gradient of any discrete function that can drive gradient based optimization algorithms. So, existing supervised clustering algorithms indirectly optimize for some continuous function that approximates the clustering process. We propose a scalable training strategy that directly optimizes for a discrete clustering metric. We train a BERTbased embedding model using our method and evaluate it on two publicly available datasets. We show that our method outperforms another BERT-based embedding model employing Triplet loss and other unsupervised baselines. This suggests that optimizing directly for the clustering outcome indeed yields better representations suitable for clustering.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Existing supervised models for text clustering find it difficult to directly optimize for clustering results. This is because clustering is a discrete process and it is difficult to estimate meaningful gradient of any discrete function that can drive gradient based optimization algorithms. So, existing supervised clustering algorithms indirectly optimize for some continuous function that approximates the clustering process. We propose a scalable training strategy that directly optimizes for a discrete clustering metric. We train a BERTbased embedding model using our method and evaluate it on two publicly available datasets. We show that our method outperforms another BERT-based embedding model employing Triplet loss and other unsupervised baselines. This suggests that optimizing directly for the clustering outcome indeed yields better representations suitable for clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text clustering is a well-studied problem which finds its application in a wide range of tasks: organizing documents in cluster-based information retrieval (Cutting et al., 2017; Mei and Chen, 2014) , representation of search results (Scaiella et al., 2012; Navigli and Crisafulli, 2010) , analyzing different opinions about a subject (Tsirakis et al., 2017 ) among many others. Each of these applications may focus on text contents of different granularities (e.g. words, sentences, passages, articles) but all of them follow a common high-level approach to clustering: represent the documents in form of vectors and then cluster them based on vector similarities. Although clustering is typically employed in an unsupervised setting, many semi-supervised deep learning models have been proposed recently. Many of these approaches formulate this as a representation space learning prob-lem (Yang et al., 2017 ) that projects initial document vectors into a latent vector space which is more suitable for the clustering task and generate clusters similar to some ground truth. However, most of these algorithms do not directly optimize for a clustering evaluation metric during training. Instead, they optimize for a different criterion that approximates the global clustering error. Semi-supervised clustering approaches (Basu et al., 2002) cast the clustering problem into binary classification by learning pairwise constraints extracted from the available training examples: mustlinks for sample pairs sharing the same cluster and cannot-links for different clusters. However, clustering problems with numerous small clusters produce only a few must-links among all possible links, leading to highly unbalanced training data. Consequently, the trained model is biased towards predicitng cannot-links. Learning triplet-based constraints (Dor et al., 2018) that combine a positive and a negative sample in a single triplet, mitigate such bias towards negative samples. However, the sample complexity (Bartlett, 1998) (number of samples required to cover all interactions in a dataset) grows more rapidly compared to paired samples. Also, such approximation of the original clustering problem may lead to unsatisfactory results because the optimization criterion does not always correspond with the clustering quality. These observations motivate us to hypothesize the following:",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Cutting et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 179,
"end": 198,
"text": "Mei and Chen, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 234,
"end": 257,
"text": "(Scaiella et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 258,
"end": 287,
"text": "Navigli and Crisafulli, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 335,
"end": 357,
"text": "(Tsirakis et al., 2017",
"ref_id": "BIBREF21"
},
{
"start": 891,
"end": 909,
"text": "(Yang et al., 2017",
"ref_id": "BIBREF28"
},
{
"start": 1322,
"end": 1341,
"text": "(Basu et al., 2002)",
"ref_id": "BIBREF2"
},
{
"start": 1839,
"end": 1857,
"text": "(Dor et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Instead of learning to solve some approximation of the original clustering problem, we need to directly optimize for a clustering evaluation metric in order to train a model specialized for clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Instead of sample-pairs in case of pairwise constraints or triplets in case of Triplet-loss, we can make efficient and scalable use of the available training data by presenting all inter-actions between a set of data points as a single clustering sample. This way the training approach neither suffers from unbalanced data nor from sample complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To test our hypotheses, we propose an alternative training strategy that directly draws its supervision signal from an evaluation metric that measures clustering quality to train a representation model for text documents. During training, it consumes a complete clustering example of a set of data points as a single training sample in form of an interaction matrix. Due to this, we experiment with clustering datasets containing numerous small clustering examples instead of a single instance of a large clustering problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is challenging to derive training signals directly from the clustering ground truth or a clustering evaluation metric because the clustering process is discrete. In other words, a function that estimates the clustering quality of a random partition of the input data is not continuous and hence nondifferentiable. As most supervised algorithms rely on gradient-based optimization algorithms, it is difficult for them to orchestrate a useful training process without proper gradient. So far some continuous approximation of the clustering problem is used as discussed earlier to bypass the core optimization issue. Recently a novel gradient approximation method, blackbox backpropagation (Vlastelica et al., 2019) is proposed for combinatorial problems that finds solution in a discrete space. We leverage their findings by molding the clustering problem into a combinatorial problem. This allows us to derive meaningful gradients out of the clustering process and to train a representation model by directly optimizing for a clustering evaluation metric.",
"cite_spans": [
{
"start": 690,
"end": 715,
"text": "(Vlastelica et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution: We make the following contributions through this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We develop a new training strategy for supervised clustering that directly obtains its supervision signal from optimizing a clustering metric. 1 We utilize recently proposed blackbox backpropagation technique to derive gradients from discrete clustering results that drives the training process.",
"cite_spans": [
{
"start": 146,
"end": 147,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We use our training strategy to train a BERTbased (Devlin et al., 2018) representation model suitable for topical clustering. To support the training mechanism, we design a loss function that effectively optimizes a clustering evaluation metric.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We empirically show that our method is more efficient in terms of training time and utilizing available training examples when compared to existing supervised clustering methods. The resulting representation model achieves better clustering results than other strong baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, text clustering is achieved by employing a distance-based clustering algorithm (e.g. KMeans) on vector representations of documents such as TF-IDF (Jones, 1972) . Recent works focus on learning text representaions suitable for clustering (Chen, 2017; Xu et al., 2017; Hadifar et al., 2019) . Alternatively, they explore different similarity metrics between the vectors that govern the clustering algorithm through pairwise binary constraints (Basu et al., 2002; Kulis et al., 2009) . In this work, we focus on the former -representation learning of documents, suitable for text clustering.",
"cite_spans": [
{
"start": 162,
"end": 175,
"text": "(Jones, 1972)",
"ref_id": "BIBREF11"
},
{
"start": 253,
"end": 265,
"text": "(Chen, 2017;",
"ref_id": "BIBREF4"
},
{
"start": 266,
"end": 282,
"text": "Xu et al., 2017;",
"ref_id": "BIBREF26"
},
{
"start": 283,
"end": 304,
"text": "Hadifar et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 457,
"end": 476,
"text": "(Basu et al., 2002;",
"ref_id": "BIBREF2"
},
{
"start": 477,
"end": 496,
"text": "Kulis et al., 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Deep clustering (Min et al., 2018) is an active field of research that utilizes recent advancements of deep learning techniques to improve supervised clustering. The primary focus is to learn a suitable representation space that optimizes some clustering criterion (e.g. cluster assignment loss) along with a representation criterion (e.g. reconstruction loss) (Xie et al., 2016; Li et al., 2018; Ghasedi Dizaji et al., 2017; Jiang et al., 2016) . It has also been shown that clustering criterions alone are sufficient to train such representation space (Yang et al., 2016) . However, none of these approaches attempt to receive direct supervision from a clustering evaluation metric. Motivated by earlier works that learn a representation model under pairwise binary constraints, Chang et al. (2017) envisions the clustering task as a binary classification task of paired data samples and achieves state-of-the-art results on multiple image clustering datasets. Reimers and Gurevych (2019) propose Sentence-BERT which trains a BERT-based sentence embedding model by employing Triplet loss (Dor et al., 2018) that uses triples of sentences as training samples where exactly two of them are from the same section of Wikipedia. Although both of these approaches are supervised, each training sample only consists of a fraction of the whole clustering instance. Hence, during training, these methods mostly ignore the overall relationships between multiple data samples and how they form clusters.",
"cite_spans": [
{
"start": 16,
"end": 34,
"text": "(Min et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 361,
"end": 379,
"text": "(Xie et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 380,
"end": 396,
"text": "Li et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 397,
"end": 425,
"text": "Ghasedi Dizaji et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 426,
"end": 445,
"text": "Jiang et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 554,
"end": 573,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF29"
},
{
"start": 781,
"end": 800,
"text": "Chang et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 963,
"end": 990,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF18"
},
{
"start": 1090,
"end": 1108,
"text": "(Dor et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The main hindrance of drawing a supervision signal directly from a clustering evaluation metric is the combinatorial nature of the clustering problem. Some research introduce differentiable building blocks for special cases of combinatorial algorithms such as satisfiability (SAT) problems (Wang et al., 2019) . use a differentiable variant of the K-means algorithm to approximate a harder combinatorial problem (e.g. graph optimization). Such relaxations of the original combinatorial problem may lead to sub-optimal results. Recently, Vlastelica et al. 2019proposed a novel technique of differentiating combinatorial solvers as a blackbox without any relaxation that allows us to use an optimal combinatorial algorithm as a component of a deep representation learning model and optimize it end-to-end. We give a brief background of their approach in the following section.",
"cite_spans": [
{
"start": 290,
"end": 309,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Blackbox backpropagation. In their approach to optimize for a combinatorial function Vlastelica et al. (2019) formalize combinatorial solvers as a mapping function between continuous input, w \u2208 W \u2286 R N and discrete output,\u0177 \u2208 Y as w \u2192\u0177 such that the output y = arg min y\u2208Y c(w, y) where c is the cost that the solver tries to minimize. Here W is the N -dimensional continuous input space and Y is a finite set of all possible solutions. For a linear cost function c, a continuous interpolation of the original cost function is constructed and the gradient of this interpolation is used during backpropagation. The closeness of the interpolation to the original function is controlled by a single hyperparameter, \u03bb. In our work, we extend this approach for clustering framework to draw the supervision signals directly from the clustering results and learn our model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
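For readers who want to see the mechanics of this blackbox differentiation, the following is a minimal PyTorch sketch of the idea (not the authors' released code): the forward pass calls the solver as-is, and the backward pass re-solves on a \u03bb-perturbed input and returns the interpolation gradient. The `BlackboxSolver` name and the `solver` callable (e.g. HAC on a distance matrix, returning an adjacency matrix of the same shape) are illustrative assumptions.

```python
import torch

class BlackboxSolver(torch.autograd.Function):
    """Blackbox differentiation of a combinatorial solver (after
    Vlastelica et al., 2019). The solver is treated as a black box in
    the forward pass; the backward pass re-solves on a perturbed input."""

    @staticmethod
    def forward(ctx, weights, solver, lmbda):
        # `solver` maps a continuous tensor (e.g. a distance matrix) to a
        # discrete tensor of the same shape (e.g. a cluster adjacency matrix).
        y_hat = solver(weights.detach())
        ctx.solver, ctx.lmbda = solver, lmbda
        ctx.save_for_backward(weights, y_hat)
        return y_hat

    @staticmethod
    def backward(ctx, grad_output):
        weights, y_hat = ctx.saved_tensors
        # Perturb the solver input in the direction of the incoming gradient
        # and solve the combinatorial problem once more.
        y_lambda = ctx.solver((weights + ctx.lmbda * grad_output).detach())
        # Gradient of the piecewise-linear interpolation of the solver.
        grad_weights = -(y_hat - y_lambda) / ctx.lmbda
        return grad_weights, None, None
```

A call such as `A = BlackboxSolver.apply(D, hac_solver, lmbda)` would then make the discrete clustering step usable inside a gradient-based training loop.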
{
"text": "Our text clustering method works in two steps: 1. Train a text representation model directly from example clusters of text snippets, 2. Cluster the trained embedding vectors using hierarchical agglomerative clustering (HAC). Our primary con-tribution lies in the training strategy of step 1 which we refer here as Clustering Optimization as Blackbox (COB). We describe COB in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Supervised text clustering is a combinatorial problem. Let P be a set of N documents and Y be the set of all possible k-partitions of set P. Also let V \u03c6 be a representation model with trainable parameters \u03c6. We obtain the set of representation vectors V \u03c6 (P) for each of the documents in set P using the model, V \u03c6 . Based on the Euclidean distances between representation vectors in V \u03c6 (P), a clustering algorithm chooses a particular k-partition y \u2208 Y that minimizes some linear cost function c(V \u03c6 (P), y) e.g. intra-cluster distances for HAC. Hence the clustering process can be expressed as the following mapping:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Approach",
"sec_num": "3.1"
},
{
"text": "V \u03c6 (P) \u2192\u0177 such that\u0177 = arg min y\u2208Y c(V \u03c6 (P), y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Approach",
"sec_num": "3.1"
},
{
"text": "The clustering ground truth y * \u2208 Y is the correct kpartition of set P. The training process of COB is governed by a loss function L(y * ,\u0177) that optimizes a clustering evaluation metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Approach",
"sec_num": "3.1"
},
{
"text": "However, we want to emphasize here that the minimization of the cost function c(V \u03c6 (P), y) takes place inside the clustering algorithm and remains opaque for our supervised model. As a result, COB is not dependent on the exact clustering algorithm we choose. In this work however, we choose to use HAC as our clustering algorithm. We optimize for RAND index in this work but our method can be applied to optimize for other clustering evaluation metrics as well (e.g. purity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Approach",
"sec_num": "3.1"
},
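As a concrete illustration of step 2 of the pipeline (clustering the trained embeddings with HAC on Euclidean distances), a minimal SciPy sketch is given below; the 'average' linkage is an assumption, since the text only states that HAC is used.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def hac_clusters(embeddings: np.ndarray, k: int) -> np.ndarray:
    """Cluster embedding vectors V_phi(P) into k clusters with
    hierarchical agglomerative clustering on Euclidean distances."""
    condensed = pdist(embeddings, metric="euclidean")        # pairwise distances
    dendrogram = linkage(condensed, method="average")        # HAC merge tree
    return fcluster(dendrogram, t=k, criterion="maxclust")   # flat k-partition
```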
{
"text": "Our goal is to train the representation model, V \u03c6 , such that the resulting clusters maximize a clustering evaluation metric of our choice. In this work, we focus on optimizing for RAND index, a widely used clustering metric, which measures the similarity between the generated clusters and the clustering ground truth. If y * \u2208 Y be the ground truth partition or the ideal clustering of P, then the clustering quality of a candidate cluster\u0177 is expressed in terms of RAND index (RI):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing for RAND index",
"sec_num": "3.2"
},
{
"text": "RI =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing for RAND index",
"sec_num": "3.2"
},
{
"text": "No. of unordered data pairs that agrees between y * and\u0177 n 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing for RAND index",
"sec_num": "3.2"
},
{
"text": "where n = total number of data samples. Embedding model with trainable parameters \u03c6 V \u03c6 (P) Representation vectors of P obtained using V \u03c6 D Pairwise distance matrix of vectors in V \u03c6 (P) A Adjacency matrix denoting clustering result T Adjacency matrix denoting ground truth clusters Figure 1 and Table 1 presents the overall training approach. The focus of the training loop is to train the representation model V \u03c6 . First, the set of representation vectors V \u03c6 (P) is obtained for all documents in set P. Then we encode the input to the clustering algorithm as a square symmetric matrix D with pairwise Euclidean distance scores between vectors in V \u03c6 (P).",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 297,
"end": 304,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Optimizing for RAND index",
"sec_num": "3.2"
},
{
"text": "D ij = ||V \u03c6 (p i ) \u2212 V \u03c6 (p j )|| 2 where p i , p j \u2208 P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
{
"text": "The solution to the clustering problem is expressed in form of an adjacency matrix A such that A ij = 1 if i, j share same cluster and 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
{
"text": "We denote the adjacency matrix of the clustering ground truth as T . Now, we can express RI using the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
{
"text": "RI = 1 \u2212 ij |A ij \u2212 T ij | 2 n 2 see Appendix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
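The adjacency-matrix form of RI above translates directly into code; the following is a small NumPy sketch (illustrative, not the paper's implementation).

```python
import numpy as np

def rand_index(A: np.ndarray, T: np.ndarray) -> float:
    """RI = 1 - (sum_ij |A_ij - T_ij| / 2) / (n choose 2), where A and T
    are the predicted and ground-truth cluster adjacency matrices."""
    n = A.shape[0]
    disagreements = np.abs(A - T).sum() / 2.0  # each unordered pair counted once
    return 1.0 - disagreements / (n * (n - 1) / 2.0)
```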
{
"text": "It is clear from the above equation that if we want to maximize RI, we need to minimize the difference between A and T . Intuitively, if we are able to produce ideal clustering results, then A and T would be identical, meaning A \u2212 T is a zero matrix. Hence, we define our loss function L as the sum of A \u2212 T . Formally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
{
"text": "L = ij |A ij \u2212 T ij |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
{
"text": "The backward pass of this training loop involves estimating the gradient of the loss L with respect to the distance matrix D, the input to the clustering algorithm. This is achieved using blackbox backpropagation technique and the resulting gradient is used to drive a gradient descent algorithm for training the representation model V \u03c6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COB Training Loop",
"sec_num": "3.3"
},
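Putting the pieces together, one COB training step could look like the sketch below (assuming PyTorch; `hac_solver` is a hypothetical wrapper that runs HAC through a blackbox-backpropagation autograd function as in the background section and returns the adjacency matrix A).

```python
import torch

def cob_training_step(model, docs, T, hac_solver, optimizer):
    """One COB step: embed the clustering instance, build the pairwise
    distance matrix D, cluster it, and minimize L = sum_ij |A_ij - T_ij|."""
    optimizer.zero_grad()
    V = model(docs)                 # (N, dim) embeddings V_phi(P)
    D = torch.cdist(V, V, p=2)      # pairwise Euclidean distance matrix D
    A = hac_solver(D)               # discrete adjacency matrix from HAC (blackbox)
    loss = torch.abs(A - T).sum()   # RAND-index-based loss
    loss.backward()                 # approximate gradient flows back through D into V
    optimizer.step()
    return loss.item()
```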
{
"text": "The purpose of any clustering algorithm is to identify groups of similar data points. By optimizing for a clustering metric such as RI, we learn a notion of similarity that most likely yields the ground truth clusters when used in HAC. However, we want to encourage a large margin between similar and dissimilar data points. This is achieved when the loss function encourages inter-cluster distances to increase and intra-cluster distances to decrease. While this is part of the optimization process within the clustering algorithm, it is opaque during neural network training, due to the blackbox optimization technique. The clustering evaluation metric does not encourage a margin that is larger than necessary. Hence we incorporate a measure of intra versus inter-cluster distance as a regularizer in our optimization criterion as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "3.4"
},
{
"text": "L r = L + r \u2022 [mean intra-cluster distance \u2212 mean inter-cluster distance] = L + r \u2022 ij D ij T ij ij T ij intra-cluster \u2212 ij D ij (1 \u2212 T ij ) ij (1 \u2212 T ij ) inter-cluster",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "3.4"
},
{
"text": "where r is the regularization constant",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "3.4"
},
{
"text": "The regularization constant r controls how much emphasis is placed on increasing the margin between similar and dissimilar data points versus optimizing the clustering evaluation metric. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "3.4"
},
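A possible implementation of the regularized objective L_r, following the formula above (a sketch under the assumption that A, T and D are PyTorch tensors):

```python
import torch

def regularized_loss(A, T, D, r):
    """L_r = sum_ij |A_ij - T_ij| + r * (mean intra-cluster distance
    - mean inter-cluster distance), with clusters taken from the
    ground-truth adjacency matrix T."""
    clustering_loss = torch.abs(A - T).sum()
    intra = (D * T).sum() / T.sum()              # mean distance within ground-truth clusters
    inter = (D * (1 - T)).sum() / (1 - T).sum()  # mean distance across clusters
    return clustering_loss + r * (intra - inter)
```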
{
"text": "In this section, we describe the datasets used for our experiments, discuss our evaluation paradigm and present experimental results that demonstrate efficacy of the representation model trained using our proposed training strategy over our baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "To evaluate our proposed approach, we use two publicly available datasets: 20 newsgroups (20NG 2 ) and TREC Complex Answer Retrieval (CAR 3 ). As discussed earlier, for our proposed method, each training example consists of the ideal clustering of a set of documents. To produce enough such training samples, we choose to train and evaluate on multiple smaller clustering instances instead of a single but large clustering instance. We note that it will not make any difference in the way our baseline model is trained because they consume the training data in form of triples (SBERT Triplet), as long as we ensure that all models are trained on the same set of clustering examples. We take the following approach to construct such clustering benchmarks from the datasets (detailed statistics are presented in Table 2) : 20NG dataset is a widely used public collection of 18846 documents, each categorized into any one of twenty topics. To convert this to a clustering benchmark, both train and test split of 20NG dataset is randomly grouped into sets of 50 documents along with their topic labels, resulting in 226 and 150 clustering instances respectively. Each set of 50 documents represents a single instance of clustering problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 810,
"end": 818,
"text": "Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
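For illustration, the 20NG grouping described above could be constructed roughly as follows (a sketch; the random seed and the helper name are not from the paper).

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups

def make_20ng_instances(subset="train", group_size=50, seed=0):
    """Shuffle a 20NG split and cut it into groups of 50 documents; each
    group, together with its topic labels, is one clustering instance."""
    data = fetch_20newsgroups(subset=subset)
    order = np.random.default_rng(seed).permutation(len(data.data))
    instances = []
    for start in range(0, len(order) - group_size + 1, group_size):
        idx = order[start:start + group_size]
        instances.append(([data.data[i] for i in idx],
                          [int(data.target[i]) for i in idx]))
    return instances
```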
{
"text": "CAR dataset (version 2.0 year 1) is a large collection of Wikipedia articles. Each article consists of text passages about a topic, segmented into hierarchical subtopics using sections. From the CAR dataset, we use train.v2.0 as train split (CAR train) and benchmarkY1test as test split (CAR test). This dataset is originally designed for a passage retrieval task where passages in CAR articles are relevant for different sections under the overarching topic of the article. This relevance information is part of the dataset in form of the ground truth. We assume that all relevant passages for an article are already retrieved and our focus is to cluster these passages. So each article is a separate clustering problem where our task is to cluster all the passages of the article such that passages from same sections in the original article share the same cluster. We treat the section label under which a passage appears as the clustering label of the passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Section labels in CAR dataset are hierarchical. This provides an opportunity to evaluate our clustering models under different levels of granularity. As depicted in Figure 2 , passages p 6 and p 7 in article COVID 19 belong to the sections Cause and Cause/Transmission respectively. For a coarsegrained view of the clustering, we consider p 6 , p 7 under the same topic cluster Cause. However, for fine-grained clustering we have to consider p 6 , p 7 under separate subtopic clusters. The CAR dataset provides both in form of top-level (coarse) and hierarchical (fine-grained) benchmarks. We train and evaluate our models on both flavors of the dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Our primary focus is to evaluate the efficacy of our proposed training strategy for supervised clustering and compare it with other training methods while ensuring the fairness of our evaluation. Hence, we train the same text embedding model with the same training data differing only in the training strate-gies. For the embedding model, we use Sentence-BERT (Reimers and Gurevych, 2019) , a recent BERT-based embedding model. Finally, macroaverage performance on all clustering instances on the test sets are reported with statistical significance testing. We use three clustering evaluation metrics, RAND index (RI), Adjusted RAND index (ARI) and Normalized Mutual Information (NMI).",
"cite_spans": [
{
"start": 360,
"end": 388,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
{
"text": "Compared methods. In this section we discuss all the methods which are compared in our experiments. All methods are trained until no significant improvement is observed on the validation set. For each method, models are saved on regular interval and we use the best model found during training in terms of validation ARI score to evaluate on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
{
"text": "SBERT COB. We train Sentence-BERT with our proposed training strategy and refer the obtained model as SBERT COB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
{
"text": "SBERT Triplet. To compare our approach with a strong supervised baseline, we train Sentence-BERT with Triplet loss function (Dor et al., 2018) . It is designed to generate document representations that capture topical similarities. Here, each training example consists of two similar (d, d + ) and one dissimilar (d \u2212 ) documents. Triplet loss trains the document representation model V trip so that the Euclidean distance between the similar pair of representations ||V trip (d)",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "(Dor et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
{
"text": "\u2212 V trip (d + )|| 2 is less than the negative pair ||V trip (d) \u2212 V trip (d \u2212 )|| 2 by at least a margin . L triplet = max(0, ||V trip (d) \u2212 V trip (d + )|| 2 \u2212 ||V trip (d) \u2212 V trip (d \u2212 )|| 2 + )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
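For reference, the Triplet loss baseline described above corresponds to the following standard formulation (a sketch; the margin value is illustrative):

```python
import torch

def triplet_loss(v_anchor, v_pos, v_neg, margin=1.0):
    """max(0, ||v(d) - v(d+)||_2 - ||v(d) - v(d-)||_2 + margin),
    averaged over a batch of triplets."""
    pos_dist = torch.norm(v_anchor - v_pos, p=2, dim=-1)
    neg_dist = torch.norm(v_anchor - v_neg, p=2, dim=-1)
    return torch.clamp(pos_dist - neg_dist + margin, min=0.0).mean()
```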
{
"text": "Unsupervised baselines. To compare the performances of unsupervised clustering approaches for our use cases, we also include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
{
"text": "1. SBERT raw, the pre-trained Sentence-BERT model without any finetuning and 2. TFIDF with cosine similarity as a more canonical approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Paradigm",
"sec_num": "4.2"
},
{
"text": "The interpolation parameter \u03bb (Section 2) and regularization constant r (Section 3.4) are two hyperparameters we have to tune in SBERT COB. We use Optuna (Akiba et al., 2019) , a recently proposed hyperparameter optimization framework, to search for optimum \u03bb, r pair in terms of validation performance for each dataset. Table 3 presents the optimum hyperparameter values used for our experiments.",
"cite_spans": [
{
"start": 154,
"end": 174,
"text": "(Akiba et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Hyperparameter Optimization",
"sec_num": "4.3"
},
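The hyperparameter search could be set up with Optuna roughly as below; the search ranges and the `train_and_validate` helper (which would train SBERT COB with the sampled values and return validation ARI) are assumptions for illustration.

```python
import optuna

def objective(trial):
    lmbda = trial.suggest_float("lambda", 1.0, 500.0, log=True)  # interpolation parameter
    r = trial.suggest_float("r", 0.0, 1.0)                       # regularization constant
    return train_and_validate(lmbda, r)  # hypothetical helper returning validation ARI

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```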
{
"text": "Here we present details of all the experiments carried out and discuss the results. All experiments are executed on a single NVIDIA Titan XP GPU with 12GB memory. For all the SBERT models, we use uncased DistilBERT (Sanh et al., 2019) as the underlying BERT embedding model.",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Evaluation",
"sec_num": "4.4"
},
{
"text": "We train SBERT COB and other supervised methods using 80% of the train split of 20NG dataset and the remainder is held out for validation. Table 4 presents the performance on the test set evaluated using mean RI, ARI and NMI. We observe that our proposed method SBERT COB outperforms all other baselines in terms of RI, ARI and NMI. For ARI and NMI, the improvement is statistically significant in terms of paired t-test with \u03b1 = 0.05 carried out with respect to the best performing baseline, SBERT Triplet. Both TFIDF and SBERT raw fail to obtain meaningful clusters, demonstrating the efficacy of supervised representation models in clustering context.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment 1: 20NG",
"sec_num": "4.4.1"
},
{
"text": "Due to large size of the CAR training split (train.v2.0), it is impractical to train SBERT Triplet with all possible triplets in the training set. Instead, we compare the supervised models trained on three smaller subsets of the training dataset. Each subset contains articles with exactly n passages where n = 30, 35 and 40. However, they are always evaluated on the same CAR test set. These values of n are chosen so that we obtain reasonable numbers of training samples while their statistics remain close to the CAR test set on which we are evaluating. Table 5 presents statistics about these three training subsets. We report the coarse and fine-grained clustering performance in Table 6 and Table 7 respectively. For both coarse and fine-grained clustering, we observe that for each of the training splits (n = 30, 35, 40), our proposed method SBERT COB consistently performs better than the best performing baseline, SBERT Triplet (n = 30) in terms of both ARI and NMI. As expected, clustering performance in terms of RI score mostly correlates with ARI score. The only exception is SBERT ",
"cite_spans": [],
"ref_spans": [
{
"start": 557,
"end": 564,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 685,
"end": 704,
"text": "Table 6 and Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment 2: CAR",
"sec_num": "4.4.2"
},
{
"text": "Existing methods for learning clustering representation spaces, focus solely on classifying individual pairs as similar or different, and hence ignore to which extent other data points already form clusters. The key difference in our work is that we learn the representation space to directly optimize for the clustering evaluation metric, which is based on the clustering results of HAC when used with pairwise Euclidean distances. This allows the model to reach convergence much faster, leading to reduced overall training time, when compared to other methods that uses only a sub-sample of each clustering example (e.g. Triplets). This is particularly helpful in scenarios when we want to regularly update our model to incorporate new training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 3: Training Convergence",
"sec_num": "4.4.3"
},
{
"text": "To demonstrate this we present Figure 5 that compares the time taken to reach convergence during training of SBERT Triplet and SBERT COB on 20NG dataset and CAR dataset (coarse n = 35) respectively. For both the datasets, SBERT COB is able to converge at least five times sooner than SBERT Triplet, leading to much faster overall training time. Moreover, for NG20 dataset each epoch of SBERT COB is about 100 times faster than SBERT Triplet. This leads to decrease in overall training time even though SBERT COB takes many more epochs to converge than SBERT Triplet. We observe similar training behaviour for CAR dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 3: Training Convergence",
"sec_num": "4.4.3"
},
{
"text": "Here, we demonstrate efficacy of SBERT COB over SBERT Triplet (n = 35) through visual comparison of clustering results from the CAR dataset. Principle Component Analysis (PCA) is used to transform the representation vectors into 3D vectors which are then visualized as points in 3D vector space. Figure 4 compares the results obtained for four articles from CAR test split.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 304,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Qualitative Evaluation",
"sec_num": "4.5"
},
{
"text": "For articles Anti-slavery International and Hybrid Electric Vehicle, SBERT COB is able to clearly identify clusters of different topics and projects them in different regions of the embedding space. On the contrary, it is difficult to find any clear cluster boundaries in the SBERT Triplet representation space which is also reflected in the ARI scores obtained by the methods. For the article Coffee Preparation, both the methods perform poorly in terms of ARI scores. But in case of SBERT COB we see a tendency to separate dissimilar passages. SBERT Triplet projects almost all the passages in a dense region except for a few outlier passages. For the article Hot Chocolate, SBERT Triplet obtains numerous small clusters of similar passages. As ARI metric is based on sample-pairs, SBERT Triplet obtains better ARI score even though it does not achieve clear groupings of similar elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Evaluation",
"sec_num": "4.5"
},
{
"text": "It is clear from the examples that SBERT COB provides better global clustering quality than SBERT Triplet. This is expected because unlike SBERT Triplet, SBERT COB observes the relationships between all passages in a clustering instance at once to directly optimize for RAND index. Hence, SBERT COB is able to make better global clustering decisions than other pair-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Evaluation",
"sec_num": "4.5"
},
{
"text": "As SBERT COB learns from all possible interactions of data points in a clustering instance at once, it requires all the adjacency matrices in a batch of clustering samples to fit in memory. Thus the space complexity increases quadratically with the size of each clustering instance. Hence, the batch size is kept small to allow training with a limited GPU memory. However, even with batch size of 1, SBERT COB is observed to obtain superior results in terms of training speed and clustering performance as reported earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quadratic Scaling of SBERT COB",
"sec_num": "4.6"
},
{
"text": "In this work, we propose an alternative training strategy to train a representation model, for clustering. Our training strategy, COB (Clustering Optimization as Blackbox), directly optimizes the RAND index, a clustering evaluation metric. Using our method, we train SBERT COB, a BERT-based text representation model. We empirically show that SBERT COB significantly outperforms other supervised and unsupervised text embedding model on two separate datasets in terms of RI, ARI and NMI, indicating better cluster quality. Visual representations of the resulting vectors also confirm that SBERT COB learns to holistically distinguish clusters of different topics. Moreover, each epoch in SBERT COB training loop is about 100 times faster when compared to SBERT Triplet, our best performing baseline method. This leads to a significant decrease in overall training time even though SBERT COB requires more iterations to converge than SBERT Triplet. This makes SBERT COB suitable for applications that require clustering models to be updated on a regular basis as new training samples become available. Lastly, although we have conducted experiments with a specific clustering algorithm (HAC) and a clustering metric to optimize (RAND index), our model is independent of the particular choice of algorithm or the metric. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "= 2 n 2 \u2212 ij |A ij \u2212 T ij | 2 n 2 = 1 \u2212 ij |A ij \u2212 T ij | 2 n 2 B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Comparison of Epoch Time Figure 5 shows the mean epoch time of SBERT Triplet and SBERT COB on 20NG dataset and CAR dataset (coarse n = 35) respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The source code is available at https://github. com/nihilistsumo/Blackbox_clustering",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Part of scikit-learn datasetsPedregosa et al. (2011) 3 http://trec-car.cs.unh.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Given a set of n data points P, let us compare two clustering results of P, C T and C A , in terms of RAND index. We know that RAND index is expressed as:where a = number of pairs that share the same cluster both in C T and C A where b = number of pairs that are from different clusters both in C T and C A Now we can express any clustering result C M in form of an adjacency matrix M where M ij = 1 if the i, j-th data points in P share the same cluster in C M and M ij = 0 otherwise. We represent the clustering results C T and C A with such adjacency matrices T and A respectively. Also, the difference matrix of A, T denoted as |A \u2212 T | indicates the ordered pairs that do not agree between A, T . In other words, |A ij \u2212 T ij | = 1 denotes that the i, j-th data points do not agree between A and T . Now, we can express RAND index in terms of A and T as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Relation between RAND index and Adjacency matrix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Optuna: A next-generation hyperparameter optimization framework",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Akiba",
"suffix": ""
},
{
"first": "Shotaro",
"middle": [],
"last": "Sano",
"suffix": ""
},
{
"first": "Toshihiko",
"middle": [],
"last": "Yanase",
"suffix": ""
},
{
"first": "Takeru",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Masanori",
"middle": [],
"last": "Koyama",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Op- tuna: A next-generation hyperparameter optimiza- tion framework. In Proceedings of the 25rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network",
"authors": [
{
"first": "L",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bartlett",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE transactions on Information Theory",
"volume": "44",
"issue": "2",
"pages": "525--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter L Bartlett. 1998. The sample complexity of pat- tern classification with neural networks: the size of the weights is more important than the size of the network. IEEE transactions on Information Theory, 44(2):525-536.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semi-supervised clustering by seeding",
"authors": [
{
"first": "Sugato",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "Arindam",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 19th International Conference on Machine Learning (ICML-2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sugato Basu, Arindam Banerjee, and Raymond Mooney. 2002. Semi-supervised clustering by seed- ing. In In Proceedings of 19th International Confer- ence on Machine Learning (ICML-2002. Citeseer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Shiming Xiang, and Chunhong Pan",
"authors": [
{
"first": "Jianlong",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lingfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Gaofeng",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "5879--5887",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. 2017. Deep adaptive image clustering. In Proceedings of the IEEE international conference on computer vision, pages 5879-5887.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improved tfidf in big news retrieval: An empirical study",
"authors": [
{
"first": "Chien-Hsing",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "Pattern Recognition Letters",
"volume": "93",
"issue": "",
"pages": "113--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chien-Hsing Chen. 2017. Improved tfidf in big news retrieval: An empirical study. Pattern Recognition Letters, 93:113-122.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Scatter/gather: A clusterbased approach to browsing large document collections",
"authors": [
{
"first": "",
"middle": [],
"last": "Douglass R Cutting",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O"
],
"last": "Karger",
"suffix": ""
},
{
"first": "John",
"middle": [
"W"
],
"last": "Pedersen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tukey",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM SIGIR Forum",
"volume": "51",
"issue": "",
"pages": "148--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglass R Cutting, David R Karger, Jan O Pedersen, and John W Tukey. 2017. Scatter/gather: A cluster- based approach to browsing large document collec- tions. In ACM SIGIR Forum, volume 51, pages 148- 159. ACM New York, NY, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning thematic similarity metric from article sections using triplet networks",
"authors": [
{
"first": "Yosi",
"middle": [],
"last": "Liat Ein Dor",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Mass",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Halfon",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Venezian",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Shnayderman",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liat Ein Dor, Yosi Mass, Alon Halfon, Elad Venezian, Ilya Shnayderman, Ranit Aharonov, and Noam Slonim. 2018. Learning thematic similarity metric from article sections using triplet networks. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 49-54.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization",
"authors": [
{
"first": "Amirhossein",
"middle": [],
"last": "Kamran Ghasedi Dizaji",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Herandi",
"suffix": ""
},
{
"first": "Weidong",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "5736--5745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamran Ghasedi Dizaji, Amirhossein Herandi, Cheng Deng, Weidong Cai, and Heng Huang. 2017. Deep clustering via joint convolutional autoencoder em- bedding and relative entropy minimization. In Pro- ceedings of the IEEE international conference on computer vision, pages 5736-5745.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A self-training approach for short text clustering",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Hadifar",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Sterckx",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Develder",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "194--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Hadifar, Lucas Sterckx, Thomas Demeester, and Chris Develder. 2019. A self-training approach for short text clustering. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 194-199.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Variational deep embedding: An unsupervised and generative approach to clustering",
"authors": [
{
"first": "Zhuxi",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Huachun",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bangsheng",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Hanning",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.05148"
]
},
"num": null,
"urls": [],
"raw_text": "Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. 2016. Variational deep embedding: An unsupervised and gener- ative approach to clustering. arXiv preprint arXiv:1611.05148.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A statistical interpretation of term specificity and its application in retrieval",
"authors": [
{
"first": "Karen Sparck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1972,
"venue": "Journal of documentation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semi-supervised graph clustering: a kernel approach",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Kulis",
"suffix": ""
},
{
"first": "Sugato",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "Inderjit",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine learning",
"volume": "74",
"issue": "1",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Kulis, Sugato Basu, Inderjit Dhillon, and Ray- mond Mooney. 2009. Semi-supervised graph clus- tering: a kernel approach. Machine learning, 74(1):1-22.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Discriminatively boosted image clustering with fully convolutional auto-encoders",
"authors": [
{
"first": "Fengfu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Pattern Recognition",
"volume": "83",
"issue": "",
"pages": "161--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fengfu Li, Hong Qiao, and Bo Zhang. 2018. Dis- criminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition, 83:161-173.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Proximity-based k-partitions clustering with ranking for document categorization and analysis. Expert systems with applications",
"authors": [
{
"first": "Jian-Ping",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Lihui",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "41",
"issue": "",
"pages": "7095--7105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian-Ping Mei and Lihui Chen. 2014. Proximity-based k-partitions clustering with ranking for document categorization and analysis. Expert systems with ap- plications, 41(16):7095-7105.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A survey of clustering with deep learning: From the perspective of network architecture",
"authors": [
{
"first": "Erxue",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianjing",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Long",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "39501--39514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erxue Min, Xifeng Guo, Qiang Liu, Gen Zhang, Jian- jing Cui, and Jun Long. 2018. A survey of clustering with deep learning: From the perspective of network architecture. IEEE Access, 6:39501-39514.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Inducing word senses to improve web search result clustering",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Crisafulli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Giuseppe Crisafulli. 2010. Induc- ing word senses to improve web search result clus- tering. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 116-126.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10084"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Topical clustering of search results",
"authors": [
{
"first": "Ugo",
"middle": [],
"last": "Scaiella",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Ferragina",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Marino",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the fifth ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "223--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ugo Scaiella, Paolo Ferragina, Andrea Marino, and Massimiliano Ciaramita. 2012. Topical clustering of search results. In Proceedings of the fifth ACM inter- national conference on Web search and data mining, pages 223-232, New York, NY, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Large scale opinion mining for social, news and blog data",
"authors": [
{
"first": "Nikos",
"middle": [],
"last": "Tsirakis",
"suffix": ""
},
{
"first": "Vasilis",
"middle": [],
"last": "Poulopoulos",
"suffix": ""
},
{
"first": "Panagiotis",
"middle": [],
"last": "Tsantilas",
"suffix": ""
},
{
"first": "Iraklis",
"middle": [],
"last": "Varlamis",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Systems and Software",
"volume": "127",
"issue": "",
"pages": "237--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikos Tsirakis, Vasilis Poulopoulos, Panagiotis Tsanti- las, and Iraklis Varlamis. 2017. Large scale opinion mining for social, news and blog data. Journal of Systems and Software, 127:237-248.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Differentiation of blackbox combinatorial solvers",
"authors": [
{
"first": "Marin",
"middle": [],
"last": "Vlastelica",
"suffix": ""
},
{
"first": "Anselm",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "V\u00edt",
"middle": [],
"last": "Musil",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Martius",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Rol\u00ednek",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.02175"
]
},
"num": null,
"urls": [],
"raw_text": "Marin Vlastelica, Anselm Paulus, V\u00edt Musil, Georg Martius, and Michal Rol\u00ednek. 2019. Differentiation of blackbox combinatorial solvers. arXiv preprint arXiv:1912.02175.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver",
"authors": [
{
"first": "Po-Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Priya",
"middle": [],
"last": "Donti",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Wilder",
"suffix": ""
},
{
"first": "Zico",
"middle": [],
"last": "Kolter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "6545--6554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. 2019. Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiabil- ity solver. In International Conference on Machine Learning, pages 6545-6554. PMLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "End to end learning and optimization on graphs",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Wilder",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Ewing",
"suffix": ""
},
{
"first": "Bistra",
"middle": [],
"last": "Dilkina",
"suffix": ""
},
{
"first": "Milind",
"middle": [],
"last": "Tambe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.13732"
]
},
"num": null,
"urls": [],
"raw_text": "Bryan Wilder, Eric Ewing, Bistra Dilkina, and Milind Tambe. 2019. End to end learning and optimization on graphs. arXiv preprint arXiv:1905.13732.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Unsupervised deep embedding for clustering analysis",
"authors": [
{
"first": "Junyuan",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2016,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "478--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyuan Xie, Ross Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analy- sis. In International conference on machine learn- ing, pages 478-487. PMLR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Self-taught convolutional neural networks for short text clustering",
"authors": [
{
"first": "Jiaming",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Suncong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Guanhua",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Neural Networks",
"volume": "88",
"issue": "",
"pages": "22--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, and Jun Zhao. 2017. Self-taught con- volutional neural networks for short text clustering. Neural Networks, 88:22-31.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Figure 5: Comparison between SBERT COB and SBERT Triplet in terms of epoch time",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Figure 5: Comparison between SBERT COB and SBERT Triplet in terms of epoch time.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Towards k-means-friendly spaces: Simultaneous deep learning and clustering",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"D"
],
"last": "Sidiropoulos",
"suffix": ""
},
{
"first": "Mingyi",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2017,
"venue": "international conference on machine learning",
"volume": "",
"issue": "",
"pages": "3861--3870",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. 2017. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In international conference on machine learning, pages 3861-3870. PMLR.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Joint unsupervised learning of deep representations and image clusters",
"authors": [
{
"first": "Jianwei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "5147--5156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianwei Yang, Devi Parikh, and Dhruv Batra. 2016. Joint unsupervised learning of deep representations and image clusters. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 5147-5156.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Training loop of our proposed supervised clustering approach."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Coarse and fine-grained clustering benchmarks from CAR dataset."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Comparison between SBERT COB and SBERT Triplet in terms of total training time."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Visual comparison of clustering results between SBERT COB and SBERT Triplet (n = 35). Each dot denotes a passage from an article projected into the representation space after applying PCA. Different color denotes different subtopics. Clear separation of different colored blobs indicates good clustering quality."
},
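The caption above describes projecting passage embeddings to two dimensions with PCA and coloring points by subtopic. Below is a minimal sketch of how such a plot could be produced; the array `emb`, the label vector `labels`, and the helper name `plot_clusters` are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of the PCA-based cluster visualization described above.
# `emb` is assumed to be an (n, d) array of passage embeddings and `labels`
# the subtopic id of each passage; both names are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_clusters(emb: np.ndarray, labels: np.ndarray, title: str) -> None:
    # Project the embeddings onto their first two principal components.
    points = PCA(n_components=2).fit_transform(emb)
    plt.scatter(points[:, 0], points[:, 1], c=labels, cmap="tab20", s=12)
    plt.title(title)
    plt.xticks([])
    plt.yticks([])
    plt.show()

# Toy usage with random data standing in for SBERT embeddings.
rng = np.random.default_rng(0)
plot_clusters(rng.normal(size=(100, 768)), rng.integers(0, 6, size=100),
              "PCA projection colored by subtopic")
```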
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "unordered pairs in P that agrees be-tween C T , C A n No. of ordered pairs in P that agrees between C T , C A 2 n Total ordered pairs in P \u2212 ij |A ij \u2212 T ij | 2 n 2"
},
"FIGREF5": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Comparison between SBERT COB and SBERT Triplet in terms of epoch time."
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "Description of variables used inFigure 1.",
"html": null,
"content": "<table><tr><td colspan=\"2\">Variable Description</td></tr><tr><td>P</td><td>Set of documents to be clustered</td></tr><tr><td>V \u03c6</td><td/></tr></table>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Dataset statistics: N = total no. of documents, C = total no. of clustering instances, n = average number of documents per clustering instance, k = average number of clusters per clustering instance.",
"html": null,
"content": "<table><tr><td>Dataset</td><td>N</td><td>C</td><td>n</td><td>k</td><td/></tr><tr><td colspan=\"3\">20NG train 11314 226</td><td>50</td><td>18</td><td/></tr><tr><td>20NG test</td><td colspan=\"2\">7532 150</td><td>50</td><td>18</td><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">k(coarse) k(fine)</td></tr><tr><td>CAR train</td><td colspan=\"3\">6.8M 597K 11</td><td>3.84</td><td>5.04</td></tr><tr><td>CAR test</td><td>6K</td><td>126</td><td>47</td><td>7.78</td><td>17.16</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "Optimum values for interpolation parameter \u03bb and regularization constant r found using Optuna.",
"html": null,
"content": "<table><tr><td>Dataset</td><td>\u03bb</td><td>r</td></tr><tr><td>NG20</td><td>90.0</td><td>1.0</td></tr><tr><td>CAR coarse</td><td>47.0</td><td>3.8</td></tr><tr><td>CAR fine-grained</td><td>103.0</td><td>0.3</td></tr></table>"
},
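The table above lists the interpolation parameter λ and regularization constant r selected with Optuna. A minimal sketch of such a search follows; the search ranges and the `train_and_score` stand-in (which would train the embedding model for a given (λ, r) and return a validation clustering score) are assumptions for illustration, not the authors' setup.

```python
# Hypothetical Optuna search over the interpolation parameter lambda and the
# regularization constant r, as reported in the table above.
import optuna

def train_and_score(lam: float, r: float) -> float:
    # Stand-in for training the embedding model with (lambda, r) and
    # returning a validation clustering score (e.g. ARI); dummy objective here.
    return -abs(lam - 50.0) - abs(r - 1.0)

def objective(trial: optuna.Trial) -> float:
    lam = trial.suggest_float("lambda", 1.0, 150.0)  # assumed search range
    r = trial.suggest_float("r", 0.1, 10.0)          # assumed search range
    return train_and_score(lam, r)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```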
"TABREF3": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Clustering performance on NG20 dataset in</td></tr><tr><td colspan=\"4\">terms of mean RAND index (RI), its corrected for</td></tr><tr><td colspan=\"4\">chance version Adjusted RAND Index (ARI) and mean</td></tr><tr><td colspan=\"4\">Normalized Mutual Information (NMI). Paired t-test</td></tr><tr><td colspan=\"4\">(\u03b1 = 0.05) is carried out with respect to SBERT Triplet</td></tr><tr><td>(denoted with *) and</td><td>and</td><td colspan=\"2\">denotes significantly</td></tr><tr><td colspan=\"2\">higher or lower performance.</td><td/><td/></tr><tr><td>Method</td><td>RI</td><td>ARI</td><td>NMI</td></tr><tr><td>SBERT COB</td><td>0.925</td><td>0.233</td><td>0.725</td></tr><tr><td>SBERT Triplet*</td><td>0.924</td><td>0.223</td><td>0.721</td></tr><tr><td>SBERT raw</td><td>0.754</td><td>0.041</td><td>0.582</td></tr><tr><td>TFIDF</td><td>0.624</td><td>0.008</td><td>0.506</td></tr></table>"
},
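The RI, ARI, and NMI scores reported in this and the following tables can be computed per clustering instance with scikit-learn (cited above); the label lists below are toy values for illustration only.

```python
# Illustrative computation of the reported clustering metrics for one
# clustering instance, given ground-truth and predicted cluster labels.
from sklearn.metrics import (
    rand_score,                    # RAND Index (RI)
    adjusted_rand_score,           # Adjusted RAND Index (ARI)
    normalized_mutual_info_score,  # Normalized Mutual Information (NMI)
)

true_labels = [0, 0, 1, 1, 2, 2]  # hypothetical ground-truth clustering
pred_labels = [0, 0, 1, 2, 2, 2]  # hypothetical predicted clustering

print("RI :", rand_score(true_labels, pred_labels))
print("ARI:", adjusted_rand_score(true_labels, pred_labels))
print("NMI:", normalized_mutual_info_score(true_labels, pred_labels))
```

Per-instance scores would then be averaged across clustering instances, and a paired t-test on the per-instance scores of two methods (e.g. with scipy.stats.ttest_rel) would give the significance markers described in the caption.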
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Dataset statistics: N, C, n, k denotes the same asTable 2, t denotes the total number of available triples to train SBERT Triplet method.",
"html": null,
"content": "<table><tr><td>Subset N</td><td colspan=\"5\">C k(coarse) k(fine) t(coarse) t(fine)</td></tr><tr><td colspan=\"2\">n=30 71K 2.4K</td><td>5.97</td><td>10.64</td><td>8.6M</td><td>5.8M</td></tr><tr><td colspan=\"2\">n=35 56K 1.6K</td><td>6.27</td><td>12.17</td><td>9.3M</td><td>5.9M</td></tr><tr><td colspan=\"2\">n=40 50K 1.2K</td><td>6.73</td><td colspan=\"2\">13.62 10.8M</td><td>6.5M</td></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Coarse-level clustering performance on CAR dataset using top-level benchmarks. Supervised models are trained with set of clustering examples each containing n passages. Paired t-test (\u03b1 = 0.05) is carried out with respect to SBERT Triplet (n = 30) and marked with *.",
"html": null,
"content": "<table><tr><td>Method</td><td>RI</td><td>ARI</td><td>NMI</td></tr><tr><td>Trained on n=30 subset</td><td/><td/><td/></tr><tr><td>SBERT COB</td><td>0.742</td><td>0.230</td><td>0.502</td></tr><tr><td>SBERT Triplet*</td><td>0.738</td><td>0.214</td><td>0.494</td></tr><tr><td>Trained on n=35 subset</td><td/><td/><td/></tr><tr><td>SBERT COB</td><td>0.744</td><td>0.236</td><td>0.512</td></tr><tr><td>SBERT Triplet</td><td>0.715</td><td>0.167</td><td>0.460</td></tr><tr><td>Trained on n=40 subset</td><td/><td/><td/></tr><tr><td>SBERT COB</td><td>0.726</td><td>0.231</td><td>0.514</td></tr><tr><td>SBERT Triplet</td><td>0.704</td><td>0.145</td><td>0.438</td></tr><tr><td>Unsupervised</td><td/><td/><td/></tr><tr><td>SBERT raw</td><td>0.563</td><td>0.101</td><td>0.406</td></tr><tr><td>TFIDF</td><td>0.544</td><td>0.072</td><td>0.375</td></tr></table>"
},
"TABREF6": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Fine-grained clustering performance on CAR</td></tr><tr><td colspan=\"4\">dataset using hierarchical benchmarks. Notations used</td></tr><tr><td>are same as in Table 6.</td><td/><td/><td/></tr><tr><td>Method</td><td>RI</td><td>ARI</td><td>NMI</td></tr><tr><td>Trained on n=30 subset</td><td/><td/><td/></tr><tr><td>SBERT COB</td><td>0.849</td><td>0.178</td><td>0.682</td></tr><tr><td>SBERT Triplet*</td><td>0.848</td><td>0.173</td><td>0.678</td></tr><tr><td>Trained on n=35 subset</td><td/><td/><td/></tr><tr><td>SBERT COB</td><td>0.837</td><td>0.163</td><td>0.672</td></tr><tr><td>SBERT Triplet</td><td>0.830</td><td>0.152</td><td>0.665</td></tr><tr><td>Trained on n=40 subset</td><td/><td/><td/></tr><tr><td>SBERT COB</td><td>0.832</td><td>0.154</td><td>0.666</td></tr><tr><td>SBERT Triplet</td><td>0.860</td><td>0.138</td><td>0.662</td></tr><tr><td>Unsupervised</td><td/><td/><td/></tr><tr><td>SBERT raw</td><td>0.796</td><td>0.130</td><td>0.646</td></tr><tr><td>TFIDF</td><td>0.788</td><td>0.110</td><td>0.631</td></tr><tr><td colspan=\"4\">Triplet trained on n = 40 for fine-grained cluster-</td></tr><tr><td colspan=\"4\">ing. However, we also observe overall decrease in</td></tr><tr><td colspan=\"4\">ARI scores for all methods in case of fine-grained</td></tr><tr><td colspan=\"4\">clustering. This is expected as fine-grained clus-</td></tr><tr><td colspan=\"4\">tering is a harder problem largely due to fewer</td></tr><tr><td colspan=\"4\">passage pairs sharing a cluster. Note that RI and</td></tr><tr><td colspan=\"4\">NMI measures are only comparable within table</td></tr><tr><td colspan=\"4\">because unlike ARI, it is not adjusted for chance.</td></tr></table>"
}
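The note above on comparability can be illustrated with a small example: for an unrelated random clustering, RI can still look high while the chance-corrected ARI stays near zero. The cluster counts below are illustrative assumptions, not figures from the paper.

```python
# Why ARI, unlike RI, is comparable across tables: random assignments keep
# ARI near 0 even though RI remains high when there are many clusters.
import numpy as np
from sklearn.metrics import rand_score, adjusted_rand_score

rng = np.random.default_rng(0)
true_labels = rng.integers(0, 18, size=500)  # ~18 clusters, illustrative
rand_labels = rng.integers(0, 18, size=500)  # unrelated random clustering

print("RI :", rand_score(true_labels, rand_labels))          # close to 0.9
print("ARI:", adjusted_rand_score(true_labels, rand_labels))  # close to 0.0
```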
}
}
}