{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:43:33.428916Z"
},
"title": "Regularized Graph Convolutional Networks for Short Text Classification",
"authors": [
{
"first": "Kshitij",
"middle": [],
"last": "Tayal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Minnesota Twin Cities",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Saurabh",
"middle": [],
"last": "Agrawal",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Rao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Xiaowei",
"middle": [],
"last": "Jia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Karthik",
"middle": [],
"last": "Subbian",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vipin",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Minnesota",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Short text classification is a fundamental problem in natural language processing, social network analysis, and e-commerce. The lack of structure in short text sequences limits the success of popular NLP methods based on deep learning. Simpler methods that rely on bag-of-words representations tend to perform on par with complex deep learning methods. To tackle the limitations of textual features in short text, we propose a Graph-regularized Graph Convolution Network (GR-GCN), which augments graph convolution networks by incorporating label dependencies in the output space. Our model achieves state-of-the-art results on both proprietary and external datasets, outperforming several baseline methods by up to 6%. Furthermore, we show that compared to baseline methods, GR-GCN is more robust to noise in textual features.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Short text classification is a fundamental problem in natural language processing, social network analysis, and e-commerce. The lack of structure in short text sequences limits the success of popular NLP methods based on deep learning. Simpler methods that rely on bag-of-words representations tend to perform on par with complex deep learning methods. To tackle the limitations of textual features in short text, we propose a Graph-regularized Graph Convolution Network (GR-GCN), which augments graph convolution networks by incorporating label dependencies in the output space. Our model achieves state-of-the-art results on both proprietary and external datasets, outperforming several baseline methods by up to 6%. Furthermore, we show that compared to baseline methods, GR-GCN is more robust to noise in textual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Short-text classification is a common problem in information retrieval (Ji et al., 2014) and has applications in several domains including e-commerce (Yu et al., 2012; Shen et al., 2009) , social media (Kateb and Kalita, 2015) , healthcare (Pestian et al., 2007) and cognitive-biometric recognition (Pokhriyal et al., 2016) . In this paper, we develop a short text classification technique for solving two problems relevant to product search on e-commerce platform: 1) Product Query Classification (PQC) -When the customer enters a free form query, it is important to understand their product type intent to recommend and advertise the relevant products. We classify customer search queries to one or more product types (e.g., shoe, televisions, skis), and 2) Product Title Classification (PTC) -We classify billions of product titles to one or more product categories. This is important for sellers to place their items in the correct product category and retrieve it when a customer queries it.",
"cite_spans": [
{
"start": 71,
"end": 88,
"text": "(Ji et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 150,
"end": 167,
"text": "(Yu et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 168,
"end": 186,
"text": "Shen et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 202,
"end": 226,
"text": "(Kateb and Kalita, 2015)",
"ref_id": "BIBREF8"
},
{
"start": 240,
"end": 262,
"text": "(Pestian et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 299,
"end": 323,
"text": "(Pokhriyal et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike traditional text classification, classifying short-texts poses additional challenges. First, short texts in e-commerce typically involve sentences with an average length of 3 (for queries) to 15 words (for product titles). Second, unlike longer texts such as blogs or news articles, these customer queries or product titles lack \"natural\" language structure and are often plagued with spelling errors. For example, in PQC, queries like Nike running, shoes size 9, nike shos (misspelling variants) all belong to the shoes category. In addition, queries contain non-target language text and non-language text (like model/part numbers), which introduces noise in the embedding. A similar challenge could also be faced in PTC problem, where products from the same class have high diversity in their title texts. For example, titles PhotoFast microSD to MS Pro Duo CR-5300, Kingston microSD Card and 8GB card for Blackberry Storm 9530 all belong to the same genre of microSD card products and hence need to be listed under the same category. All these factors make it difficult to separate product-type classes by purely relying on text which is heterogeneous and contains noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose to enhance the textual information by leveraging additional knowledge about relationships between input short-texts as well as among class labels. For PQC, we can derive similarities between input user queries from (anonymized) user logs, by looking at commonly purchased items in response to different queries. The intuition is that two queries that consistently lead to the purchase of a same set of items might have similar product-type intents. Likewise, in PTC, we can estimate similarity between two input product titles from historical information such as co-views. Similarly, in output space, relationships between product-type classes can be modeled using product-category taxonomies, which are typically hand-curated and readily available in e-commerce applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such auxiliary information can be naturally represented in graphical form, where each node represents a short-text (input graph) or a class label (output graph), while an edge indicates magnitude of similarity between two nodes. We thus propose a Graph-regularized Graph Convolution Network (GR-GCN) approach, which augments the graph convolutional network (Tayal et al., 2019) to incorporate such graphical information in an end to end learning framework. The two key aspects of GR-GCN are: i) a GCN that leverages dependencies in the input space to learn more informative representations of nodes(input short texts), and ii) a graph-regularization (GR) term in the objective function that exploits label similarities to penalize contrasting predictions for similar class labels on each input sample, thereby restricting the solution space and making our approach more robust to noise in the data.",
"cite_spans": [
{
"start": 357,
"end": 377,
"text": "(Tayal et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform extensive experiments on one proprietary, and two public datasets and demonstrate the improvement in classification accuracy for GR-GCN upto 6% compared to text-based baselines. Further, we add noise in the input data and show that the graph's presence makes our method more robust to noise as compared to baseline methods based on just textual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let $X \\in \\mathbb{R}^{n \\times d}$ be a matrix with each row being the embedding vector of an input sample, and let $Y \\in \\mathbb{R}^{n \\times L}$ be the label indicator matrix. Here $d$ is the embedding dimension and $L$ is the number of class labels. Let $G_I = (V_I, E_I)$ be the graph on the input samples, with corresponding adjacency matrix $A_I \\in \\mathbb{R}^{n \\times n}$ and self-loops added via $\\tilde{A}_I = A_I + I$. Let $G_o = (V_o, E_o)$ be the graph in the output space, with $A_o \\in \\mathbb{R}^{L \\times L}$ being the adjacency matrix on the output labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "2"
},
{
"text": "$\\mathcal{L}_{GCN} + \\lambda \\mathcal{L}_{GR}$, where $\\mathcal{L}_{GCN} := L(f_\\theta(x), y)$ is the cross-entropy loss between the GCN predictions $f_\\theta(x)$ for node $x$, parameterized by $\\theta$, and the ground truth $y$, and $\\mathcal{L}_{GR} := \\sum_{i,j \\in E_o} \\lVert f_\\theta(x_i) - f_\\theta(x_j) \\rVert^2$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "2"
},
{
"text": "is the graph Laplacian based regularization that acts on the output node representations, and forces the predictions of adjacent nodes in the output graph to be similar. As we demonstrate in the experiments, this additional regularization makes our model especially robust to noise. Regularizers of the form L GR have shown to be successful in factorization models (Johnson and Zhang, 2007; Rao et al., 2015; Zhou et al., 2012) , and to the best of our knowledge, we are the first to apply it to regularize the output space for GCNs. In this section, we discuss the construction of graphs in input space and output space in the context of the two application problems that we focus on in this paper.",
"cite_spans": [
{
"start": 365,
"end": 390,
"text": "(Johnson and Zhang, 2007;",
"ref_id": "BIBREF6"
},
{
"start": 391,
"end": 408,
"text": "Rao et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 409,
"end": 427,
"text": "Zhou et al., 2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "2"
},
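For concreteness, the following is a minimal PyTorch sketch of the objective above. It is not the authors' code: the renormalization of $A_I$ follows the standard trick of Kipf and Welling (2016), and since the parse leaves the exact form of the $\mathcal{L}_{GR}$ sum ambiguous, the sketch adopts one plausible reading in which predictions for labels adjacent in $G_o$ are pulled together on every input sample. All function names are illustrative.

```python
import torch
import torch.nn.functional as F

def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    # Symmetric renormalization D^{-1/2} (A + I) D^{-1/2} of the input
    # adjacency matrix, i.e. the tilde-A_I = A_I + I step with degree scaling.
    A_tilde = A + torch.eye(A.shape[0])
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)

def gcn_forward(A_hat, X, W1, W2):
    # Two-layer GCN: logits = A_hat @ relu(A_hat @ X @ W1) @ W2.
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2

def gr_gcn_loss(logits, y, output_edges, lam=1e-7):
    # L_GCN: cross-entropy between GCN predictions and ground truth.
    l_gcn = F.cross_entropy(logits, y)
    # L_GR: for each edge (i, j) of the output graph G_o, penalize
    # differing predicted scores for labels i and j on every sample.
    probs = torch.softmax(logits, dim=1)
    l_gr = sum(((probs[:, i] - probs[:, j]) ** 2).sum()
               for i, j in output_edges)
    return l_gcn + lam * l_gr
```

With A the input adjacency matrix, training then amounts to minimizing gr_gcn_loss(gcn_forward(normalize_adj(A), X, W1, W2), y, output_edges) with respect to W1 and W2.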
{
"text": "The goal of PQC is to predict the product-type intent of user-typed search queries on an e-commerce website. To create the input graph, we use anonymous user logs that hold knowledge about query association. Intuitively, any two queries leading to the purchase of same items are more likely to have similar product-type intent. Following the intuition, we construct the graph such that for any two queries i and j, the adjacency matrix A I is constructed as A ij = number of common purchases between query i and query j. To construct the output graph between product labels , we first represent each label (product category) with the mean of embeddings of the titles of products that belong to this category. We then apply cosine similarity between embedding vectors of labels to construct the output graph, and discard edges that do not meet the threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Product Query Classification (PQC):",
"sec_num": "3.1"
},
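A hedged sketch of the graph construction just described, assuming purchase logs arrive as (query, purchased_item) pairs and that product titles are already embedded; the cosine threshold value and all names are illustrative, since the paper does not state the threshold it uses.

```python
from collections import defaultdict
from itertools import combinations
import numpy as np

def build_query_graph(purchase_log):
    # purchase_log: iterable of (query, purchased_item) pairs.
    # A[(i, j)] = number of items purchased under both queries i and j.
    item_to_queries = defaultdict(set)
    for query, item in purchase_log:
        item_to_queries[item].add(query)
    A = defaultdict(int)
    for queries in item_to_queries.values():
        for qi, qj in combinations(sorted(queries), 2):
            A[(qi, qj)] += 1
    return A

def build_label_graph(label_title_embs, threshold=0.5):
    # Each label is represented by the mean embedding of its product
    # titles; edges are kept only where cosine similarity >= threshold.
    labels = sorted(label_title_embs)
    M = np.stack([np.mean(label_title_embs[l], axis=0) for l in labels])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    S = M @ M.T
    return {(labels[i], labels[j]): float(S[i, j])
            for i in range(len(labels))
            for j in range(i + 1, len(labels))
            if S[i, j] >= threshold}
```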
{
"text": "The goal of PTC is to classify products into product categories. Specifically, each input sample is the title of a product, while the output label is a product category. To construct graph G I , we use the coviewed metadata of each product. Specifically, for two product titles i and j, the input matrix A I is constructed as A ij = number of co-view between title i and title j. As with PQC problem, we used the same procedure to obtain the output space graph between labels. Co-views, is an intuitive means to construct the input graph, since items that are co-views are typically substitutes of each other. Thus, neighbors on the co-view graph tend to have similar categorization. (Kim, 2014) 79.45 55.87 61.42 CNN-non-static (Kim, 2014) 82.75 58.75 64.19 CharCNN (Zhang et al., 2015) 80.36 61.72 63.18 LSTM (Gers et al., 1999) 80.35 60.09 61.3 Bi-LSTM (Graves and Schmidhuber, 2005) 81.38 60.19 61.00 fastText (Joulin et al., 2016) 83.67 61.4 64.03 Graph-CNN-C (Defferrard et al., 2016) 80.08 58.60 59.25 Text GCN (Yao et al., 2019) 80.25 61.77 65.31 SWEM (Shen et al., 2018) 86 We evaluate GR-GCN approach on three datasets described in Table 1 All pre-trained word embeddings are 128-d learned by training FastText (Bojanowski et al., 2017; Joulin et al., 2016) . For GR-GCN, we used a 2-layer GCN with learning rate 0.1, dropout set to 0.1 and L2 regularization factor \u03bb of 1e \u22127 . All the datasets are split into 70 % training, 10% validation, and 20 % testing.",
"cite_spans": [
{
"start": 684,
"end": 695,
"text": "(Kim, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 729,
"end": 740,
"text": "(Kim, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 767,
"end": 787,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 806,
"end": 830,
"text": "LSTM (Gers et al., 1999)",
"ref_id": null
},
{
"start": 856,
"end": 886,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 914,
"end": 935,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 965,
"end": 990,
"text": "(Defferrard et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 1018,
"end": 1036,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 1060,
"end": 1079,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 1221,
"end": 1246,
"text": "(Bojanowski et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 1247,
"end": 1267,
"text": "Joulin et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1142,
"end": 1149,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Product Title Classification (PTC):",
"sec_num": "3.2"
},
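A sketch of this experimental setup under stated assumptions: fastText's Python API for the 128-d embeddings and scikit-learn for the 70/10/20 split. The corpus file name, the skipgram objective, and the toy samples are illustrative, not from the paper.

```python
import fasttext
from sklearn.model_selection import train_test_split

# 128-d subword-aware embeddings (Bojanowski et al., 2017); the corpus
# file name and the skipgram choice are assumptions.
emb_model = fasttext.train_unsupervised("titles_and_queries.txt",
                                        model="skipgram", dim=128)
vec = emb_model.get_word_vector("microsd")       # one 128-d word vector

samples = [("nike running shoes size 9", "shoes")] * 10  # toy (text, label) pairs
train, rest = train_test_split(samples, train_size=0.7, random_state=0)
val, test = train_test_split(rest, test_size=2 / 3, random_state=0)
# rest holds 30% of the data, so the final split is 70% / 10% / 20%
```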
{
"text": "The following are brief descriptions of the baselines in the comparative study. We have grouped our baselines into three categories i.e., Text Models (text features only), Graph Models (graph relation information only), and Text + Graph Models (uses both textual and graph information).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines :",
"sec_num": "4.1"
},
{
"text": "Text Models: TF-IDF+LR: Bag-of-words model with TF-IDF as feature and Logistic Regression as a classifier. Low-frequency words appearing less than 5 times were removed. CNN: Two variants of CNN proposed in (Kim, 2014) are used: i) CNN-rand uses randomly initialized word embeddings and, ii) CNN-non-static uses pre-trained word embeddings. CharCNN: Character-level CNNs as proposed in (Zhang et al., 2015) LSTM: A simple LSTM block with 256 hidden states. We input pre-trained word embeddings. Bi-LSTM: Bidirectional LSTM block with 256 hidden states. We input pre-trained word embeddings. fastText: text classification tool from facebook (Joulin et al., 2016) . It averages words embedding, then feeds into a linear classifier. SWEM: employing average pooling operation (Shen et al., 2018) of feature and afterwards using feed forward network with architecture 256-512-1024-C as classifier, where C is the number of classes.",
"cite_spans": [
{
"start": 206,
"end": 217,
"text": "(Kim, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 385,
"end": 405,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 639,
"end": 660,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 771,
"end": 790,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines :",
"sec_num": "4.1"
},
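As a concrete reference point, the TF-IDF+LR baseline can be reproduced in a few lines with scikit-learn; only the min_df=5 filtering is stated in the text, so the remaining settings here are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# TF-IDF features with words appearing fewer than 5 times removed,
# followed by a logistic-regression classifier.
tfidf_lr = make_pipeline(
    TfidfVectorizer(min_df=5),
    LogisticRegression(max_iter=1000),
)
# tfidf_lr.fit(train_texts, train_labels)
# accuracy = tfidf_lr.score(test_texts, test_labels)
```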
{
"text": "Graph Models: Graph-CNN-C: CNN model that performs convolutions across word embeddings relation graph using Chebyshev filter (Defferrard et al., 2016) Text+Graph Models: Text GCN: GCN model where we construct corpus graph using documents and word as nodes (Yao et al., 2019) .",
"cite_spans": [
{
"start": 125,
"end": 150,
"text": "(Defferrard et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 256,
"end": 274,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines :",
"sec_num": "4.1"
},
{
"text": "For all baselines, we used default parameters as in their original paper/implementation. The results are summarized in Table 2 . GR-GCN can be seen to outperform all baseline models in classification accuracy by a margin of 6% for the Internal dataset, 2.8% on Electronics and 3.8% on Home dataset. Further, we make the following observations from our results: 1) A simple model, TF-IDF + LR performs well on short text datasets beating CNN random on internal and electronics datasets and performing comparably on home dataset. This reinforces our observation that short text documents often lack the structure that complex models such as neural networks can capture, and in some cases, using the latter might over parameterized the problem. 2) LSTM based methods that use pre-trained embeddings also do not perform better than TFIDF based methods, for a similar reason: the word order is seldom important in user typed queries and product titles, and there's no \"natural language\" structure to exploit. 3) TextGCN shows competitive performance on Electronics and Home dataset but performs poorly on the internal dataset. This is because the texts in the internal dataset are super short, with an average length below 4, contain many spelling errors, and the label space is vast. On further examination, we noticed that due to spelling errors, TextGCN is creating a lot of spurious nodes in the graph, which is a limitation of the learning graph from the corpus. To evaluate the effect of each graph (input and output) individually on model performance, we perform two more experiments. In first experiment we incorporate output graph to the best \"Text\" performance model in our baselines (SWEM) and measure the performance. We refer to this model as SWEM-GR-out. In the second experiment, we only use the input graph, thus eliminating the graph regularization. We refer to this model as GR-GCN-inp. Results are summarized in Table 3 . We observe that both input and output graphs in their individual capacity are adding meaning to the classification accuracy. Search queries on e-commerce platforms contain a lot of misspelled keywords that introduce noise in the embedding. To simulate this behavior in our model, we evaluate the robustness of GR-GCN by introducing Additive White Gaussian Noise with zero mean and varying standard deviations in our embedding (Zhang and Yang, 2018) . Figure 2 reports test accuracy with varying standard deviation (\u03c3) of the noise. We compare GR-GCN with two text-base baselines: TF-IDF+LR, a simple bag of words model and SWEM (Shen et al., 2018) , a state of the art deep learning model. We see test performance of TF-IDF + LR, and SWEM drops immediately with minimal noise, while GR-GCN is robust to noise. On Internal dataset with \u03c3 = 0.05, TF-IDF performance drops to 12%, SWEM performance drops to 67%, while GR-GCN performance dropped to 87%. We attribute this noise-tolerant feature to the presence of input and output graph, which other methods lack.",
"cite_spans": [
{
"start": 2362,
"end": 2384,
"text": "(Zhang and Yang, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 2564,
"end": 2583,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1926,
"end": 1933,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 2387,
"end": 2395,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Quantitative Results",
"sec_num": "4.2"
},
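A sketch of the noise-robustness protocol described above: perturb the input embeddings with zero-mean additive white Gaussian noise at several standard deviations and re-evaluate test accuracy. The sigma grid (beyond the 0.05 point reported in the text) and the evaluate_accuracy hook are hypothetical.

```python
import torch

def perturb(X: torch.Tensor, sigma: float) -> torch.Tensor:
    # Add zero-mean white Gaussian noise with standard deviation sigma.
    return X + sigma * torch.randn_like(X)

X_test = torch.randn(4, 128)          # toy stand-in for test embeddings
for sigma in (0.0, 0.01, 0.05, 0.1):
    X_noisy = perturb(X_test, sigma)
    # accuracy = evaluate_accuracy(model, X_noisy)  # hypothetical hook
```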
{
"text": "To assess the impact of the size of the labeled data, we tested the top-performing models with varying proportions of the training data. Figure 3 reports test accuracy with various sized subsamples of the training datasets. We observe that GR-GCN can achieve higher test accuracy with limited labeled documents. For example, with just 20% training data, GR-GCN achieves 87 % accuracy on the Internal dataset, surpassing all other baselines that are trained on 100 % training data. ",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 145,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effect of the size of the Labelled Data",
"sec_num": "4.4.1"
},
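A sketch of the labeled-data-size study: retrain on random fractions of the training set and track test accuracy. The fraction grid and the train_and_eval hook are hypothetical; the text only states the 20% point.

```python
import random

def subsample(train_set, fraction, seed=0):
    # Randomly keep the given fraction of the labeled training examples.
    rng = random.Random(seed)
    return rng.sample(train_set, int(len(train_set) * fraction))

train_examples = [("query text", "label")] * 100   # toy labeled set
for frac in (0.2, 0.4, 0.6, 0.8, 1.0):
    subset = subsample(train_examples, frac)
    # accuracy = train_and_eval(subset)  # hypothetical training hook
```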
{
"text": "In this paper, we propose GR-GCN to classify short texts that capture dependency on two levels, i.e., within the text samples (input space graph) and amongst the output label (output space graph). We demonstrated its efficacy on two commercial e-commerce applications and its robustness to noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We note that the proposed method can add value to other domains like medical science, where the input graph can capture drug similarity, and the output graph can capture the relationship among various types of illness; remote sensing, where the input graph can capture the distance, depth etc. between different ground points and output graph can capture similarity between similar labels such as pasture and vegetation as well as distinguish between entirely different labels such as river and residence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://jmcauley.ucsd.edu/data/amazon/links.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Convolutional neural networks on graphs with fast localized spectral filtering",
"authors": [
{
"first": "Micha\u00ebl",
"middle": [],
"last": "Defferrard",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Bresson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Vandergheynst",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3844--3852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844-3852.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning to forget: Continual prediction with lstm",
"authors": [
{
"first": "Felix",
"middle": ["A"],
"last": "Gers",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Cummins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A Gers, J\u00fcrgen Schmidhuber, and Fred Cummins. 1999. Learning to forget: Continual prediction with lstm.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural networks",
"volume": "18",
"issue": "5-6",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural networks, 18(5-6):602-610.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering",
"authors": [
{
"first": "Ruining",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2016,
"venue": "proceedings of the 25th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "507--517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507-517. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An information retrieval approach to short text conversation",
"authors": [
{
"first": "Zongcheng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.6988"
]
},
"num": null,
"urls": [],
"raw_text": "Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the effectiveness of laplacian normalization for graph semi-supervised learning",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Machine Learning Research",
"volume": "8",
"issue": "",
"pages": "1489--1517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2007. On the effectiveness of laplacian normalization for graph semi-supervised learning. Journal of Machine Learning Research, 8(Jul):1489-1517.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Classifying short text in social media: Twitter as case study",
"authors": [
{
"first": "Faris",
"middle": [],
"last": "Kateb",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computer Applications",
"volume": "",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faris Kateb and Jugal Kalita. 2015. Classifying short text in social media: Twitter as case study. International Journal of Computer Applications, 111(9).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semi-supervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": ["N"],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.02907"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A shared task involving multi-label classification of clinical free text",
"authors": [
{
"first": "John",
"middle": ["P"],
"last": "Pestian",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Brew",
"suffix": ""
},
{
"first": "Pawe\u0142",
"middle": [],
"last": "Matykiewicz",
"suffix": ""
},
{
"first": "Dj",
"middle": ["J"],
"last": "Hovermale",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "K",
"middle": ["Bretonnel"],
"last": "Cohen",
"suffix": ""
},
{
"first": "W\u0142odzis\u0142aw",
"middle": [],
"last": "Duch",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John P Pestian, Christopher Brew, Pawe\u0142 Matykiewicz, Dj J Hovermale, Neil Johnson, K Bretonnel Cohen, and W\u0142odzis\u0142aw Duch. 2007. A shared task involving multi-label classification of clinical free text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 97-104. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cognitive-biometric recognition from language usage: A feasibility study",
"authors": [
{
"first": "Neeti",
"middle": [],
"last": "Pokhriyal",
"suffix": ""
},
{
"first": "Kshitij",
"middle": [],
"last": "Tayal",
"suffix": ""
},
{
"first": "Ifeoma",
"middle": [],
"last": "Nwogu",
"suffix": ""
},
{
"first": "Venu",
"middle": [],
"last": "Govindaraju",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Information Forensics and Security",
"volume": "12",
"issue": "1",
"pages": "134--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neeti Pokhriyal, Kshitij Tayal, Ifeoma Nwogu, and Venu Govindaraju. 2016. Cognitive-biometric recognition from language usage: A feasibility study. IEEE Transactions on Information Forensics and Security, 12(1):134- 143.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Collaborative filtering with graph information: Consistency and scalable methods",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Hsiang-Fu",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Pradeep",
"middle": ["K"],
"last": "Ravikumar",
"suffix": ""
},
{
"first": "Inderjit",
"middle": ["S"],
"last": "Dhillon",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2107--2115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Rao, Hsiang-Fu Yu, Pradeep K Ravikumar, and Inderjit S Dhillon. 2015. Collaborative filtering with graph information: Consistency and scalable methods. In Advances in neural information processing systems, pages 2107-2115.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Term-weighting approaches in automatic text retrieval. Information processing & management",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "24",
"issue": "",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. Information processing & management, 24(5):513-523.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Product query classification",
"authors": [
{
"first": "Dou",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dengyong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th ACM conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "741--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dou Shen, Ying Li, Xiao Li, and Dengyong Zhou. 2009. Product query classification. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 741-750. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms",
"authors": [
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Renqiang Min",
"suffix": ""
},
{
"first": "Qinliang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.09843"
]
},
"num": null,
"urls": [],
"raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms. arXiv preprint arXiv:1805.09843.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Short text classification using graph convolutional network",
"authors": [
{
"first": "Kshitij",
"middle": [],
"last": "Tayal",
"suffix": ""
},
{
"first": "Rao",
"middle": [],
"last": "Nikhil",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Subbian",
"suffix": ""
}
],
"year": 2019,
"venue": "NIPS workshop on Graph Representation Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kshitij Tayal, Rao Nikhil, Saurabh Agarwal, and Karthik Subbian. 2019. Short text classification using graph convolutional network. NIPS workshop on Graph Representation Learning.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Graph convolutional networks for text classification",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7370--7377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370-7377.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Product title classification versus text classification",
"authors": [
{
"first": "Hsiang-Fu",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chia-Hua",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Prakash",
"middle": [],
"last": "Arunachalam",
"suffix": ""
},
{
"first": "Manas",
"middle": [],
"last": "Somaiya",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2012,
"venue": "Csie. Ntu. Edu. Tw",
"volume": "",
"issue": "",
"pages": "1--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsiang-Fu Yu, Chia-Hua Ho, Prakash Arunachalam, Manas Somaiya, and Chih-Jen Lin. 2012. Product title classification versus text classification. Csie. Ntu. Edu. Tw, pages 1-25.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word embedding perturbation for sentence classification",
"authors": [
{
"first": "Dongxu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhichao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08166"
]
},
"num": null,
"urls": [],
"raw_text": "Dongxu Zhang and Zhichao Yang. 2018. Word embedding perturbation for sentence classification. arXiv preprint arXiv:1804.08166.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649-657.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Kernelized probabilistic matrix factorization: Exploiting graphs and side information",
"authors": [
{
"first": "Tinghui",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hanhuai",
"middle": [],
"last": "Shan",
"suffix": ""
},
{
"first": "Arindam",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Sapiro",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 SIAM international Conference on Data mining",
"volume": "",
"issue": "",
"pages": "403--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tinghui Zhou, Hanhuai Shan, Arindam Banerjee, and Guillermo Sapiro. 2012. Kernelized probabilistic matrix factorization: Exploiting graphs and side information. In Proceedings of the 2012 SIAM international Confer- ence on Data mining, pages 403-414. SIAM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Schematic Illustration of GR-GCN. The GCN is used to learn node representations that respect the input graph structure, while the graph regularization is used to learn representations that respect the output graph structure."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "illustrates our approach: GR-GCN. In this work, we use a 2-layer GCN (Kipf and Welling, 2016) on G I . The parameters are learnt by miniminizing the GR-GCN loss function:"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Test accuracy by varying standard deviation of added noise (best seen in color). GR-GCN consistently outperforms the baselines."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Test accuracy by varying training data (best seen in color)."
},
"TABREF0": {
"content": "<table><tr><td>Dataset</td><td>#Docs</td><td>#Training</td><td>#Test</td><td>#Unique Words</td><td>#Edges</td><td colspan=\"2\">#Class Average Length</td></tr><tr><td>Internal</td><td>200000</td><td>160000</td><td>40000</td><td>41880</td><td>3642910</td><td>2290</td><td>3.38</td></tr><tr><td colspan=\"2\">Electronics 188626</td><td>150,900</td><td>37,726</td><td>291,804</td><td>962,444</td><td>796</td><td>14.23</td></tr><tr><td>Home</td><td>279788</td><td>223,830</td><td>55,958</td><td>176,754</td><td>6549,740</td><td>1100</td><td>9.63</td></tr></table>",
"num": null,
"text": "Summary statistics of datasets",
"html": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">Internal Electronics Home</td></tr><tr><td>TF-IDF + LR (Salton and Buckley, 1988)</td><td>80.9</td><td>59.70</td><td>61.2</td></tr><tr><td>CNN-rand</td><td/><td/><td/></tr></table>",
"num": null,
"text": "Classification accuracy of GR-GCN compared to multiple baselines",
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"num": null,
"text": "Performance Comparison",
"html": null,
"type_str": "table"
}
}
}
}