|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:58:08.792058Z" |
|
}, |
|
"title": "Bootstrapping Large-Scale Fine-Grained Contextual Advertising Classifier from Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Yiping", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "jinyiping@@knorex.com" |
|
}, |
|
{ |
|
"first": "Vishakha", |
|
"middle": [], |
|
"last": "Kadam", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "vishakha.kadam@@knorex.com" |
|
}, |
|
{ |
|
"first": "Dittaya", |
|
"middle": [], |
|
"last": "Wanvarie", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chulalongkorn University", |
|
"location": { |
|
"country": "Thailand" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Contextual advertising provides advertisers with the opportunity to target the context which is most relevant to their ads. The large variety of potential topics makes it very challenging to collect training documents to build a supervised classification model or compose expert-written rules in a rule-based classification system. Besides, in fine-grained classification, different categories often overlap or cooccur, making it harder to classify accurately. In this work, we propose wiki2cat, a method to tackle large-scaled fine-grained text classification by tapping on the Wikipedia category graph. The categories in the IAB taxonomy are first mapped to category nodes in the graph. Then the label is propagated across the graph to obtain a list of labeled Wikipedia documents to induce text classifiers. The method is ideal for large-scale classification problems since it does not require any manually-labeled document or hand-curated rules or keywords. The proposed method is benchmarked with various learning-based and keyword-based baselines and yields competitive performance on publicly available datasets and a new dataset containing more than 300 fine-grained categories.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Contextual advertising provides advertisers with the opportunity to target the context which is most relevant to their ads. The large variety of potential topics makes it very challenging to collect training documents to build a supervised classification model or compose expert-written rules in a rule-based classification system. Besides, in fine-grained classification, different categories often overlap or cooccur, making it harder to classify accurately. In this work, we propose wiki2cat, a method to tackle large-scaled fine-grained text classification by tapping on the Wikipedia category graph. The categories in the IAB taxonomy are first mapped to category nodes in the graph. Then the label is propagated across the graph to obtain a list of labeled Wikipedia documents to induce text classifiers. The method is ideal for large-scale classification problems since it does not require any manually-labeled document or hand-curated rules or keywords. The proposed method is benchmarked with various learning-based and keyword-based baselines and yields competitive performance on publicly available datasets and a new dataset containing more than 300 fine-grained categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Despite the fast advancement of text classification technologies, most text classification models are trained and applied to a relatively small number of categories. Popular benchmark datasets contain from two up to tens of categories, such as SST2 dataset for sentiment classification (2 categories) (Socher et al., 2013) , AG news dataset (4 categories) (Zhang et al., 2015) and 20 Newsgroups dataset (Lang, 1995) for topic classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 322, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 356, |
|
"end": 376, |
|
"text": "(Zhang et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 415, |
|
"text": "(Lang, 1995)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the meantime, industrial applications often involve fine-grained classification with a large number of categories. For example, Walmart built a hybrid classifier to categorize products into 5000+ product categories (Sun et al., 2014) , and Yahoo built a contextual advertising classifier with a taxonomy of around 6000 categories (Broder et al., 2007) . Unfortunately, both systems require a huge human effort in composing and maintaining rules and keywords. Readers can neither reproduce their system nor is the system or data available for comparison.", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 236, |
|
"text": "(Sun et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 354, |
|
"text": "(Broder et al., 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we focus on the application of contextual advertising (Jin et al., 2017) , which allows advertisers to target the context most relevant to their ads. However, we cannot fully utilize its power unless we can target the page content using fine-grained categories, e.g., \"coup\u00e9\"' vs. \"hatchback\" instead of \"automotive\" vs. \"sport\". This motivates a classification taxonomy with both high coverage and high granularity. The commonly used contextual taxonomy introduced by Interactive Advertising Bureau (IAB) contains 23 coarse-grained categories and 355 fine-grained categories 1 . Figure 1 shows a snippet of the taxonomy. Large online encyclopedias, such as Wikipedia, contain an updated account of almost all topics. Therefore, we ask an essential question: can we bootstrap a text classifier with hundreds of categories from Wikipedia without any manual labeling?", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 86, |
|
"text": "(Jin et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 591, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 594, |
|
"end": 600, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We tap on and extend previous work on Wikipedia content analysis (Kittur et al., 2009) to automatically label Wikipedia articles related to each category in our taxonomy by Wikipedia category graph traversal. We then train classification models with the labeled Wikipedia articles. We compare our method with various learning-based and keyword-based baselines and obtain a competitive performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 86, |
|
"text": "(Kittur et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Large knowledge bases like Wikipedia or DMOZ content directory cover a wide range of topics. They also have a category hierarchy in either tree or graph structure, which provides a useful resource for building text classification models. Text classification using knowledge bases can be broadly categorized into two main approaches: vector space model and semantic model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Using Knowledge Base", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Vector space model aims to learn a category vector by aggregating the descendant pages and perform nearest neighbor search during classification. A pruning is usually performed first based on the depth from the root node or the number of child pages to reduce the number of categories. Subsequently, each document forms a document vector, which is aggregated to form the category vector. Lee et al. (2013) used tf-idf representation of the document, while Kim et al. (2018) combined word embeddings and tf-idf representations to obtain a better performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 405, |
|
"text": "Lee et al. (2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 473, |
|
"text": "Kim et al. (2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Using Knowledge Base", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In semantic models, the input document is mapped explicitly to concepts in the knowledge base. The concepts are used either in conjunction with bag-of-words representation (Gabrilovich and Markovitch, 2006) or stand-alone (Chang et al., 2008) to assign categories to the document. Gabrilovich and Markovitch (2006) used a feature generator to predict relevant Wikipedia concepts (articles) related to the input document. These concepts are orthogonal to the labels in specific text classification tasks and are used to enrich the representation of the input document. Experiments on multiple datasets demonstrated that the additional concepts helped improve the performance. Similarly, Zhang et al. (2013) enriched the document representation with both concepts and categories from Wikipedia. Chang et al. (2008) proposed Dataless classification that maps both input documents and category names into Wikipedia concepts using Explicit Semantic Analysis (Gabrilovich et al., 2007) . The idea is similar to Gabrilovich and Markovitch (2006) , except (1) the input is mapped to a real-valued concept vector instead of a discrete list of related categories, and (2) the category name is mapped into the same semantic space, which removes the need for labeled documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 206, |
|
"text": "(Gabrilovich and Markovitch, 2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 242, |
|
"text": "(Chang et al., 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 314, |
|
"text": "Gabrilovich and Markovitch (2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 705, |
|
"text": "Zhang et al. (2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 812, |
|
"text": "Chang et al. (2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 953, |
|
"end": 979, |
|
"text": "(Gabrilovich et al., 2007)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1005, |
|
"end": 1038, |
|
"text": "Gabrilovich and Markovitch (2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Using Knowledge Base", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Most recently, Chu et al. (2020) improved text classification by utilizing naturally labeled documents such as Wikipedia, Stack Exchange subareas, and Reddit subreddits. Instead of training a traditional supervised classifier, they concatenate the category name and the document and train a binary classifier, determining whether the document is related to the category. They benchmarked their proposed method extensively on 11 datasets covering topical and sentiment classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 32, |
|
"text": "Chu et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Using Knowledge Base", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our work is most similar to Lee et al. (2013) . However, they only evaluated on random-split Wikipedia documents, while we apply the model to a real-world large-scale text classification problem. We also employed a graph traversal algorithm to label the documents instead of labeling all descendant documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 45, |
|
"text": "Lee et al. (2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Classification Using Knowledge Base", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Some previous work tried to understand the distribution of topics in Wikipedia for data analysis and visualization (Mesgari et al., 2015) . Kittur et al. (2009) calculated the distance between each page to top-level category nodes. They then assigned the category with the shortest distance to the page. With this approach, they provided the first quantitative analysis of the distribution of topics in Wikipedia. Farina et al. (2011) extended the method by allowing traversing upward in the category graph and assigning categories proportional to the distance instead of assigning the category with the shortest-path only. More recently, Bekkerman and Donin (2017) visualized Wikipedia by building a two-level coarse-grained/fine-grained graph representation. The edges between categories capture the co-occurrence of categories on the same page. They further pruned edges between categories that rarely appear together. The resulting graph contains 441 largest categories and 4815 edges connecting them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 137, |
|
"text": "(Mesgari et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 160, |
|
"text": "Kittur et al. (2009)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 434, |
|
"text": "Farina et al. (2011)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 665, |
|
"text": "Bekkerman and Donin (2017)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Wikipedia Content Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We propose wiki2cat, a simple framework using Wikipedia to bootstrap text categorizers. We first map the target taxonomy to correspond- Figure 2 : Overview of wiki2cat, a framework to bootstrap large-scale text classifiers from Wikipedia. We first map user-defined categories to category nodes in the Wikipedia category graph. Then, we traverse the category graph to label documents automatically. Lastly, we use the labeled documents to train a supervised classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 144, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ing Wikipedia categories (briefed in Section 3.1). We then traverse the Wikipedia category graph to automatically label Wikipedia articles (Section 3.2). Finally, we induce a classifier from the labeled Wikipedia articles (Section 3.3). Figure 2 overviews the end-to-end process of building classifiers under the wiki2cat framework.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 245, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Wikipedia contains 2 million categories, which is 4 orders of magnitude larger than IAB taxonomy. We index all Wikipedia category names in Apache Lucene 2 and use the IAB category names to query the closest matches. We perform the following: 1) lemmatize the category names in both taxonomies, 2) index both Wikipedia category names and their alternative names from redirect links (e.g., \"A.D.D.\" and \"Attention deficit disorder\"), 3) split conjunction category names and query separately (e.g., \"Arts & Entertainment\" \u2192 \"Arts\", \"Entertainment\"), and 4) capture small spelling variations with string similarity 3 . Out of all 23 coarse-grained and 355 fine-grained categories in IAB taxonomy, 311 categories (82%) can be mapped trivially. Their category names either match exactly or contain only small variations. E.g., the IAB category \"Pagan/Wiccan\" is matched to three Wikipedia categories \"Paganism\", \"Pagans\", and \"Wiccans\". One author of this paper took roughly 2 hours to curate the remaining 67 categories manually and provided the mapping to Wikipedia categories. Out of the 67 categories, 23 are categories that cannot be matched automatically because the category names look very different, e.g., \"Road-Side Assistance\" and \"Emergency road services\". The rest are categories where the system can find a match, but the string similarity is below the threshold (e.g., correct: \"Chronic Pain\" and \"Chronic Pain Syndromes\"; incorrect: \"College Administration\" and \"Court Administration\"). We use the curated mapping in subsequent sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mapping the Target Taxonomy to Wikipedia Categories", |
|
"sec_num": "3.1" |
|
}, |
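
{

"text": "For concreteness, the name-matching step can be sketched in a few lines of Python. This is a simplified illustration rather than our exact pipeline: it assumes the Wikipedia category names have already been extracted into a list and that the jellyfish package is available for Jaro-Winkler similarity, and it omits the lemmatization, Lucene indexing and redirect handling described above.\n\nimport jellyfish  # assumption: provides jaro_winkler_similarity\n\ndef map_iab_to_wiki(iab_categories, wiki_categories, threshold=0.9):\n    mapping = {}\n    for iab in iab_categories:\n        # split conjunction names, e.g. 'Arts & Entertainment' -> 'Arts', 'Entertainment'\n        parts = [p.strip() for p in iab.replace('&', '/').split('/') if p.strip()]\n        best_name, best_score = None, 0.0\n        for part in parts:\n            for wiki in wiki_categories:\n                score = jellyfish.jaro_winkler_similarity(part.lower(), wiki.lower())\n                if score > best_score:\n                    best_name, best_score = wiki, score\n        # matches below the threshold are left to manual curation\n        mapping[iab] = best_name if best_score >= threshold else None\n    return mapping\n\nprint(map_iab_to_wiki(['Pagan/Wiccan'], ['Paganism', 'Court Administration']))\n\nIn practice the inner loop is replaced by a Lucene query over the indexed category names, which is what keeps the matching tractable for 2 million categories.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mapping the Target Taxonomy to Wikipedia Categories",

"sec_num": "3.1"

},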
|
{ |
|
"text": "With the mapping between IAB and Wikipedia categories, we can anchor each IAB category as nodes in the Wikipedia category graph 4 , referred to as the root category nodes. Our task then becomes to obtain a set of labeled Wikipedia articles by performing graph traversal from the root category nodes. From each root category node, the category graph can be traversed using the breadth-first search algorithm to obtain a list of all descendant categories and pages. One may argue that we can take all descendant pages of a Wikipedia category to form the labeled set. However, in Wikipedia page A belongs to category B does not imply a hypernym relation. In fact, some pages have a long list of categories, most of which are at their best remotely related to the main content of the page. E.g., the page \"Truck Stop Women\" 5 is a descendant page of the category \"Trucks\". However, it is a 1974 film, and Figure 3 : Intuition of the pruning for the category \"Trucks\". The page \"Ford F-Max\" belongs to four categories. Three of which can be traversed from \"Trucks\" and one cannot (marked in red and italic).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 901, |
|
"end": 909, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "the main content of the page is about the plot and the cast.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
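
{

"text": "The traversal itself is a standard breadth-first search. The sketch below assumes the category graph has been loaded into two plain Python dictionaries (category-to-subcategories and category-to-pages); it illustrates only the traversal, not the pruning and weighting steps described next.\n\nfrom collections import deque\n\ndef traverse_category(root, subcats, pages, blocked=frozenset()):\n    # 'blocked' holds the root nodes of competing categories whose branches are pruned\n    visited, reachable_pages = {root}, set()\n    dist = {root: 0}\n    queue = deque([root])\n    while queue:\n        cat = queue.popleft()\n        reachable_pages.update(pages.get(cat, ()))\n        for child in subcats.get(cat, ()):\n            if child in visited or child in blocked:\n                continue\n            visited.add(child)\n            dist[child] = dist[cat] + 1\n            queue.append(child)\n    return visited, reachable_pages, dist",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Labeling Wikipedia Articles by Category Graph Traversal",

"sec_num": "3.2"

},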
|
{ |
|
"text": "We label Wikipedia pages using a competitionbased algorithm following Kittur et al. (2009) and Farina et al. (2011) . We treat each category node from which a page can be traversed as a candidate category and evaluate across all candidate categories to determine the final category(s) for the page.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 90, |
|
"text": "Kittur et al. (2009)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 115, |
|
"text": "Farina et al. (2011)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Firstly, all pages are pruned based on the percentage of their parent categories that can be traversed from the root category. Figure 3 shows two Wikipedia pages with a snippet of their ancestor categories. Both pages have a shortest distance of 2 to the category \"Trucks\". However, the page \"Ford F-Max\" is likely more related to \"Trucks\" than the page \"Camping and Caravanning Club\" because most of its parent categories can be traversed from \"Trucks\". We empirically set the threshold that we will prune a page with respect to a root category if less than 30% of its parent categories can be traversed from the root category.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 135, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
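
{

"text": "The pruning rule translates directly into code. The sketch below assumes a dictionary mapping each page to its parent categories and reuses the set of categories reachable from the root (for example, as returned by the traversal sketched earlier); the 30% threshold follows the empirical setting above.\n\ndef keep_page(page, parent_cats, reachable, min_fraction=0.3):\n    # keep a page for a root category only if at least 30% of its parent\n    # categories can be traversed from that root category\n    parents = parent_cats.get(page, [])\n    if not parents:\n        return False\n    traversable = sum(1 for c in parents if c in reachable)\n    return traversable / len(parents) >= min_fraction",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Labeling Wikipedia Articles by Category Graph Traversal",

"sec_num": "3.2"

},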
|
{ |
|
"text": "While the categories in IAB taxonomy occur in parallel, the corresponding categories in Wikipedia may occur in a hierarchy. For example, the category \"SUVs\" and \"Trucks\" are in parallel in IAB taxonomy but \"SUVs\" is a descendant category of \"Trucks\" in Wikipedia (Trucks \u203aTrucks by type \u203aLight trucks \u203aSport utility vehicles). While traversing from the root category node, we prune all the branches corresponding to a competing category.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Pruning alone will not altogether remove the irrelevant content, because the degree of semantic relatedness is not considered. We measure the semantic relatedness between a page and a category based on two factors, namely the shortest path distance and the number of unique paths between them. Previous work depends only on the shortest path distance (Kittur et al., 2009; Farina et al., 2011) . We observe that if a page is densely connected to a category via many unique paths, it is often an indication of a strong association. We calculate the weight w of a page with respect to a category as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 372, |
|
"text": "(Kittur et al., 2009;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 393, |
|
"text": "Farina et al., 2011)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "w = k i=0 1 2 d i (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where k is the number of unique paths between the page and the category node, and d i is the distance between the two in the ith path. To calculate the final list of categories, the weights for all competing categories are normalized to 1 by summing over each candidate category j and the categories which have a weight higher than 0.3 are returned as the final assigned categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "w j = k j i=0 1 2 d ij /( j k j i=0 1 2 d ij )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
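
{

"text": "Equations (1) and (2) translate into a few lines of code. The sketch assumes that, for every candidate category, the traversal has produced the list of path lengths between the page and that category node; variable names are illustrative.\n\ndef assign_categories(paths_per_category, min_weight=0.3):\n    # paths_per_category maps a candidate category j to the distances d_ij of\n    # all unique paths between the page and that category node\n    raw = {cat: sum(1.0 / 2 ** d for d in dists)  # Equation (1)\n           for cat, dists in paths_per_category.items()}\n    total = sum(raw.values())\n    if total == 0:\n        return []\n    normalized = {cat: w / total for cat, w in raw.items()}  # Equation (2)\n    # keep categories whose normalized weight exceeds the 0.3 threshold\n    return [cat for cat, w in normalized.items() if w > min_weight]\n\nprint(assign_categories({'Trucks': [2, 2, 3], 'Campers': [4]}))  # -> ['Trucks']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Labeling Wikipedia Articles by Category Graph Traversal",

"sec_num": "3.2"

},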
|
{ |
|
"text": "The labeling process labeled in total 1.16 million Wikipedia articles. The blue scattered plot in Figure 4 plots the number of labeled training articles per fine-grained category in log-10 scale. We can see that the majority of the categories have between 100 to 10k articles. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeling Wikipedia Articles by Category Graph Traversal", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The output of the algorithm described in Section 3.2 is a set of labeled Wikipedia pages. In theory, we can apply any supervised learning method to induce classifiers from the labeled dataset. The focus of this work is not to introduce a novel model architecture, but to demonstrate the effectiveness of the framework to bootstrap classifiers without manual labeling. We experiment with three simple and representative classification models. The first model is a linear SVM with tf-idf features, which is a competitive baseline for many NLP tasks (Wang and Manning, 2012 ). The second model is a centroid classifier, which is commonly used in largescale text classification (Lee et al., 2013) . It averages the tf-idf vectors of all documents belonging to each category and classifies by searching for the nearest category vector. The third model uses BERT (Devlin et al., 2019) to generate the semantic representation from the text and uses a single-layer feed-forward classification head on top. We freeze the pre-trained BERT model and train only the classification head for efficient training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 570, |
|
"text": "(Wang and Manning, 2012", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 692, |
|
"text": "(Lee et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 857, |
|
"end": 878, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Contextual Classifiers", |
|
"sec_num": "3.3" |
|
}, |
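
{

"text": "As an illustration of how the labeled pages feed into a standard supervised learner, the SVM variant can be written as a short scikit-learn pipeline. The snippet is a minimal sketch with toy data and illustrative hyperparameters; the settings actually used are listed in Section 4.2.\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import make_pipeline\n\ndef train_svm(texts, labels):\n    # hinge loss makes SGDClassifier a linear SVM; min_df is left at its default\n    # here so the toy example runs (the experiments use a cutoff of 3)\n    model = make_pipeline(TfidfVectorizer(), SGDClassifier(loss='hinge'))\n    model.fit(texts, labels)\n    return model\n\nclf = train_svm(['truck engine and towing specifications', 'city break itinerary and hotels'],\n                ['Automotive', 'Travel'])\nprint(clf.predict(['pickup towing capacity review']))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Contextual Classifiers",

"sec_num": "3.3"

},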
|
{ |
|
"text": "The number of labeled Wikipedia documents for each category is highly imbalanced. Minority categories contain only a handful of pages, while some categories have hundreds of thousands of pages. We perform random over-and downsampling to keep 1k documents for each fine-grained category and 20k documents for each coarse-grained category to form the training set. 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Contextual Classifiers", |
|
"sec_num": "3.3" |
|
}, |
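
{

"text": "The resampling step is equally simple; the sketch below assumes the labeled pages have been grouped into a dictionary keyed by category, with the target set to 1k documents for fine-grained and 20k for coarse-grained categories.\n\nimport random\n\ndef balance(docs_by_category, target=1000, seed=0):\n    rng = random.Random(seed)\n    balanced = {}\n    for cat, docs in docs_by_category.items():\n        if not docs:\n            continue\n        if len(docs) >= target:\n            balanced[cat] = rng.sample(docs, target)  # down-sample\n        else:\n            balanced[cat] = docs + rng.choices(docs, k=target - len(docs))  # over-sample\n    return balanced",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Contextual Classifiers",

"sec_num": "3.3"

},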
|
{ |
|
"text": "We evaluated our method using three contextual classification datasets. The first two are coarsegrained evaluation datasets published by Jin et al. (2020) covering all IAB tier-1 categories except for \"News\" (totaling 22 categories). The datasets are collected using different methods (news-crawl-v2 dataset (nc-v2) by mapping from news categories; browsing dataset by manual labelling) and contain 2,127 and 1,501 documents separately 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We compiled another dataset for fine-grained classification comprising of documents labeled with one of the IAB tier-2 categories. The full dataset consists of 134k documents and took an effort of multiple person-year to collect. The sources of the dataset are news websites, URLs occurring in the online advertising traffic and URLs crawled with keywords using Google Custom Search 8 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The number of documents per category can be overviewed in Figure 4 (the orange scatter plot). 23 out of 355 IAB tier-2 categories are not included in the dataset because they are too rare and are not present in our data source. So there are in total 332 fine-grained categories in the datasets. Due to company policy, we can publish only a random sample of the dataset with ten documents per category 9 . We report the performance on both datasets for future work to reproduce our result. To our best knowledge, this dataset will be the only publicly available dataset for fine-grained contextual classification.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 66, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We focus on classifying among fine-grained categories under the same parent category. Figure 5 shows the number of fine-grained categories under each coarse category. While the median number of categories is 10, the classification is challenging because categories are similar to each other. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 94, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Throughout this paper, we use the Wikipedia dump downloaded on 10 December 2019. After removing hidden categories and list pages, the final category graph contains 14.9 million articles, 1.9 million categories and 37.9 million links. The graph is stored in Neo4J database 10 and occupies 4.7GB disk space (not including the page content).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "4.2" |
|
}, |
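
{

"text": "As an example of how the stored graph is queried, the snippet below uses the official neo4j Python driver. The node label, relationship type and property name are assumptions made for the example, not the exact schema of our database.\n\nfrom neo4j import GraphDatabase\n\ndriver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))\n\ndef child_categories(category_name):\n    # hypothetical schema: (:Category {name}) nodes linked by [:SUBCAT_OF] edges\n    query = ('MATCH (parent:Category {name: $name})<-[:SUBCAT_OF]-(child:Category) '\n             'RETURN child.name AS name')\n    with driver.session() as session:\n        return [record['name'] for record in session.run(query, name=category_name)]\n\nprint(child_categories('Trucks'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Settings",

"sec_num": "4.2"

},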
|
{ |
|
"text": "We use the SGD classifier implementation in scikit-learn 11 with default hyperparameters for linear SVM. Words are weighted using tf-idf with a minimum term frequency cutoff of 3. We implement the centroid classifier using TfidfVectorizer in scikit-learn and use numpy to implement the nearest neighbor classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "4.2" |
|
}, |
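
{

"text": "A minimal version of the centroid classifier can be written with TfidfVectorizer and numpy as follows; it is a sketch of the idea rather than our exact implementation.\n\nimport numpy as np\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nclass CentroidClassifier:\n    def fit(self, texts, labels):\n        self.vectorizer = TfidfVectorizer()\n        X = self.vectorizer.fit_transform(texts).toarray()\n        self.classes_ = sorted(set(labels))\n        labels = np.array(labels)\n        self.centroids_ = np.vstack([X[labels == c].mean(axis=0) for c in self.classes_])\n        # pre-normalize so that a dot product equals cosine similarity\n        self.centroids_ /= np.linalg.norm(self.centroids_, axis=1, keepdims=True) + 1e-12\n        return self\n\n    def predict(self, texts):\n        X = self.vectorizer.transform(texts).toarray()\n        X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12\n        return [self.classes_[i] for i in (X @ self.centroids_.T).argmax(axis=1)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Settings",

"sec_num": "4.2"

},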
|
{ |
|
"text": "For BERT, we use DistilBERT implementation by HuggingFace 12 , a model which is both smaller and faster than the original BERT-base model. We use a single hidden layer with 256 units for the feed-forward classification head. The model is implemented in PyTorch and optimized with Adam optimizer with a learning rate of 0.01.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "4.2" |
|
}, |
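
{

"text": "The frozen-encoder setup can be sketched with the HuggingFace transformers and PyTorch APIs as below; the snippet shows only the model definition and optimizer, leaving out batch tokenization and the training loop.\n\nimport torch\nimport torch.nn as nn\nfrom transformers import DistilBertModel, DistilBertTokenizerFast\n\nclass FrozenDistilBertClassifier(nn.Module):\n    def __init__(self, num_classes, hidden=256):\n        super().__init__()\n        self.encoder = DistilBertModel.from_pretrained('distilbert-base-uncased')\n        for p in self.encoder.parameters():\n            p.requires_grad = False  # freeze the pre-trained encoder\n        self.head = nn.Sequential(nn.Linear(self.encoder.config.dim, hidden),\n                                  nn.ReLU(),\n                                  nn.Linear(hidden, num_classes))\n\n    def forward(self, input_ids, attention_mask):\n        with torch.no_grad():\n            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)\n        cls = out.last_hidden_state[:, 0]  # first-token representation\n        return self.head(cls)\n\ntokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')\nmodel = FrozenDistilBertClassifier(num_classes=22)\noptimizer = torch.optim.Adam(model.head.parameters(), lr=0.01)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Settings",

"sec_num": "4.2"

},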
|
{ |
|
"text": "10 https://neo4j.com 11 https://scikit-learn.org 12 https://huggingface.co/transformers/ model_doc/distilbert.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "contextual-eval-dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compare wiki2cat with the following baselines:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "contextual-eval-dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Keyword voting (kw voting): predicts the category whose name occurs most frequently in the input document. If none of the category names is present, the model predicts a random label. \u2022 Dataless (Chang et al., 2008) : maps the input document and the category name into the same semantic space representing Wikipedia concepts using Explicit Semantic Analysis (ESA) (Gabrilovich et al., 2007) . \u2022 Doc2vec (Le and Mikolov, 2014) : similar to the Dataless model. Instead of using ESA, it uses doc2vec to generate the document and category vector. \u2022 STM (Li et al., 2018) : seed-guided topic model. The state-of-the-art model on coarsegrained contextual classification. Underlying, STM calculates each word's co-occurrence and uses it to \"expand\" the knowledge beyond the given seed words. For coarse-grained classification, STM used hand-curated seed words while STM,S label used category names as seed words. Both were trained by Jin et al. (2020) on a private in-domain dataset. We also trained STM using our Wikipedia dataset, referred to as STM,D wiki . For finegrained classification, we report only the result of STM,S label since no previously published seed words are available. Keyword voting and Dataless do not require any training document. Both Doc2vec and STM require unlabeled training corpus. We copy the coarsegrained classification result for Doc2vec, STM, and STM,S label from Jin et al. (2020) . For fine-grained classification, we train Doc2vec and STM,S label using the same set of Wikipedia documents as in wiki2cat.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 217, |
|
"text": "(Chang et al., 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 392, |
|
"text": "(Gabrilovich et al., 2007)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 427, |
|
"text": "(Le and Mikolov, 2014)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 568, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1394, |
|
"end": 1411, |
|
"text": "Jin et al. (2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "contextual-eval-dataset", |
|
"sec_num": null |
|
}, |
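
{

"text": "The keyword-voting baseline, for instance, amounts to a few lines; the sketch below counts occurrences of each category name in the document and backs off to a random label, mirroring the description above.\n\nimport random\n\ndef keyword_vote(document, category_names, rng=random):\n    text = document.lower()\n    counts = {c: text.count(c.lower()) for c in category_names}\n    best = max(counts, key=counts.get)\n    return best if counts[best] > 0 else rng.choice(category_names)\n\nprint(keyword_vote('new coupe and hatchback models this year', ['Coupe', 'Hatchback', 'Travel']))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "contextual-eval-dataset",

"sec_num": null

},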
|
{ |
|
"text": "We present the performance of various models on nc-v2 and browsing dataset in Table 1 . We can observe that wiki2cat using SVM as the learning algorithm outperformed Dataless and Doc2vec baseline. However, it did not perform as well as STM. The STM model was trained using a list of around 30 carefully chosen keywords for each category. It also used in-domain unlabeled documents during training, which we do not use. Jin et al. (2020) demonstrated that the choice of seed keywords has a significant impact on the model's accuracy. STM,S label is the result of STM using only unigrams in the category name as seed keywords. Despite using the same learning algorithm as STM, its performance was much worse than using hand-picked seed words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 436, |
|
"text": "Jin et al. (2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result of Coarse-Grained Contextual Classification", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To investigate the contribution of the in-domain unlabeled document to STM's superior performance, we trained an STM model with the manually-curated keywords in Jin et al. (2020) and the Wikipedia dataset we used to train wiki2cat (denoted as STM,D wiki ). There is a noticeable decrease in performance in STM,D wiki without indomain unlabeled documents. It underperformed w2c svm on nc-v2 dataset and outperformed it on browsing dataset. w2c centroid performed slightly better than w2c svm on the browsing dataset but worse on the nc-v2 dataset. Surprisingly, BERT did not perform as well as the other two much simpler models. We conjecture there are two possible causes. Firstly, BERT has a limitation of sequence length (maximum 512 words). The average sequence length of news-crawl-v2 and browsing datasets are 1,470 and 350 words. Incidentally, there was a more substantial performance gap between BERT and SVM on the news-crawl-v2 dataset. Secondly, our training corpus consists of only Wikipedia articles, while the model was applied to another domain. Therefore, the contextual information that BERT captured may be irrelevant or even counterproductive. We leave a more in-depth analysis to future work and adhere to the SVM and Centroid model hereafter. We now turn our attention to the impact of different graph labeling algorithms on the final classification accuracy. We compare our graph labeling method introduced in Section 3.2 with three methods mentioned in previous work, namely labeling only immediate child pages (child), labeling all descendant pages (descendant), assigning the label with shortest distance (min-dist) as well as another baseline removing the pruning step from our method (no-pruning). We use an SVM model with the same hyperparameters as w2c svm . Their performance is shown in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 178, |
|
"text": "Jin et al. (2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1817, |
|
"end": 1824, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result of Coarse-Grained Contextual Classification", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Using only the immediate child pages led to poor performance. Firstly, it limited the number of training documents. Some categories have only a dozen of immediate child pages. Secondly, the authors of Wikipedia often prefer to assign pages to specific categories instead of general categories. They assign a page to a general category only when it is ambiguous. Despite previous work in Wikipedia content analysis advocated using shortest distance to assign the topic to articles (Kittur et al., 2009; Farina et al., 2011) , we did not observe a substantial improvement using shortest distance over using all descendant pages. Our graph labeling method outperformed all baselines, including its modified version without pruning. Table 3 presents the result on fine-grained classification. We notice a performance difference on the full and sample dataset. However, the relative performance of various models on the two datasets remains consistent. A first observation is that the keyword voting baseline performed very poorly, having 7.5-10.8% accuracy. It shows that the category name itself is not enough to capture the semantics. E.g., the category \"Travel > South America\" does not match a document about traveling in Rio de Janeiro or Buenos Aires but will falsely match content about \"South Korea\" or \"United States of America\". Dataless and STM outperformed the keyword voting baseline by a large margin. However, wiki2cat is clearly the winner, outperforming these baselines by 5-10%. It demonstrated that the automatically labeled documents are helpful for the more challenging fine-grained classification task where categories are more semantically similar and harder to be specified with a handful of keywords.", |
|
"cite_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 501, |
|
"text": "(Kittur et al., 2009;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 522, |
|
"text": "Farina et al., 2011)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 729, |
|
"end": 736, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Impact of Graph Labeling Algorithms", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We introduced wiki2cat, a simple framework to bootstrap large-scale fine-grained text classifiers from Wikipedia without having to label any document manually. The method was benchmarked on both coarse-grained and fine-grained contextual advertising datasets and achieved competitive performance against various baselines. It performed especially well on fine-grained classification, which both is more challenging and requires more manual labeling in a fully-supervised setting. As an ongoing effort, we are exploring using unlabeled in-domain documents for domain adaptation to achieve better accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://www.iab.com/guidelines/ taxonomy/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://lucene.apache.org3 We use Jaro-Winkler string similarity with a threshold of 0.9 to automatically map IAB categories to Wikipedia categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We construct the category graph using the \"subcat\" (subcategory) relation in the Wikipedia dump. The graph contains both category nodes and page nodes. Pages all appear as leaf nodes while category nodes can be either internal or leaf nodes.5 https://en.wikipedia.org/wiki/Truck_ Stop_Women", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the original dataset without sampling for the centroid classifier since it is not affected by label imbalance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/YipingNUS/ nle-supplementary-dataset 8 https://developers.google.com/ custom-search/ 9 https://github.com/YipingNUS/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "YJ was supported by the scholarship from 'The 100th Anniversary Chulalongkorn University Fund for Doctoral Scholarship'. We thank anonymous reviewers for their valuable feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Visualizing wikipedia for interactive exploration", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Bekkerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Donin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of KDD 2017 Workshop on Interactive Data Exploration and Analytics (IDEA17)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Bekkerman and Olga Donin. 2017. Visualizing wikipedia for interactive exploration. In Proceed- ings of KDD 2017 Workshop on Interactive Data Exploration and Analytics (IDEA17), Halifax, Nova Scotia, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A semantic approach to contextual advertising", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Broder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Fontoura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vanja", |
|
"middle": [], |
|
"last": "Josifovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 30th International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "559--566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei Broder, Marcus Fontoura, Vanja Josifovski, and Lance Riedel. 2007. A semantic approach to contex- tual advertising. In Proceedings of the 30th Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, pages 559-566.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Importance of semantic representation: Dataless classification", |
|
"authors": [ |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lev-Arie", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "830--835", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ming-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic rep- resentation: Dataless classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 2, pages 830-835.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Natcat: Weakly supervised text classification with naturally annotated datasets", |
|
"authors": [ |
|
{ |
|
"first": "Zewei", |
|
"middle": [], |
|
"last": "Chu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Stratos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2009.14335" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zewei Chu, Karl Stratos, and Kevin Gimpel. 2020. Natcat: Weakly supervised text classification with naturally annotated datasets. arXiv preprint arXiv:2009.14335.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatically assigning wikipedia articles to macrocategories", |
|
"authors": [ |
|
{ |
|
"first": "Jacopo", |
|
"middle": [], |
|
"last": "Farina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riccardo", |
|
"middle": [], |
|
"last": "Tasso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Laniado", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of Hypertext", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacopo Farina, Riccardo Tasso, and David Laniado. 2011. Automatically assigning wikipedia articles to macrocategories. In Proceedings of Hypertext.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Overcoming the brittleness bottleneck using wikipedia: Enhancing text categorization with encyclopedic knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaul", |
|
"middle": [], |
|
"last": "Markovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1301--1306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2006. Overcoming the brittleness bottleneck using wikipedia: Enhancing text categorization with en- cyclopedic knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 6, pages 1301-1306.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Computing semantic relatedness using wikipediabased explicit semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaul", |
|
"middle": [], |
|
"last": "Markovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Twentieth International Joint Conference on Artificial Intelligence", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "1606--1611", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeniy Gabrilovich, Shaul Markovitch, et al. 2007. Computing semantic relatedness using wikipedia- based explicit semantic analysis. In Proceedings of the Twentieth International Joint Conference on Ar- tificial Intelligence, volume 7, pages 1606-1611.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Combining lightly-supervised text classification models for accurate contextual advertising", |
|
"authors": [ |
|
{ |
|
"first": "Yiping", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dittaya", |
|
"middle": [], |
|
"last": "Wanvarie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phu", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "545--554", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiping Jin, Dittaya Wanvarie, and Phu Le. 2017. Com- bining lightly-supervised text classification models for accurate contextual advertising. In Proceedings of the Eighth International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 545-554.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning from noisy out-of-domain corpus using dataless classification. Natural Language Engineering", |
|
"authors": [ |
|
{ |
|
"first": "Yiping", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dittaya", |
|
"middle": [], |
|
"last": "Wanvarie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Phu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiping Jin, Dittaya Wanvarie, and Phu T. V. Le. 2020. Learning from noisy out-of-domain corpus using dataless classification. Natural Language Engineer- ing.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Incorporating word embeddings into open directory project based large-scale classification", |
|
"authors": [ |
|
{ |
|
"first": "Kang-Min", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aliyeva", |
|
"middle": [], |
|
"last": "Dinara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byung-Ju", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sangkeun", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "376--388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kang-Min Kim, Aliyeva Dinara, Byung-Ju Choi, and SangKeun Lee. 2018. Incorporating word embed- dings into open directory project based large-scale classification. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Min- ing, pages 376-388. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "What's in wikipedia? mapping topics and conflict using socially annotated category structure", |
|
"authors": [ |
|
{ |
|
"first": "Aniket", |
|
"middle": [], |
|
"last": "Kittur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bongwon", |
|
"middle": [], |
|
"last": "Chi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Suh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1509--1512", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aniket Kittur, Ed H Chi, and Bongwon Suh. 2009. What's in wikipedia? mapping topics and conflict using socially annotated category structure. In Pro- ceedings of the SIGCHI Conference on Human Fac- tors in Computing Systems, pages 1509-1512.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Newsweeder: Learning to filter netnews", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Twelfth International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "331--339", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ken Lang. 1995. Newsweeder: Learning to filter net- news. In Proceedings of the Twelfth International Conference on Machine Learning, pages 331-339. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1188--1196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the International Conference on Machine Learning, pages 1188-1196.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semantic contextual advertising based on the open directory project", |
|
"authors": [ |
|
{ |
|
"first": "Jung-Hyun", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jongwoo", |
|
"middle": [], |
|
"last": "Ha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin-Yong", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sangkeun", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACM Transactions on the Web (TWEB)", |
|
"volume": "7", |
|
"issue": "4", |
|
"pages": "1--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jung-Hyun Lee, Jongwoo Ha, Jin-Yong Jung, and Sangkeun Lee. 2013. Semantic contextual advertis- ing based on the open directory project. ACM Trans- actions on the Web (TWEB), 7(4):1-22.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Seed-guided topic model for document filtering and classification", |
|
"authors": [ |
|
{ |
|
"first": "Chenliang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiqian", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aixin", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zongyang", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "1--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenliang Li, Shiqian Chen, Jian Xing, Aixin Sun, and Zongyang Ma. 2018. Seed-guided topic model for document filtering and classification. ACM Transac- tions on Information Systems, 37(1):1-37.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The sum of all human knowledge: A systematic review of scholarly research on the content of wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Mostafa", |
|
"middle": [], |
|
"last": "Mesgari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chitu", |
|
"middle": [], |
|
"last": "Okoli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamad", |
|
"middle": [], |
|
"last": "Mehdi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arto", |
|
"middle": [], |
|
"last": "Finn \u00c5rup Nielsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lanam\u00e4ki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of the Association for Information Science and Technology", |
|
"volume": "66", |
|
"issue": "2", |
|
"pages": "219--245", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mostafa Mesgari, Chitu Okoli, Mohamad Mehdi, Finn \u00c5rup Nielsen, and Arto Lanam\u00e4ki. 2015. The sum of all human knowledge: A systematic review of scholarly research on the content of wikipedia. Journal of the Association for Information Science and Technology, 66(2):219-245.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Chimera: Large-scale classification using machine learning, rules, and crowdsourcing", |
|
"authors": [ |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Narasimhan", |
|
"middle": [], |
|
"last": "Rampalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anhai", |
|
"middle": [], |
|
"last": "Doan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the VLDB Endowment", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "1529--1540", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chong Sun, Narasimhan Rampalli, Frank Yang, and AnHai Doan. 2014. Chimera: Large-scale classi- fication using machine learning, rules, and crowd- sourcing. Proceedings of the VLDB Endowment, 7(13):1529-1540.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Baselines and bigrams: Simple, good sentiment and topic classification", |
|
"authors": [ |
|
{ |
|
"first": "Sida", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "90--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sida Wang and Christopher D Manning. 2012. Base- lines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Lin- guistics: Short Papers-Volume 2, pages 90-94. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Character-level convolutional networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "649--657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Proceedings of Advances in Neural In- formation Processing Systems, pages 649-657.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improving semisupervised text classification by using wikipedia knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huaizhong", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huazhong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongming", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the International Conference on Web-Age Information Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Zhang, Huaizhong Lin, Pengfei Li, Huazhong Wang, and Dongming Lu. 2013. Improving semi- supervised text classification by using wikipedia knowledge. In Proceedings of the International Conference on Web-Age Information Management, pages 25-36. Springer.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Snippet of IAB Content CategorizationTaxonomy.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Blue: # of automatically labeled Wikipedia articles per fine-grained category in log-10 scale. (mean=2.95, std=0.86). Orange: # of articles per fine-grained category in the full test set in log-10 scale (mean=1.94, std=0.78).", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Number of fine-grained categories per coarsegrained category in our fine-grained contextual classification evaluation dataset.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Performance of the SVM model trained with datasets labeled using different labeling algorithms." |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Performance of various models on IAB finegrained classification datasets. * indicates a statistically significant improvement from baselines with p-value<0.05 using single-sided sample T-test." |
|
} |
|
} |
|
} |
|
} |