{
"paper_id": "U17-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:23.416243Z"
},
"title": "On Extending Neural Networks with Loss Ensembles for Text Classification",
"authors": [
{
"first": "Hamideh",
"middle": [],
"last": "Hajiabadi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ferdowsi University of Mashhad (FUM) Mashhad",
"location": {
"country": "Iran"
}
},
"email": "[email protected]"
},
{
"first": "Diego",
"middle": [],
"last": "Molla-Aliod",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Macquarie University Sydney",
"location": {
"region": "New South Wales",
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Reza",
"middle": [],
"last": "Monsefi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Ensemble techniques are powerful approaches that combine several weak learners to build a stronger one. As a meta learning framework, ensemble techniques can easily be applied to many machine learning techniques. In this paper we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned within the training phase through the gradient propagation optimization method of the neural network. The approach is evaluated on several text classification datasets. We also evaluate its performance in various environments with several degrees of label noise. Experimental results indicate an improvement of the results and strong resilience against label noise in comparison with other methods.",
"pdf_parse": {
"paper_id": "U17-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "Ensemble techniques are powerful approaches that combine several weak learners to build a stronger one. As a meta learning framework, ensemble techniques can easily be applied to many machine learning techniques. In this paper we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned within the training phase through the gradient propagation optimization method of the neural network. The approach is evaluated on several text classification datasets. We also evaluate its performance in various environments with several degrees of label noise. Experimental results indicate an improvement of the results and strong resilience against label noise in comparison with other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance (Mannor and Meir, 2001) . It has been proved that ensemble methods can boost weak learners whose accuracies are slightly better than random guessing into arbitrarily accurate strong learners (Bai et al., 2014; Zhang et al., 2016) . When it could not be possible to directly design a strong complicated learning system, ensemble methods would be a possible solution. In this paper, we are inspired by ensemble techniques to combine several weak loss functions in order to design a stronger ensemble loss function for text classification.",
"cite_spans": [
{
"start": 126,
"end": 149,
"text": "(Mannor and Meir, 2001)",
"ref_id": "BIBREF8"
},
{
"start": 317,
"end": 335,
"text": "(Bai et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 336,
"end": 355,
"text": "Zhang et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we will focus on multi-class classification where the class to predict is encoded as a vector y with the one-hot encoding of the target label, and the output of a classifier\u0177 = f (x; \u2713) is a vector of probability estimates of each label given input sample x and training parameters \u2713. Then, a loss function L(y,\u0177) is a positive function that measures the error of estimation (Steinwart and Christmann, 2008) . Different loss functions have different properties, and some well-known loss functions are shown in Table 1 . Different loss functions lead to different Optimum Bayes Estimators having their own unique characteristics. So, in each environment, picking a specific loss function will affect performance significantly (Xiao et al., 2017; Zhao et al., 2010) .",
"cite_spans": [
{
"start": 389,
"end": 421,
"text": "(Steinwart and Christmann, 2008)",
"ref_id": "BIBREF16"
},
{
"start": 739,
"end": 758,
"text": "(Xiao et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 759,
"end": 777,
"text": "Zhao et al., 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 524,
"end": 531,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose an approach for combining loss functions which performs substantially better especially when facing annotation noise. The framework is designed as an extension to regular neural networks, where the loss function is replaced with an ensemble of loss functions, and the ensemble weights are learned as part of the gradient propagation process. We implement and evaluate our proposed algorithm on several text classification datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. An overview of several loss functions for classification is briefly introduced in Section 2. The proposed framework and the proposed algorithm are explained in Section 3. Section 4 contains experimental results on classifying several text datasets. The paper is concluded in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A typical machine learning problem can be reduced to an expected loss function minimization problem (Bartlett et al., 2006; Painsky and Rosset, 2016) . Rosasco et al. (2004) studied the impact of choosing different loss functions from the viewpoint of statistical learning theory. In this section, Name of loss function L(y,\u0177) Zero-One (Xiao et al., 2017) L",
"cite_spans": [
{
"start": 100,
"end": 123,
"text": "(Bartlett et al., 2006;",
"ref_id": "BIBREF1"
},
{
"start": 124,
"end": 149,
"text": "Painsky and Rosset, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 152,
"end": 173,
"text": "Rosasco et al. (2004)",
"ref_id": "BIBREF13"
},
{
"start": 336,
"end": 355,
"text": "(Xiao et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Table 1: Several well-known loss functions, where z = y · ŷ ∈ R. Zero-One (Xiao et al., 2017): L_0-1 = 0 if z ≥ 0, and 1 if z < 0. Hinge Loss (Masnadi-Shirazi and Vasconcelos, 2009; Steinwart, 2002): L_H = 0 if z ≥ 1, and max(0, 1 - z) if z < 1. Smoothed Hinge (Zhao et al., 2010): L_SH = 0 if z ≥ 1, (1 - z)^2 / 2 if 0 ≤ z < 1, and max(0, 1 - z) if z ≤ 0. Square Loss: L_S = ||y - ŷ||_2^2. Correntropy Loss (Liu et al., 2007, 2006): L_C = exp(-||y - ŷ||_2^2 / (2σ^2)). Cross-Entropy Loss (Masnadi-Shirazi et al., 2010): L_CE = log(1 + exp(-z)). Absolute Loss: L_A = ||y - ŷ||_1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In the literature, loss functions are divided into margin-based and distance-based categories. Margin-based loss functions are often used for classification purposes (Steinwart and Christmann, 2008; Khan et al., 2013; Chen et al., 2017 ). Since we evaluate our work on classification of text datasets, in this paper we focus on marginbased loss functions.",
"cite_spans": [
{
"start": 166,
"end": 198,
"text": "(Steinwart and Christmann, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 199,
"end": 217,
"text": "Khan et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 218,
"end": 235,
"text": "Chen et al., 2017",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A margin-based loss function is defined as a penalty function L(y,\u0177) based in a margin z = y \u2022\u0177. In any given application, some marginbased loss functions might have several disadvantages and advantages and we could not certainly tell which loss function is preferable in general. For example, consider the Zero-One loss function which penalizes all the misclassified samples with the constant value of 1 and the correctly classified samples with no loss. This loss function would result in a robust classifier when facing outliers but it would have a terrible performance in an application with margin focus (Zhao et al., 2010) .",
"cite_spans": [
{
"start": 609,
"end": 628,
"text": "(Zhao et al., 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A loss function is margin enforcing if minimization of the expected loss function leads to a classifier enhancing the margin (Masnadi-Shirazi and Vasconcelos, 2009) . Learning a classifier with an acceptable margin would increase generalization. Enhancing the margin would be possible if the loss function returns a small amount of loss for the correct samples close to the classification hyperplane. For example, Zero-One does not penalize correct samples at all and therefore it does not enhance the margin, while Hinge Loss is a margin enhancing loss function.",
"cite_spans": [
{
"start": 125,
"end": 164,
"text": "(Masnadi-Shirazi and Vasconcelos, 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
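{
"text": "To make the definitions in Table 1 concrete, the following is a brief illustrative sketch (ours, not code from the paper) of the margin-based losses as Python functions of the margin z = y · ŷ, together with the two distance-based losses written on the error e = y - ŷ; the function names and the kernel width sigma are our own choices.
import numpy as np

# Margin-based losses from Table 1, written as functions of the margin z.
def zero_one_loss(z):
    return np.where(z >= 0, 0.0, 1.0)

def hinge_loss(z):
    return np.maximum(0.0, 1.0 - z)

def smoothed_hinge_loss(z):
    return np.where(z >= 1.0, 0.0,
                    np.where(z >= 0.0, 0.5 * (1.0 - z) ** 2, 1.0 - z))

def cross_entropy_loss(z):
    return np.log1p(np.exp(-z))

# Distance-based losses from Table 1, written on the error e = y - y_hat.
def square_loss(e):
    return np.sum(e ** 2, axis=-1)

def correntropy(e, sigma=1.0):  # sigma is an assumed kernel width, not given in the table
    return np.exp(-np.sum(e ** 2, axis=-1) / (2.0 * sigma ** 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},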
{
"text": "The general idea of ensemble techniques is to combine different expert ideas aiming at boosting the accuracy based on enhanced decision making. Predominantly, the underlying idea is that the decision made by a committee of experts is more reliable than the decision of one expert alone (Bai et al., 2014; Mannor and Meir, 2001) . Ensemble techniques as a framework have been applied to a variety of real problems and better results have been achieved in comparison to using a single expert.",
"cite_spans": [
{
"start": 286,
"end": 304,
"text": "(Bai et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 305,
"end": 327,
"text": "Mannor and Meir, 2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Having considered the importance of the loss function in learning algorithms, in order to reach a better learning system, we are inspired by ensemble techniques to design an ensemble loss function. The weight applied to each weak loss function is tuned through the gradient propagation optimization of a neural network working on a text classification dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Other works (Shi et al., 2015; BenTaieb et al., 2016) have combined two loss functions where the weights are specified as a hyperparameter set prior to the learning process (e.g. during a fine-tuning process with crossvalidation). In this paper, we combine more than two functions and the hyperparameter is not set a-priory but it is learned during the training process.",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "(Shi et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 31,
"end": 53,
"text": "BenTaieb et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Let (x, y) be a sample where x 2 R N is the input and y 2 {0, 1} C is the one-hot encoding of the label (C is the number of classes). Let \u2713 be the parameters of a neural network classifier with a top softmax layer so that the probability estimates are\u0177 = sof tmax(f (x; \u2713)). Let {L i (y,\u0177)} M i=1 denote M weak loss functions. In addition to finding the optimal \u2713, the goal is to find the best weights , { 1 , 2 , . . . , M }, to combine M weak loss functions in order to generate a better application-tailored loss function. We need to add a further constraint to avoid yielding near zero values for all i weights. The proposed ensemble loss function is defined as below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = M X j=1 j L j (y,\u0177), M X j=1 j = 1",
"eq_num": "(1)"
}
],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "The optimization problem could be defined as follows, given T training samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "minimize \u2713, T X i=1 M X j=1 j L j (y i ,\u0177 i ) s.t. M X j=1 j = 1, i 0",
"eq_num": "(2)"
}
],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "To make the optimization algorithm simpler, we use 2 i instead of i , so the second constraint i 0 can be omitted. We then incorporate the constraint as a regularization term based on the concept of Augmented Lagrangian. The modified objective function using Augmented Lagrangian is presented as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "minimize \u2713, T X i=1 M X j=1 2 j L j (y i ,\u0177 i )+ \u2318 1 ( M X j=1 2 j 1) + \u2318 2 ( M X j=1 2 j 1) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "(3) Note that the amount of \u2318 2 must be significantly greater that \u2318 1 (Nocedal and Wright, 2006) . The first and the second terms of the objective function cause 2 i values to approach zero but the third term satisfies P M j=1 2 j = 1. Figure 1 illustrates the framework of the proposed approach with the dashed box representing the contribution of this paper. In the training phase, the weight of each weak loss function is trained through the gradient propagation optimization method. The accuracy of the model is calculated in a test phase not shown in the figure. ",
"cite_spans": [
{
"start": 71,
"end": 97,
"text": "(Nocedal and Wright, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
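{
"text": "As an illustration only (not the authors' code), here is a minimal TensorFlow 2 sketch of the objective in (3): the weights λ are a trainable variable that is squared inside the loss so they stay non-negative, and the augmented-Lagrangian terms with η_1 and η_2 are added to the weighted sum of weak losses. The variable names, the TF2 style, and the value 1e-3 as a 'near zero' η_1 are our assumptions; the paper only states that η_2 = 200.
import tensorflow as tf

M = 3  # number of weak loss functions
lam = tf.Variable(tf.ones([M]) / M)  # ensemble weights (lambda), trained alongside theta

def ensemble_loss(weak_losses, lam, eta1=1e-3, eta2=200.0):
    # weak_losses: tensor of shape [M] holding the mean value of each weak loss on a batch
    lam2 = tf.square(lam)             # lambda_j^2, so the non-negativity constraint is implicit
    weighted = tf.reduce_sum(lam2 * weak_losses)
    c = tf.reduce_sum(lam2) - 1.0     # constraint residual, driven towards zero by the penalty
    return weighted + eta1 * c + eta2 * tf.square(c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},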
{
"text": "We have applied the proposed ensemble loss function to several text datasets. Table 2 provides a brief description of the datasets. To reach a better ensemble loss function we choose three loss functions with different approaches in facing with outliers, as weak loss functions: Correntropy Loss which does not assign a high weight to samples with big errors, Hinge Loss which penalizes linearly and Cross-entropy Loss function which highly penalizes the samples whose predictions are far from the targets. We compared results with 3 loss functions which are widely used in neural networks: Cross-entropy, Square Loss, and Hinge Loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "We picked \u2318 1 near zero and \u2318 2 = 200 in (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "Since this work is a proof of concept, the neural networks of each application are simply a softmax of the linear combination of input features plus bias:\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "= softmax(x \u2022 W + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "where the input features x are the word frequencies in the input text. Thus, \u2713 in our notation is composed of W and b. We use Python and its Ten-sorFlow package for implementing the proposed approach. The results are shown in Table 3 . The table compares the results of using individual loss functions and the ensemble loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 233,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
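{
"text": "For illustration, a minimal TensorFlow 2 training-step sketch of this setup (our reconstruction, not the original code): a linear softmax classifier whose parameters W, b and the loss weights λ are all updated by the same gradient step. The feature and class sizes, the optimizer, the learning rate, and the use of the correntropy-induced form 1 - exp(·) (so that the loss decreases as predictions improve) are our assumptions.
import tensorflow as tf

num_features, num_classes = 10000, 20       # assumed sizes, for illustration only
W = tf.Variable(tf.zeros([num_features, num_classes]))
b = tf.Variable(tf.zeros([num_classes]))
lam = tf.Variable(tf.ones([3]) / 3.0)       # weights of the three weak losses
optimizer = tf.keras.optimizers.Adam(0.01)  # the paper does not name the optimizer

def weak_losses(y, y_hat):
    z = tf.reduce_sum(y * y_hat, axis=-1)             # margin-like score of the true class
    hinge = tf.reduce_mean(tf.maximum(0.0, 1.0 - z))
    xent = tf.reduce_mean(tf.math.log1p(tf.exp(-z)))  # log(1 + exp(-z)), as in Table 1
    corr = tf.reduce_mean(1.0 - tf.exp(-tf.reduce_sum((y - y_hat) ** 2, axis=-1) / 2.0))
    return tf.stack([hinge, xent, corr])

def train_step(x, y, eta1=1e-3, eta2=200.0):
    with tf.GradientTape() as tape:
        y_hat = tf.nn.softmax(tf.matmul(x, W) + b)
        lam2 = tf.square(lam)
        c = tf.reduce_sum(lam2) - 1.0
        loss = tf.reduce_sum(lam2 * weak_losses(y, y_hat)) + eta1 * c + eta2 * c ** 2
    grads = tape.gradient(loss, [W, b, lam])
    optimizer.apply_gradients(zip(grads, [W, b, lam]))
    return loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},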
{
"text": "This data set is a collection of 20,000 messages,collected from 20 different net-news newsgroups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "20-newsgroup",
"sec_num": null
},
{
"text": "The NLTK corpus moviereviews data set has the reviews, and they are labeled already as positive or negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Movie-reviews in corpus",
"sec_num": null
},
{
"text": "It is a collection of sample emails (i.e. a text corpus). In this corpus, each email has already been labeled as Spam or Ham.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Email-Classification (TREC)",
"sec_num": null
},
{
"text": "The data was originally collected and labeled by Carnegie Group, Inc. and Reuters, Ltd. in the course of developing the CON-STRUE text categorization system We have also compared the robustness of the proposed loss function with the use of individual loss functions. In particular, we add label noise by randomly modifying the target label in the training samples, and keep the evaluation set intact. We conducted experiments with 10% and 30% of noise, where e.g. 30% of noise means randomly changing 30% of the labels in the training data. Tables 4 and 5 show the results, with the best results shown in boldface. We can observe that, in virtually all of the experiments, the ensemble loss is at least as good as the individual losses, and in only two cases the loss is (slightly) worse. And, in general, the ensemble loss performed comparatively better as we increased the label noise. We have used a very simple neural architecture in this work but in principle this method could be used for systems that use any neural networks. In future work we will explore the integration of more complex neural networks such as those using convolutions and recurrent networks. We also plan to study the application of this method to other tasks such as sequence labeling (e.g. for NER and PoS tagging). Another possible extension could focus on handling sparseness by adding a regularization term.",
"cite_spans": [],
"ref_spans": [
{
"start": 541,
"end": 555,
"text": "Tables 4 and 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Reuters-21578",
"sec_num": null
}
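,
{
"text": "A sketch of how such label noise can be injected (our reconstruction; whether a corrupted label is forced to differ from the original one is our assumption, as the paper only says labels are randomly modified):
import numpy as np

def add_label_noise(labels, noise_rate, num_classes, seed=0):
    # Randomly replace a fraction of the training labels with a different class;
    # the evaluation set is left untouched.
    rng = np.random.default_rng(seed)
    noisy = np.array(labels)
    n = len(noisy)
    idx = rng.choice(n, size=int(noise_rate * n), replace=False)
    for i in idx:
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy

# Example: corrupt 30% of the training labels, as in the 30%-noise experiments.
# noisy_train_labels = add_label_noise(train_labels, 0.30, num_classes=20)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
}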
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Bayesian framework for online classifier ensemble",
"authors": [
{
"first": "Qinxun",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Sclaroff",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14)",
"volume": "",
"issue": "",
"pages": "1584--1592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qinxun Bai, Henry Lam, and Stan Sclaroff. 2014. A Bayesian framework for online classifier ensemble. In Proceedings of the 31st International Conference on Machine Learning (ICML-14). pages 1584-1592.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Convexity, classification, and risk bounds",
"authors": [
{
"first": "Peter",
"middle": [
"L"
],
"last": "Bartlett",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"D"
],
"last": "McAuliffe",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "473",
"pages": "138--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe. 2006. Convexity, classification, and risk bounds. Journal of the American Statistical Associ- ation 101(473):138-156.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multi-loss convolutional networks for gland analysis in microscopy",
"authors": [
{
"first": "A\u00efcha",
"middle": [],
"last": "Bentaieb",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Ghassan",
"middle": [],
"last": "Hamarneh",
"suffix": ""
}
],
"year": 2016,
"venue": "Biomedical Imaging (ISBI 2016)",
"volume": "",
"issue": "",
"pages": "642--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A\u00efcha BenTaieb, Jeremy Kawahara, and Ghassan Hamarneh. 2016. Multi-loss convolutional networks for gland analysis in microscopy. In Biomedical Imaging (ISBI 2016). pages 642-645.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Kernel risksensitive loss: Definition, properties and application to robust adaptive filtering",
"authors": [
{
"first": "Badong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Haiquan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Nanning",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jose C",
"middle": [],
"last": "Principe",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Transactions on Signal Processing",
"volume": "65",
"issue": "11",
"pages": "2888--2901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Badong Chen, Lei Xing, Bin Xu, Haiquan Zhao, Nan- ning Zheng, and Jose C Principe. 2017. Kernel risk- sensitive loss: Definition, properties and application to robust adaptive filtering. IEEE Transactions on Signal Processing 65(11):2888-2901.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semi-supervised image classification with huberized Laplacian support vector machines",
"authors": [
{
"first": "Inayatullah",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"M"
],
"last": "Roth",
"suffix": ""
},
{
"first": "Abdul",
"middle": [],
"last": "Bais",
"suffix": ""
},
{
"first": "Horst",
"middle": [],
"last": "Bischof",
"suffix": ""
}
],
"year": 2013,
"venue": "Emerging Technologies (ICET), 2013 IEEE 9th International Conference on",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inayatullah Khan, Peter M Roth, Abdul Bais, and Horst Bischof. 2013. Semi-supervised image classification with huberized Laplacian support vector machines. In Emerging Technologies (ICET), 2013 IEEE 9th International Conference on. IEEE, pages 1-6.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Correntropy: Properties and applications in non-Gaussian signal processing",
"authors": [
{
"first": "W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [
"P"
],
"last": "Pokharel",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Principe",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on Signal Processing",
"volume": "55",
"issue": "11",
"pages": "5286--5298",
"other_ids": {
"DOI": [
"10.1109/TSP.2007.896065"
]
},
"num": null,
"urls": [],
"raw_text": "W. Liu, P. P. Pokharel, and J. C. Principe. 2007. Correntropy: Properties and applications in non- Gaussian signal processing. IEEE Transac- tions on Signal Processing 55(11):5286-5298. https://doi.org/10.1109/TSP.2007.896065.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Correntropy: A localized similarity measure",
"authors": [
{
"first": "Weifeng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [
"P"
],
"last": "Pokharel",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Principe",
"suffix": ""
}
],
"year": 2006,
"venue": "The 2006 IEEE International Joint Conference on Neural Network Proceedings",
"volume": "",
"issue": "",
"pages": "4919--4924",
"other_ids": {
"DOI": [
"10.1109/IJCNN.2006.247192"
]
},
"num": null,
"urls": [],
"raw_text": "Weifeng Liu, P. P. Pokharel, and J. C. Principe. 2006. Correntropy: A localized similarity measure. In The 2006 IEEE International Joint Conference on Neural Network Proceedings. pages 4919-4924. https://doi.org/10.1109/IJCNN.2006.247192.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Weak learners and improved rates of convergence in boosting",
"authors": [
{
"first": "Shie",
"middle": [],
"last": "Mannor",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Meir",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "280--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shie Mannor and Ron Meir. 2001. Weak learners and improved rates of convergence in boosting. In Ad- vances in Neural Information Processing Systems. pages 280-286.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the design of robust classifiers for computer vision",
"authors": [
{
"first": "Hamed",
"middle": [],
"last": "Masnadi-Shirazi",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Mahadevan",
"suffix": ""
},
{
"first": "Nuno",
"middle": [],
"last": "Vasconcelos",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "779--786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamed Masnadi-Shirazi, Vijay Mahadevan, and Nuno Vasconcelos. 2010. On the design of robust clas- sifiers for computer vision. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Con- ference on. IEEE, pages 779-786.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost",
"authors": [
{
"first": "Hamed",
"middle": [],
"last": "Masnadi-Shirazi",
"suffix": ""
},
{
"first": "Nuno",
"middle": [],
"last": "Vasconcelos",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1049--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamed Masnadi-Shirazi and Nuno Vasconcelos. 2009. On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. In Advances in neural information processing systems. pages 1049-1056.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Penalty and augmented Lagrangian methods. Numerical Optimization pages",
"authors": [
{
"first": "Jorge",
"middle": [],
"last": "Nocedal",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"J"
],
"last": "Wright",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "497--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorge Nocedal and Stephen J Wright. 2006. Penalty and augmented Lagrangian methods. Numerical Optimization pages 497-528.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Isotonic modeling with non-differentiable loss functions with application to Lasso regularization",
"authors": [
{
"first": "Amichai",
"middle": [],
"last": "Painsky",
"suffix": ""
},
{
"first": "Saharon",
"middle": [],
"last": "Rosset",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "38",
"issue": "",
"pages": "308--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amichai Painsky and Saharon Rosset. 2016. Isotonic modeling with non-differentiable loss functions with application to Lasso regularization. IEEE transac- tions on pattern analysis and machine intelligence 38(2):308-321.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Are loss functions all the same?",
"authors": [
{
"first": "Lorenzo",
"middle": [],
"last": "Rosasco",
"suffix": ""
},
{
"first": "Ernesto",
"middle": [
"De"
],
"last": "Vito",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Caponnetto",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Piana",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Verri",
"suffix": ""
}
],
"year": 2004,
"venue": "Neural Computation",
"volume": "16",
"issue": "5",
"pages": "1063--1076",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorenzo Rosasco, Ernesto De Vito, Andrea Capon- netto, Michele Piana, and Alessandro Verri. 2004. Are loss functions all the same? Neural Computa- tion 16(5):1063-1076.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A hybrid loss for multiclass and structured prediction",
"authors": [
{
"first": "Qinfeng",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "Tiberio",
"middle": [],
"last": "Caetano",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "37",
"issue": "1",
"pages": "2--12",
"other_ids": {
"DOI": [
"10.1109/TPAMI.2014.2306414"
]
},
"num": null,
"urls": [],
"raw_text": "Qinfeng Shi, Mark Reid, Tiberio Caetano, An- ton Van Den Hengel, and Zhenhua Wang. 2015. A hybrid loss for multiclass and struc- tured prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence 37(1):2-12. https://doi.org/10.1109/TPAMI.2014.2306414.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Support vector machines are universally consistent",
"authors": [
{
"first": "Ingo",
"middle": [],
"last": "Steinwart",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Complexity",
"volume": "18",
"issue": "3",
"pages": "768--791",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingo Steinwart. 2002. Support vector machines are universally consistent. Journal of Complexity 18(3):768-791.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Support Vector Machines",
"authors": [
{
"first": "Ingo",
"middle": [],
"last": "Steinwart",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Christmann",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingo Steinwart and Andreas Christmann. 2008. Sup- port Vector Machines. Springer Science & Business Media.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ramp loss based robust one-class SVM",
"authors": [
{
"first": "Yingchao",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Huangang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenli",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Pattern Recognition Letters",
"volume": "85",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingchao Xiao, Huangang Wang, and Wenli Xu. 2017. Ramp loss based robust one-class SVM. Pattern Recognition Letters 85:15-20.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bayesian tracking fusion framework with online classifier ensemble for immersive visual applications",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Zhuo",
"suffix": ""
},
{
"first": "Yanning",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hanqiao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kangli",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2016,
"venue": "Multimedia Tools and Applications",
"volume": "75",
"issue": "9",
"pages": "5075--5092",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhang, Tao Zhuo, Yanning Zhang, Hanqiao Huang, and Kangli Chen. 2016. Bayesian track- ing fusion framework with online classifier ensem- ble for immersive visual applications. Multimedia Tools and Applications 75(9):5075-5092.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "From convex to nonconvex: A loss function analysis for binary classification",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Musa",
"middle": [],
"last": "Mammadov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Yearwood",
"suffix": ""
}
],
"year": 2010,
"venue": "Data Mining Workshops (ICDMW), 2010 IEEE International Conference on. IEEE",
"volume": "",
"issue": "",
"pages": "1281--1288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Zhao, Musa Mammadov, and John Yearwood. 2010. From convex to nonconvex: A loss func- tion analysis for binary classification. In Data Min- ing Workshops (ICDMW), 2010 IEEE International Conference on. IEEE, pages 1281-1288.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The proposed learning diagram"
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "Description of dataset",
"html": null,
"content": "<table><tr><td colspan=\"2\">Dataset Cross-</td><td colspan=\"3\">Hinge Square Ensemble</td></tr><tr><td/><td>entropy</td><td/><td/><td/></tr><tr><td>20-</td><td>0.80</td><td>0.69</td><td>0.82</td><td>0.85</td></tr><tr><td colspan=\"2\">newsgroups</td><td/><td/><td/></tr><tr><td>Movie-</td><td>0.83</td><td>0.81</td><td>0.85</td><td>0.83</td></tr><tr><td>review</td><td/><td/><td/><td/></tr><tr><td>Email-</td><td>0.88</td><td>0.78</td><td>0.96</td><td>0.97</td></tr><tr><td colspan=\"2\">Classification</td><td/><td/><td/></tr><tr><td>(TREC)</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Reuters 0.79</td><td>0.79</td><td>0.81</td><td>0.81</td></tr></table>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Accuracy",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Accuracy in data with 10% label noise The proposed loss function shows an improvement when compared with the use of wellknown individual loss functions. Furthermore, the approach is more robust against the presence of label noise. Moreover, according to our experiments, the gradient descent method quickly converged.",
"html": null,
"content": "<table><tr><td colspan=\"2\">5 Conclusion</td><td/><td/><td/></tr><tr><td colspan=\"5\">This paper proposed a new loss function based on</td></tr><tr><td colspan=\"5\">ensemble methods. This work focused on text</td></tr><tr><td colspan=\"5\">classification tasks and can be considered as an</td></tr><tr><td colspan=\"5\">initial attempt to explore the use of ensemble loss</td></tr><tr><td colspan=\"2\">functions. Dataset Cross-</td><td colspan=\"3\">Hinge Square Ensemble</td></tr><tr><td/><td>entropy</td><td/><td/><td/></tr><tr><td>20-</td><td>0.57</td><td>0.64</td><td>0.55</td><td>0.82</td></tr><tr><td colspan=\"2\">newsgroups</td><td/><td/><td/></tr><tr><td>movie-</td><td>0.55</td><td>0.54</td><td>0.55</td><td>0.6</td></tr><tr><td>review</td><td/><td/><td/><td/></tr><tr><td>Email-</td><td>0.80</td><td>0.46</td><td>0.81</td><td>0.93</td></tr><tr><td colspan=\"2\">Classification</td><td/><td/><td/></tr><tr><td>(TREC)</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Reuters 0.64</td><td>0.54</td><td>0.53</td><td>0.68</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Accuracy in data with 30% label noise",
"html": null,
"content": "<table/>"
}
}
}
}