|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:10:36.215788Z" |
|
}, |
|
"title": "Using the Past Knowledge to Improve Sentiment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper studies sentiment classification in the lifelong learning setting that incrementally learns a sequence of sentiment classification tasks. It proposes a new lifelong learning model (called L2PG) that can retain and selectively transfer the knowledge learned in the past to help learn the new task. A key innovation of this proposed model is a novel parameter-gate (p-gate) mechanism that regulates the flow or transfer of the previously learned knowledge to the new task. Specifically, it can selectively use the network parameters (which represent the retained knowledge gained from the previous tasks) to assist the learning of the new task t. Knowledge distillation is also employed in the process to preserve the past knowledge by approximating the network output at the state when task t \u2212 1 was learned. Experimental results show that L2PG outperforms strong baselines, including even multiple task learning.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper studies sentiment classification in the lifelong learning setting that incrementally learns a sequence of sentiment classification tasks. It proposes a new lifelong learning model (called L2PG) that can retain and selectively transfer the knowledge learned in the past to help learn the new task. A key innovation of this proposed model is a novel parameter-gate (p-gate) mechanism that regulates the flow or transfer of the previously learned knowledge to the new task. Specifically, it can selectively use the network parameters (which represent the retained knowledge gained from the previous tasks) to assist the learning of the new task t. Knowledge distillation is also employed in the process to preserve the past knowledge by approximating the network output at the state when task t \u2212 1 was learned. Experimental results show that L2PG outperforms strong baselines, including even multiple task learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A typical sentiment analysis (SA) or social media company that provides sentiment analysis services has to work for a large number of clients (Liu, 2012) . Each client normally wants to study people's opinions about a particular category of products or services, which we also call a domain. If we regard each such study/project as a task, we can model a SA company's working on a large number of studies/projects for clients as performing a sequence of SA tasks. A natural question that one would ask is whether after analyzing opinions about a number of products or services (tasks), the SA system of the company can do better on a new task by retaining the knowledge learned from the past/previous tasks and selectively transfer the prior knowledge to the new task to help it learn better. The answer should be yes because words and phrases used to express opinions or sentiments in different domains are similar and thus can mostly be shared or transferred across domains, although different domains do have domain specific sentiment expressions. This is a lifelong learning setting (Thrun, 1998; Silver et al., 2013; Chen and Liu, 2016) . This paper focuses on lifelong sentiment classification (Chen et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 153, |
|
"text": "(Liu, 2012)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1087, |
|
"end": 1100, |
|
"text": "(Thrun, 1998;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1101, |
|
"end": 1121, |
|
"text": "Silver et al., 2013;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1122, |
|
"end": 1141, |
|
"text": "Chen and Liu, 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1200, |
|
"end": 1219, |
|
"text": "(Chen et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Problem Definition: We consider incrementally learning a sequence of supervised sentiment classification (SC) tasks, 1, ..., t, .... Each task t has a training dataset D t train = {x t i , y t i } nt i=1 , where x t i is an input instance and y t i is its label, and n t is the number of training examples of the tth task. Our goal is to design a lifelong learning algorithm f (\u2022; \u03b8 t ) or neural network that can retain the knowledge learned in the past and selectively transfer the knowledge to improve the learning of each new task t. It is assumed that after each task is learned, its training data is deleted and thus not available to help learn any subsequent tasks. This is a common scenario in practice because clients usually want to ensure the confidentiality of their data and don't want their data shared or used by others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
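The setting above (learn tasks sequentially, keep only the parameters, and delete each task's training data afterwards) can be sketched as a simple loop. This is an illustrative sketch only; `train_task` is a hypothetical stand-in for the actual L2PG update described later, not the authors' algorithm:

```python
import numpy as np

def train_task(theta, data):
    """Placeholder update: in L2PG this would be SGD with p-gates and
    knowledge distillation; here we just nudge the shared parameters."""
    x, y = data
    return theta + 0.01 * np.sign(x.mean() - y.mean())

# Hypothetical stream of tasks: after each task is learned, its training
# data is deleted and only the parameters theta survive to the next task.
rng = np.random.default_rng(0)
theta = np.zeros(4)                     # shared knowledge carried forward
for t in range(3):
    d_train = (rng.normal(size=(10, 4)), rng.integers(0, 3, size=10))
    theta = train_task(theta, d_train)  # learn task t using past knowledge
    del d_train                         # data unavailable to later tasks
```

Only `theta` crosses task boundaries, matching the assumption that each client's data is confidential and discarded.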
|
{ |
|
"text": "This problem is clearly related a continual learning (CL) Parisi et al., 2019; Li and Hoiem, 2017; Wu et al., 2018; Schwarz et al., 2018; Hu et al., 2019; Ahn et al., 2019) , which also aims to learn a sequence of tasks incrementally. However, the main objective of the current CL techniques is to solve the catastrophic forgetting (CF) problem (McCloskey and Cohen, 1989) . That is, in learning each new task, the network parameters need to be modified in order to learn the new task. However, this modification can result in accuracy degradation for the previously learned tasks. In the problem defined above, our goal is to forward transfer the past knowledge to improve the new task learning. We don't need to ensure the classifiers or models learned for previous tasks still work well. 1 However, as we will see in the experiment section, the proposed method is able to outperform the current state-of-the-art CL algorithms. Although there is some existing work on lifelong sentiment classification (Chen et al., 2015; based on naive Bayes. Our deep learning model is based on an entirely different approach and it performs markedly better.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 78, |
|
"text": "Parisi et al., 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 98, |
|
"text": "Li and Hoiem, 2017;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 115, |
|
"text": "Wu et al., 2018;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 137, |
|
"text": "Schwarz et al., 2018;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 154, |
|
"text": "Hu et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 172, |
|
"text": "Ahn et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 372, |
|
"text": "(McCloskey and Cohen, 1989)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1023, |
|
"text": "(Chen et al., 2015;", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To solve the proposed lifelong sentiment classification problem using a single neural network, two objectives have to be achieved. The first objective is to selectively transfer some pieces of knowledge learned in the past to assist the new task learning. Knowledge selection is critical here because not every piece of the past knowledge is useful (some even harmful) to the new task. The second objective is to preserve the knowledge learned in the past during learning the new task because if many pieces of previous knowledge are corrupted due to updates made in learning a new task, future tasks will not be able to benefit from them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper proposes a novel model, called L2PG (Lifelong Learning with Parameter-Gates), to achieve the objectives. To achieve the first objective, we propose a novel mechanism called the parameter-gate (p-gate) to give suitable importance values to the network parameters representing the past knowledge according to how useful they are to the new task and transfer them to the new task to enable it to learn better. We split the parameters \u03b8 t of the proposed model f (\u2022; \u03b8 t ) into three subsets: (1) the shared parameters \u03b8 s,t , (2) the task classification parameters \u03b8 c,t and (3) the p-gate parameters, where the shared parameters \u03b8 s,t and pgate parameters are continuously updated with the learning of each new task t. \u03b8 c,t remains unchanged for task t once the task is learned/trained. In learning a new task t, we only randomly initialize the task classification parameters \u03b8 c,t , and use an input p-gate to select parameters (or knowledge) from the shared parameters \u03b8 s,t\u22121 of the network state after learning task t \u2212 1 that are helpful to the new task t and use a block p-gate to block part of the previous training step parameters of \u03b8 s,t that are not useful (or harmful) to task t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To achieve the second objective, knowledge dis-1 Lifelong learning and continual learning are often regarded as the same. Here, we follow (Thrun, 1998) and make this distinction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 151, |
|
"text": "(Thrun, 1998)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "tillation (Hinton et al., 2015) is used to ensure that the updated network can preserve the previous model's knowledge in learning the new task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 31, |
|
"text": "(Hinton et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper makes three main contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 It proposes a novel deep learning model L2PG that uses a novel p-gate mechanism and knowledge distillation for lifelong sentiment classification. To the best of our knowledge, this approach has not been reported in the existing lifelong or continual learning literature. \u2022 Unlike traditional gates that regulate the feature information flow through the sequence chain, the goal of the proposed p-gates is to select useful parameters (which represent the learned knowledge from previous tasks) to be transferred to the new task to make it learn better. In other words, p-gates regulate the knowledge transfer from the past to the present. \u2022 It creates a 3-class sentence level sentiment classification corpus from reviews of 10 diverse product categories for lifelong learning evaluation. Such evaluations need many tasks. To our knowledge, no existing sentence sentiment classification corpus fits this need. Experimental results show that L2PG outperforms state-of-the-art baselines including multitask learning, which optimizes all the tasks at the same time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work is related to sentiment classification (Liu, 2012) , lifelong learning and continual learning. For sentiment classification, recent deep learning models have been shown to outperform traditional methods (Kim, 2014; Devlin et al., 2018; Shen et al., 2018; Qin et al., 2020) . However, these models don't retain or transfer the knowledge to new tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 59, |
|
"text": "(Liu, 2012)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 223, |
|
"text": "(Kim, 2014;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 244, |
|
"text": "Devlin et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 263, |
|
"text": "Shen et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 281, |
|
"text": "Qin et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Lifelong learning: Most relevant to our work is lifelong learning (Thrun, 1998; Silver et al., 2013; Ruvolo and Eaton, 2013; Liu, 2014, 2016) . For lifelong sentiment classification, Chen et al. (2015) used naive Bayes to leverage word probabilities under different classes in old tasks/domains as priors to help optimize the new task learning. worked similarly but their method can improve the model of a previous task without retraining. Xia et al. (2017) proposed a voting method but their method works on the same data from different time periods. Lv et al. (2019) proposed a model using two networks, one for knowledge retention and one for feature learning. But it was shown to be weaker than . L2PG has a very different approach and performs markedly better. studied aspect level sentiment classification, which is not the goal of L2PG. However, to the best of our knowledge, none of these methods used gated mechanisms to regulate the transfer of knowledge in the lifelong learning process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 79, |
|
"text": "(Thrun, 1998;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 100, |
|
"text": "Silver et al., 2013;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 124, |
|
"text": "Ruvolo and Eaton, 2013;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 125, |
|
"end": 141, |
|
"text": "Liu, 2014, 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 201, |
|
"text": "Chen et al. (2015)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 457, |
|
"text": "Xia et al. (2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 568, |
|
"text": "Lv et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Continual learning: It is similar to lifelong learning, but its main goal is to overcome catastrophic forgetting to ensure learning of a new task will not forget the models learned for previous tasks (McCloskey and Cohen, 1989; Goodfellow et al., 2013) . For example, LWF (Li and Hoiem, 2017) uses knowledge distillation loss to ensure that after learning a new task, it can still approximate the performance of the old tasks. EWC (Kirkpatrick et al., 2017) introduces constraints to control parameter changes when learning a new task. HAT (Serr\u00e0 et al., 2018) masks units that are important to previous tasks by a hard attention. PGMA (Hu et al., 2019) generates a subset of parameters. Two reviews of continual learning can be found in Parisi et al., 2019) . Our lifelong learning setting focuses on transferring the past knowledge to the current task. We don't ensure that the models learned in the past still work well after learning a new task. Although Progressive Networks (Rusu et al., 2016) also tries to help future learning through knowledge transfer, but it is not scalable as its network size scales quadratically in the number of tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 227, |
|
"text": "(McCloskey and Cohen, 1989;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 252, |
|
"text": "Goodfellow et al., 2013)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 292, |
|
"text": "(Li and Hoiem, 2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 457, |
|
"text": "(Kirkpatrick et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 560, |
|
"text": "(Serr\u00e0 et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 653, |
|
"text": "(Hu et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 758, |
|
"text": "Parisi et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Knowledge Distillation Loss was proposed in (Hinton et al., 2015) for transferring knowledge in a large model to a smaller one. LWF uses knowledge distillation to help deal with forgetting. Dhar et al. (2019) proposed an information preserving penalty, attention distillation loss, to preserve the information about existing classes. This setting is different from ours as it incrementally learns more classes. Each of our tasks is an independent sentiment classification problem with multiple classes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 65, |
|
"text": "(Hinton et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 208, |
|
"text": "Dhar et al. (2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The working of the proposed model L2PG in learning the new task t is illustrated in Figure 1 . Our learner f (\u2022; \u03b8 t ) consists of three modules and two loss functions. The first module is the shared knowledge module (SK), which consists of a CNN (i.e., convolutional neural network) with various fil-it's a charming journey.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 92, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Proposed L2PG Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Task t+1 Task t-1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task t Word embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x PG Figure 1 : The proposed L2PG model. In learning task t, the parameters in the yellow boxes are temporary copies of the parameters of task t \u2212 1 (a superscript \u2022 is used to indicate a copy) and are not changed (they are deleted after learning task t). The parameters in the blue boxes and blue disk are updated. Green lines are for knowledge distillation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 13, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task t Word embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ters. It contains the shared knowledge across tasks in its parameters \u03b8 s,t . The second module is the task classification module (TC) with parameters \u03b8 c,t , which is a fully connected layer for the classification of task t. There is one TC for each task and it is fixed once t is learned. The third module is the p-gate module (PG).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task t Word embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In learning each new task t, a temporary copy of SK and of TC (in the yellow boxes of Figure 1 ) are made from the state of the network after task t \u2212 1 was learned. For clarity, we use the superscript \u2022 to indicate a copy of something. For example, \u03b8 s,t\u22121,\u2022 and \u03b8 c,t\u22121,\u2022 denote the copies of \u03b8 s,t\u22121 and \u03b8 c,t\u22121 respectively. They are fixed and not updated during the learning of task t. SK (in the blue box) and PG (in the blue disk) are updated in learning task t, and are also used in testing. The goal of PG is to identify useful knowledge for task t from the parameters \u03b8 s,t\u22121,\u2022 of SK after task t \u2212 1 training and to block the unhelpful or harmful knowledge in SK (see Sec. 3.3) for the current task. Knowledge distillation is used to ensure that in learning task t, the knowledge gained from the previous tasks are not forgotten. Updating the parameters of SK, TC and PG are done through back propagation. The two loss functions used are knowledge distillation loss and cross entropy loss.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 94, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task t Word embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let the training data of task t be D t train , and an instance of it with length L (after padding or cutting) be x t i with label y t i . Training of SK (in the blue box of Figure 1) for the new task t starts with SK of the task t \u2212 1 model f (\u2022; \u03b8 t\u22121 ). After training of task t, f (\u2022; \u03b8 t\u22121 ) becomes SK of the model f (\u2022; \u03b8 t ) for task t. During training, the input instance goes through SK to get advanced features to be used by task t's TC module. Let V t ij \u2208 R k be the word vector corresponding to the jth word of x t i and X t i \u2208 R L\u00d7k be the embedding matrix of x t i . SK receives X t i from the input layer, and then extracts advanced features C t i in the form of a n-gram, i.e.,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 182, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C t i = [c 1 , c 2 , ..., c L\u2212n+1 ] = [c j ] L\u2212n+1 j=1 (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where c j represents the output produced by CNN's filter on", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "X t i [j : j + n \u2212 1, :].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Mathematically, a convolution operation consists of a filter W t \u2208 R n\u00d7k and a bias b t \u2208 R. c j can be expressed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c j = g(W t \u2022 X t i [j : j + n \u2212 1, :] + b t )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where g is a nonlinear activation function such as Relu. We use a Maxpooling operation over the feature map and take the maximum value C t i = max{C t i } as the feature corresponding to this particular filter. The shared knowledge from SK of the current task t is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C t i = SK(X i ; \u03b8 s,t )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where \u03b8 s,t is the whole set of parameters of SK of the current task t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Knowledge Module (SK)", |
|
"sec_num": "3.1" |
|
}, |
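As a concrete illustration of Eqs. 1-3, the sketch below implements a single CNN filter with ReLU and max-pooling in NumPy. The function name `sk_features` and the toy sizes are assumptions for the example; the real SK uses many filters of various widths:

```python
import numpy as np

def sk_features(X, W, b):
    """One CNN filter of the SK module (a sketch of Eqs. 1-3).
    X: (L, k) embedding matrix of one instance; W: (n, k) filter; b: scalar.
    Returns the max-pooled feature for this filter."""
    L, _ = X.shape
    n = W.shape[0]
    # Eq. 2: c_j = ReLU(W . X[j:j+n-1, :] + b) for each n-gram window
    c = np.array([np.maximum(0.0, np.sum(W * X[j:j + n, :]) + b)
                  for j in range(L - n + 1)])   # Eq. 1: [c_1, ..., c_{L-n+1}]
    return c.max()                              # max-pooling over the map

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 5))          # L=7 padded tokens, k=5 dims (toy sizes)
W, b = rng.normal(size=(3, 5)), 0.1  # one trigram filter (n=3)
feat = sk_features(X, W, b)
```

Stacking the outputs of all filters gives the feature vector C t i of Eq. 3.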
|
{ |
|
"text": "Using Eq. 3 we obtain a high-level representation of the input instance x t i . Then, we pass the feature of x t i through TC of the task t to obtain the classification result,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Classification Module (TC)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y t i = Softmax(C t i \u2022 W t c + b t c )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Task Classification Module (TC)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where W t c , b t c are the weight and bias of the classifier. Like SK above, we refer the classifier from the TC module of the current task t as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Classification Module (TC)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y t i = TC(C t i ; \u03b8 c,t )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Task Classification Module (TC)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where \u03b8 c,t is the set of all parameters of the TC (classifier) of the current task t. As mentioned earlier, TC is a fully connected layer (in the top blue box of Figure 1 ) and is randomly initialized.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 171, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task Classification Module (TC)", |
|
"sec_num": "3.2" |
|
}, |
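Eqs. 4-5 amount to a fully connected layer followed by a softmax, which can be sketched as follows (the name `tc_classify` and the toy sizes are assumptions for illustration):

```python
import numpy as np

def tc_classify(c, W_c, b_c):
    """TC module (a sketch of Eqs. 4-5): a fully connected layer
    followed by a softmax over the sentiment classes."""
    logits = c @ W_c + b_c
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
c = rng.normal(size=8)                           # pooled SK features (toy size)
W_c, b_c = rng.normal(size=(8, 3)), np.zeros(3)  # 3 sentiment classes
probs = tc_classify(c, W_c, b_c)
```

Each task gets its own `(W_c, b_c)`, frozen once that task is learned.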
|
{ |
|
"text": "Recall that in learning the new task t, the proposed p-gate mechanism (PG) selectively transfers some pieces of knowledge from the parameters \u03b8 s,t\u22121 after task t \u2212 1 is learned, i.e., f (\u2022; \u03b8 t\u22121 ), to the current task t. At the same time, PG also needs to block the knowledge that is not helpful to the current task or knowledge that may cause forgetting for previous tasks. We achieve the goals using two p-gates, an input p-gate and a block p-gate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The input p-gate uses the Sigmoid function to determine what proportion of each parameter in the SK from the previous task should help the current task to learn. The input p-gate is formulated as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z = Sigmoid(W z \u2022 \u03b8 s,t\u22121,\u2022 )", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where \u03b8 s,t\u22121,\u2022 is a copy of \u03b8 s,t\u22121 , the parameters of the network state after task t \u2212 1 was learned (see the top yellow box in Figure 1 ), and W z is the set of trainable input p-gate's parameters. \u03b8 s,t\u22121,\u2022 does not change during training. z ij \u2192 1 means that the corresponding parameter is almost completely helpful to the learning of the current task, and z ij \u2192 0 means that the parameter is of no help (or harmful) to the current task t.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 139, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The block p-gate blocks some SK's parameters from the previous training step S \u2212 1 in the training process of the current task t. \u03b8 s,t S\u22121 serves as the initial parameters of \u03b8 s,t S of the current training step S. The block p-gate is formulated as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "b = Sigmoid(W b \u2022 \u03b8 s,t S )", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where W b is the set of trainable block p-gate's parameters. b ij \u2192 0 means that the current parameter almost certainly has a negative effect on the next learning or may lead to forgetting. Both the input p-gate's parameters W z and block p-gate's parameters W b are trained by minimizing the loss function of the current task t's classification module TC. After this step of training using a batch of examples for task t is completed, SK's parameters of step S is revised by the following combination operation, i.e., the trained \u03b8 s,t S is replaced,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 s,t S := z * \u03b8 s,t\u22121,\u2022 + b * \u03b8 s,t S", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "This operation is to reduce the interference of the new task t on the existing knowledge learned in the past and cause forgetting. After the parameter combination and revision is done, the training goes to the next step/iteration S + 1 using another batch of data. Note that this combination and replacement operation is not used if S is the last step of an epoch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P-Gate Module (PG)", |
|
"sec_num": "3.3" |
|
}, |
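A minimal sketch of the p-gate update (Eqs. 6-8) follows. One simplifying assumption: the paper does not spell out the exact form of the products W z with the parameters, so the sketch treats both gates as element-wise over a flattened parameter vector:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pgate_combine(theta_prev_copy, theta_S, W_z, W_b):
    """One p-gate combination (a sketch of Eqs. 6-8), treating W_z and W_b
    as element-wise gate weights over the flattened shared parameters.
    theta_prev_copy: frozen SK parameters after task t-1 (not updated).
    theta_S: SK parameters after the current training step S."""
    z = sigmoid(W_z * theta_prev_copy)         # Eq. 6: input p-gate
    b = sigmoid(W_b * theta_S)                 # Eq. 7: block p-gate
    return z * theta_prev_copy + b * theta_S   # Eq. 8: combined parameters

rng = np.random.default_rng(0)
theta_prev = rng.normal(size=6)   # copy from task t-1, kept fixed
theta_S = rng.normal(size=6)      # just trained on one batch of task t
W_z, W_b = rng.normal(size=6), rng.normal(size=6)
theta_S = pgate_combine(theta_prev, theta_S, W_z, W_b)
```

The combined vector then initializes step S + 1, so useful old knowledge keeps flowing in while harmful fresh updates are damped.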
|
{ |
|
"text": "In order for the model to retain old knowledge during the learning process, we use the knowledge distillation loss in (Hinton et al., 2015) to encourage the outputs of one network to approximate the outputs of another, similar to LWF. Therefore, when we start training task t, we first use f (\u2022; \u03b8 t\u22121 ) to get the softmax output Y t o = {y t oi } nt i=1 of all training instances of t and Y t o = { y t oi } nt i=1 is the softmax outputs of SK of task t combining TC of task t \u2212 1, which are used to build a knowledge distillation loss. Let Y t = {y t i } nt i=1 be all ground truth labels of task t and Y t = { y t i } nt i=1 be the softmax outputs of f (\u2022; \u03b8 t ) used to build the cross entropy loss. n t is the number of training examples of task t.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 139, |
|
"text": "(Hinton et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We now present the L2PG's optimization goals when sequentially learning each new task t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Knowledge Distillation Loss: It is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L_D(Y^t_o, \u0176^t_o) = \u2212 \\sum_{i=1}^{n_t} y^t_{oi} \u2022 log(\u0177^t_{oi}) \\quad (9) \\qquad y^t_{oi} = (y^t_{oi})^{1/K} / \\sum_j (y^t_{oj})^{1/K}, \\quad \u0177^t_{oi} = (\u0177^t_{oi})^{1/K} / \\sum_j (\u0177^t_{oj})^{1/K}.",
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
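The temperature-rescaled distillation loss of Eqs. (9)-(10) can be sketched as follows. This is an illustrative numpy version under the assumption that the softmax outputs are already computed; the function names are ours, not the paper's.

```python
import numpy as np

def temperature_rescale(p, K):
    """Raise softmax outputs to the power 1/K and renormalize (Eq. 10).
    K > 1 flattens the distribution, boosting the weight of small probabilities."""
    q = p ** (1.0 / K)
    return q / q.sum(axis=-1, keepdims=True)

def distillation_loss(p_old, p_new, K=2.0):
    """Knowledge distillation loss (Eq. 9): cross entropy between the
    temperature-rescaled outputs of the stored network and the current one."""
    y_o = temperature_rescale(p_old, K)
    y_o_hat = temperature_rescale(p_new, K)
    return -np.sum(y_o * np.log(y_o_hat))

p_old = np.array([[0.7, 0.2, 0.1]])
loss_same = distillation_loss(p_old, p_old)                    # agreement
loss_diff = distillation_loss(p_old, np.array([[0.1, 0.2, 0.7]]))  # disagreement
```

As expected for a cross entropy, the loss is smallest when the current network's outputs agree with the stored ones.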
|
{ |
|
"text": "where K is a hyperparameter and Hinton et al. (2015) suggest setting K > 1, which increases the weight of smaller logit values and encourages the network to better encode similarities among classes. Classification Loss: The classification loss of the current learner f(\u2022; \u03b8^t) for task t is the cross entropy of Y^t and \u0176^t,",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L_C(Y^t, \u0176^t) = \u2212 \\sum_{i=1}^{n_t} y^t_i \u2022 log(\u0177^t_i)",
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "So, the total loss is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L = L_C(Y^t, \u0176^t) + \u03bb L_D(Y^t_o, \u0176^t_o) + \u03b2 R(\u03b8^t)",
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where \u03bb and \u03b2 are hyperparameters, R(\u03b8 t ) is the regularization term (we use the L2 regularizer), and \u03b8 t includes \u03b8 c,t , \u03b8 s,t , W b and W z . The algorithm of L2PG for training the new task t is given in Algorithm 1, which is self-explanatory.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective of Optimization", |
|
"sec_num": "3.4" |
|
}, |
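The total objective of Eq. (12) is a weighted sum of the three terms. A minimal sketch, assuming scalar loss values and a flattened parameter vector (names ours):

```python
import numpy as np

def total_loss(L_C, L_D, theta, lam=1.0, beta=1.0):
    """Total objective of Eq. 12: classification loss + weighted distillation
    loss + an L2 regularizer over all trainable parameters (theta stands in
    for theta_{c,t}, theta_{s,t}, W_b and W_z, flattened for simplicity)."""
    R = np.sum(theta ** 2)  # L2 regularization term R(theta)
    return L_C + lam * L_D + beta * R

loss = total_loss(L_C=0.5, L_D=0.2, theta=np.array([0.1, -0.2]))
```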
|
{ |
|
"text": "We now evaluate L2PG and compare it with two main types of baselines, i.e., those under lifelong sentiment classification and those under continual learning for dealing with catastrophic forgetting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Algorithm 1 L2PG -Learning the new task t 1: Input: Training set D train t of task t, and shared parameters \u03b8 s,t\u22121,\u2022 and task classification parameters \u03b8 c,t\u22121,\u2022 from task t \u2212 1. 2: Initialize: \u03b8 s,t 0 \u2190 \u03b8 s,t\u22121 // 0 denotes training step \u03b8 c,t 0 \u2190 Random(|\u03b8 c,t |) 3: for each training step S = 0, 1, ..., M do ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Y t o = f (X t S ; \u03b8 s,t\u22121,\u2022 , \u03b8 c,t\u22121,\u2022 ); 7: Y t o = f (X t S ; \u03b8 s,t S , \u03b8 c,t\u22121,\u2022 ) ; 8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "// compute output for loss L C 9:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Y t = f (X t S ; \u03b8 s,t S , \u03b8 c,t S );", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "10:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Update parameters:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "11:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Parameters \u03b8 s,t S , \u03b8 c,t S , W z and W b are updated by minimizing Eq. 12;",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "12: // Use the trained p-gate parameters to // select the knowledge for the next step 13:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z = Sigmoid(W z \u2022 \u03b8 s,t\u22121,\u2022 ); 14: b = Sigmoid(W b \u2022 \u03b8 s,t S ); 15: \u03b8 s,t S := z * \u03b8 s,t\u22121,\u2022 + b * \u03b8 s,t S",

"eq_num": "16"
|
} |
|
], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ": end for", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
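The per-step flow of Algorithm 1 can be sketched end to end. The snippet below is a toy numpy illustration, not the actual model: `f` stands in for the CNN-based network with a linear feature extractor, and the gradient update of line 11 (minimizing Eq. 12) is elided as a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f(X, theta_s, theta_c):
    """Toy stand-in for the network: shared linear features (SK) followed by a
    linear classifier (TC) and a softmax. The real SK module is a CNN."""
    logits = (X @ theta_s) @ theta_c
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# frozen parameters from task t-1 and a fresh classifier for task t (line 2)
theta_s_prev = rng.normal(size=(5, 4))
theta_c_prev = rng.normal(size=(4, 3))
theta_s = theta_s_prev.copy()
theta_c = rng.normal(size=(4, 3))
W_z = rng.normal(size=theta_s.shape)
W_b = rng.normal(size=theta_s.shape)

for step in range(3):                        # M = 3 toy training steps
    X = rng.normal(size=(8, 5))              # a mini-batch of task t
    Y_o = f(X, theta_s_prev, theta_c_prev)   # targets for the distillation loss
    Y_hat = f(X, theta_s, theta_c)           # outputs for the classification loss
    # ... here Eq. 12 would be minimized w.r.t. theta_s, theta_c, W_z, W_b ...
    z = sigmoid(W_z * theta_s_prev)          # lines 13-15: p-gate combination
    b = sigmoid(W_b * theta_s)
    theta_s = z * theta_s_prev + b * theta_s
```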
|
{ |
|
"text": "We carried out experiments on two datasets. The first dataset is for document-level sentiment classification with two classes, positive and negative. It consists of reviews of 16 diverse kinds of products (domains) commonly used in multi-task text classification (Liu et al., 2017). The reviews of the first 14 products are from Amazon.com. The remaining two are movie reviews (IMDB and MR). The numbers of training and testing samples for each product (or task) are about 1,400 and 400, respectively. We call this dataset Mix-16, which gives us 16 tasks, one per product category/domain. The second dataset is for sentence-level sentiment classification and was created by us. It consists of review sentences of 10 types of products/domains crawled from Amazon.com, which gives us 10 tasks. Each sentence is labeled positive, negative or neutral. Sentences with conflicting opinions (e.g., both positive and negative) are not used. Sentence sentiment classification of each domain forms a task. The review sentences of each product were annotated by two annotators independently. We trained all the annotators and provided them with an annotation instruction document. After training, each of them was asked to",
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 281, |
|
"text": "(Liu et al., 2017)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "perform annotation of 50 sentences to assess their annotation quality. They started their annotation only after we were satisfied with their annotations. After they completed their annotations, sentences with disagreements were identified and discussed by the annotators to come to an agreement. The Kappa score for annotator agreement was 0.7947. Note that we are aware that there are some existing sentence sentiment classification datasets, but each of them contains only reviews of a single product. We are unable to create many different domain tasks from them to suit lifelong learning. Furthermore, they mostly have only two classes, positive and negative, which do not reflect all review sentences because many review sentences express no sentiment (neutral), e.g., \"I bought this camera yesterday.\" That is why we created the new dataset with 10 different categories of products, which give us 10 tasks for lifelong learning. 2 We denote this dataset as Amazon-10. [Table 1, per-domain statistics (Avg.L, Train, Test, |V|): Air conditioner 15, 1,018, 439, 2,714; Diaper 17, 1,065, 459, 2,685; Stove 15, 1,084, 467, 2,813; Headphone 15, 1,186, 510, 3,476; Bike 16, 1,021, 441, 3,097; Luggage 17, 1,211, 520, 3,380; Smartphone 16, 1,187, 511, 3,778; GPS 17, 1,318, 567, 3,976; TV ...]",
|
"cite_spans": [ |
|
{ |
|
"start": 1217, |
|
"end": 1218, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 286, |
|
"text": "| Air conditioner 15 1,018 439 2,714 Diaper 17 1,065 459 2,685 Stove 15 1,084 467 2,813 Headphone 15 1,186 510 3,476 Bike 16 1,021 441 3,097 Luggage 17 1,211 520 3,380 Smartphone 16 1,187 511 3,778 GPS 17 1,318 567 3,976 TV", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We consider the following baselines for comparison with the proposed L2PG model. The feature extraction module (e.g., SK of L2PG) of all models including L2PG uses a CNN (Kim, 2014), and each classifier is a fully connected layer (e.g., TC of L2PG for each task).",
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 179, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "I-CNN: I-CNN is a single-task CNN classifier, where one CNN model performs each task independently, with no sharing of knowledge across tasks.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "S-CNN: S-CNN is I-CNN but uses one CNN model (one feature extractor and one classifier) to incrementally learn all tasks. No mechanism is used to deal with knowledge transfer or forgetting. LWF-T: This is a continual learning model based on Learning without Forgetting (LWF) (Li and Hoiem, 2017). It uses knowledge distillation to deal with catastrophic forgetting. Since LWF was originally designed for image classification, we modified it for text classification using the same model as above, i.e., a CNN for the shared parameter module and one fully connected layer for each task's classifier (each task has its own classifier). When training the new task, the parameters of the task-specific classifiers of the previous tasks are fixed. We denote this LWF model as LWF-T.",
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 295, |
|
"text": "(Li and Hoiem, 2017)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "HAT: This is a well-known algorithm for continual learning that deals with catastrophic forgetting (Serr\u00e0 et al., 2018) . Since HAT (or UCL below) was also designed for image classification, we again adapted it for text. HAT has almost no forgetting for image classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 119, |
|
"text": "(Serr\u00e0 et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "UCL: This is the latest continual learning model (Ahn et al., 2019) that improves on HAT.",
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 64, |
|
"text": "(Ahn et al., 2019", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "LSC: This is the naive Bayes-based lifelong sentiment classification model in (Chen et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 97, |
|
"text": "(Chen et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "LNB: LNB is similar to LSC but is able to improve the model of a previous task without retraining. The system in (Lv et al., 2019) is not compared as it performed worse than LNB.",
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 130, |
|
"text": "(Lv et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "MTL: This is a multi-task learning baseline using the same CNN shared knowledge module as L2PG, and each task has its own task-specific classifier, like L2PG, HAT, and UCL. 3 In (Li and Hoiem, 2017), MTL's performance was regarded as the upper bound of continual learning because the training data of all tasks is available during training. But for L2PG, after each sentiment classification task is learned, its data is assumed to be deleted.",
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 170, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 194, |
|
"text": "(Li and Hoiem, 2017)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Training details. For all models in our experiments, the word embeddings are randomly initialized as 300-dimensional vectors and then updated during training. We use filter sizes of [3, 4, 5] with 100 feature maps each in the CNN module, and a dropout rate of 0.5. In L2PG, we set the mini-batch size to 50, the learning rate to 0.001, the temperature K = 2, and \u03bb, \u03b2 = 1. We use the same CNN feature extractor and classifier as the other models. For HAT and UCL, we modified their code for text and optimized their parameters (their original parameters performed poorly for text), but we did not change their algorithms. HAT and UCL need 300 and 100 epochs respectively to achieve their best results, but for the others, 20 epochs are sufficient. For LSC and LNB, we use their original code. Note that LSC and LNB can only deal with two-class sentiment classification due to the limitation of their knowledge sharing mechanisms. Thus we cannot run them on the second dataset, which has three classes.",
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 183, |
|
"text": "[3,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 186, |
|
"text": "4,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 189, |
|
"text": "5]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For our lifelong learning setting, we use 5 random task sequences to compute the accuracy, as different task sequences may give different results. 4 For each sequence, each task (also a domain) is used as the last task in turn to collect its test result. This is because we are only interested in improving the accuracy of the current/new task based on the knowledge learned from the previous tasks. Table 2 and Table 3 give the mean accuracy of each task when it is the last task for Mix-16 and Amazon-10, respectively. The average accuracy of each column is given in the last row of each table. L2PG significantly outperforms every baseline on both datasets with p-value < 0.01 on a paired t-test. Compared with I-CNN, L2PG increases the average accuracy by 4.01%",
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 147, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 412, |
|
"text": "Table 2 and Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "on Mix-16 and 5.76% on Amazon-10. This is because I-CNN treats each task independently, while L2PG performs knowledge transfer. Even the naive single-model continual learning of S-CNN outperforms I-CNN by 2.20% and 2.87% on the two datasets, respectively. This shows that significant knowledge sharing exists among sentiment classification tasks.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Compare with Lifelong Sentiment Classification Models: LSC and LNB are designed only for 2-class lifelong sentiment classification. They cannot handle the three classes in Amazon-10 and thus have no results for it. L2PG is therefore only compared to LSC and LNB on Mix-16. In Table 2, we see that L2PG outperforms LSC and LNB by 2.53% and 2.77%, respectively. One reason is that LSC and LNB are naive Bayes approaches, which cannot model contextual relationships due to their conditional independence assumption on features (words). L2PG does not have this limitation.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 268, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For the continual learning models LWF-T, HAT and UCL, to be consistent with the lifelong setting of L2PG, we also take turns putting each task last and use the final model to get the accuracy of the last task (there is no forgetting for the last task). The average accuracy of L2PG on both datasets is markedly higher than these models. For example, on Amazon-10, L2PG's average accuracy is 2.20% higher than LWF-T, 6.48% higher than HAT and 2.66% higher than UCL. As we can see, the continual learning models LWF-T and UCL (the latest algorithm), which only deal with catastrophic forgetting, also achieve better results than I-CNN, as the tasks are similar and share a great deal of knowledge (HAT is markedly worse). However, since they have no specific mechanism to perform knowledge transfer, they are weaker than L2PG. Compare with MTL: Under the condition that the same CNN is used as the feature extractor and a fully connected layer is used as a task-specific classifier for each task, L2PG is on average 1.22% better than MTL on Mix-16 and 2.99% better than MTL on Amazon-10. MTL is often considered the upper bound of continual learning because it trains all the tasks together. However, its loss is the sum of the losses of all tasks, which does not mean it optimizes for every individual task. L2PG in the lifelong learning setting tries to do its best for the new/current task. (Table 4: Ablation experiments on Amazon-10. For each system, the result is the average of all tasks' accuracy in the lifelong learning setting, where L_D is the knowledge distillation loss.)",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 311, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Compare with Continual Learning Models:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Ablation Experiments and Analysis: To show the usefulness of each component of L2PG, we perform ablation experiments on the Amazon-10 data without using the knowledge distillation loss, the p-gate module (PG), or both. The results are given in Table 4.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 249, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Compare with Continual Learning Models:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When only the knowledge distillation loss is removed from L2PG (w/o L_D), which we call L2PG-NK, the average accuracy drops by about 1.15%, which indicates that using the knowledge distillation loss to actively preserve the old knowledge is useful. When only the p-gate module is removed from L2PG (w/o PG), which is actually LWF-T, the average accuracy drops by about 2.10%, which shows that our PG mechanism can choose and transfer the right knowledge to the new task. Without both the knowledge distillation loss and PG (w/o L_D or PG), which is actually S-CNN, the result is much worse. Comparing L2PG-NK with LWF-T and S-CNN, we can see that L2PG-NK's average score is 0.95% higher than LWF-T and 1.74% higher than S-CNN, which indicates that even without the distillation loss, the PG mechanism can effectively retain and use the past knowledge.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compare with Continual Learning Models:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here we run L2PG as a continual learning system. Like LWF-T, HAT and UCL, after all tasks are learned, L2PG is tested on every task's test data (note that in the lifelong learning setting, we only test on the last task). The continual learning results on the two datasets are presented in Figures 2 and 3, where six models are compared, namely I-CNN, S-CNN, LWF-T, HAT, UCL and L2PG. From the figures, we observe that L2PG can actually outperform all the other five models. This is because L2PG encourages knowledge transfer, while the continual learning systems LWF-T, HAT and UCL only focus on preserving the past knowledge.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 300, |
|
"text": "Figures 2 and 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "L2PG in the Continual Learning Setting", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "This paper proposed an effective model, L2PG, for lifelong sentiment classification. L2PG can not only retain what it has learned, but also selectively transfer the past knowledge to learn the new task better. The key component is the proposed parameter-gate (p-gate) mechanism, which is able to select the right previously learned knowledge or parameters to transfer to the new task. Knowledge distillation is also employed to maintain the knowledge or models learned for the previous tasks. Empirical evaluation showed that L2PG outperforms strong baselines in lifelong learning, continual learning, and even multi-task learning.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our code and the newly created dataset can be found from https://github.com/Qqinmaster/L2PG", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that we use an architecture for MTL comparable to the other baseline models for fair comparison. It is not the state-of-the-art model reported in the literature, which uses more sophisticated architectures and achieves better results.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Because LSC and LNB are naive Bayes-based methods, they are not affected by task order under the lifelong learning setting.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Uncertainty-based continual learning with adaptive regularization", |
|
"authors": [ |
|
{ |
|
"first": "Hongjoon", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungmin", |
|
"middle": [], |
|
"last": "Cha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donggyu", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taesup", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4394--4404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Taesup Moon. 2019. Uncertainty-based continual learning with adaptive regularization. In Advances in Neural Information Processing Systems, pages 4394-4404.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Topic modeling using topics from many domains, lifelong learning and big data", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyuan Chen and Bing Liu. 2014. Topic modeling using topics from many domains, lifelong learning and big data. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Lifelong machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning", |
|
"volume": "10", |
|
"issue": "3", |
|
"pages": "1--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyuan Chen and Bing Liu. 2016. Lifelong machine learning. Synthesis Lectures on Artificial Intelli- gence and Machine Learning, 10(3):1-145.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Lifelong machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning", |
|
"volume": "12", |
|
"issue": "3", |
|
"pages": "1--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyuan Chen and Bing Liu. 2018. Lifelong machine learning. Synthesis Lectures on Artificial Intelli- gence and Machine Learning, 12(3):1-207.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Lifelong learning for sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianzu", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ACL-2015", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "750--756", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Life- long learning for sentiment classification. In ACL- 2015, pages 750-756.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning without memorizing", |
|
"authors": [ |
|
{ |
|
"first": "Prithviraj", |
|
"middle": [], |
|
"last": "Dhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajat Vikram", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuan-Chuan", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziyan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rama", |
|
"middle": [], |
|
"last": "Chellappa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5138--5146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. 2019. Learn- ing without memorizing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5138-5146.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An empirical investigation of catastrophic forgetting in gradient-based neural networks", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Da", |
|
"middle": [], |
|
"last": "Mirza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1312.6211" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An em- pirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Distilling the knowledge in a neural network", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.02531" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Overcoming catastrophic forgetting for continual learning via model adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chongyang", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengwei", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinwen", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2019. Overcoming catastrophic forgetting for continual learning via model adaptation. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. Eprint Arxiv.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Overcoming catastrophic forgetting in neural networks", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Rabinowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Veness", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Desjardins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Rusu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kieran", |
|
"middle": [], |
|
"last": "Milan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiago", |
|
"middle": [], |
|
"last": "Ramalho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Grabska-Barwinska", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the national academy of sciences", |
|
"volume": "114", |
|
"issue": "", |
|
"pages": "3521--3526", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, et al. 2017. Over- coming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning without forgetting", |
|
"authors": [ |
|
{ |
|
"first": "Zhizhong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Hoiem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE transactions on pattern analysis and machine intelligence", |
|
"volume": "40", |
|
"issue": "", |
|
"pages": "2935--2947", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhizhong Li and Derek Hoiem. 2017. Learning with- out forgetting. IEEE transactions on pattern analy- sis and machine intelligence, 40(12):2935-2947.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Sentiment analysis and opinion mining. Synthesis lectures on human language technologies", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu. 2012. Sentiment analysis and opinion min- ing. Synthesis lectures on human language technolo- gies, 5(1):1-167.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Adversarial multi-task learning for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.05742" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classifica- tion. arXiv preprint arXiv:1704.05742.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Sentiment classification by leveraging the shared knowledge from a sequence of domains", |
|
"authors": [ |
|
{ |
|
"first": "Guangyi", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enhong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "DASFAA", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "795--811", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guangyi Lv, Shuai Wang, Bing Liu, Enhong Chen, and Kun Zhang. 2019. Sentiment classification by lever- aging the shared knowledge from a sequence of do- mains. In DASFAA, pages 795-811.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Catastrophic interference in connectionist networks: The sequential learning problem", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "McCloskey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neal", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Psychology of learning and motivation", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "109--165", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael McCloskey and Neal J Cohen. 1989. Catas- trophic interference in connectionist networks: The sequential learning problem. In Psychology of learn- ing and motivation, volume 24, pages 109-165. El- sevier.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Continual Lifelong Learning with Neural Networks: A Review", |
|
"authors": [ |
|
{ |
|
"first": "German", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Parisi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronald", |
|
"middle": [], |
|
"last": "Kemker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Part", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Kanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Wermter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "113", |
|
"issue": "", |
|
"pages": "54--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "German I Parisi, Ronald Kemker, Jose L Part, Christo- pher Kanan, and Stefan Wermter. 2019. Continual Lifelong Learning with Neural Networks: A Review. Neural Networks, 113:54-71.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Feature projection for improved text classification", |
|
"authors": [ |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qi Qin, Wenpeng Hu, and Bing Liu. 2020. Feature pro- jection for improved text classification. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Rusu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Rabinowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Desjardins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hubert", |
|
"middle": [], |
|
"last": "Soyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raia", |
|
"middle": [], |
|
"last": "Hadsell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.04671" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei A Rusu, Neil C Rabinowitz, Guillaume Des- jardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "ELLA: An efficient lifelong learning algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Ruvolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Eaton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "507--515", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Ruvolo and Eric Eaton. 2013. ELLA: An efficient lifelong learning algorithm. In ICML, pages 507- 515.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Progress & compress: A scalable framework for continual learning", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Schwarz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jelena", |
|
"middle": [], |
|
"last": "Luketina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Czarnecki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Grabska-Barwinska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yee", |
|
"middle": [ |
|
"Whye" |
|
], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raia", |
|
"middle": [], |
|
"last": "Hadsell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.06370" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Schwarz, Jelena Luketina, Wojciech M Czar- necki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. 2018. Progress & compress: A scalable framework for con- tinual learning. arXiv preprint arXiv:1805.06370.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Overcoming catastrophic forgetting with hard attention to the task", |
|
"authors": [ |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Serr\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D\u00eddac", |
|
"middle": [], |
|
"last": "Sur\u00eds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Miron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandros", |
|
"middle": [], |
|
"last": "Karatzoglou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.01423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joan Serr\u00e0, D\u00eddac Sur\u00eds, Marius Miron, and Alexandros Karatzoglou. 2018. Overcoming catastrophic forget- ting with hard attention to the task. arXiv preprint arXiv:1801.01423.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms", |
|
"authors": [], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baseline needs more love: On simple word- embedding-based models and associated pooling mechanisms. ACL-2018.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Lifelong machine learning systems: Beyond learning algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Silver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lianghao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "2013 AAAI spring symposium series", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel L Silver, Qiang Yang, and Lianghao Li. 2013. Lifelong machine learning systems: Beyond learn- ing algorithms. In 2013 AAAI spring symposium se- ries.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Lifelong learning algorithms", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sebastian Thrun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Learning to learn", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--209", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Thrun. 1998. Lifelong learning algorithms. In Learning to learn, pages 181-209. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Forward and backward knowledge transfer for sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianzu", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.03506" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Wang, Bing Liu, Shuai Wang, Nianzu Ma, and Yan Yang. 2019. Forward and backward knowledge transfer for sentiment classification. arXiv preprint arXiv:1906.03506.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Lifelong learning memory networks for aspect sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Shuai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guangyi", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sahisnu", |
|
"middle": [], |
|
"last": "Mazumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geli", |
|
"middle": [], |
|
"last": "Fei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE BigData", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuai Wang, Guangyi Lv, Sahisnu Mazumder, Geli Fei, and Bing Liu. 2018. Lifelong learning memory net- works for aspect sentiment classification. In IEEE BigData 2018.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Incremental classifier learning with generative adversarial networks", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinpeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lijuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuancheng", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zicheng", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yandong", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengyou", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.00853" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, Zhengyou Zhang, and Yun Fu. 2018. Incremental classifier learning with generative adversarial networks. arXiv preprint arXiv:1802.00853.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Distantly supervised lifelong learning for large-scale social media sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huihui", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Transactions on Affective Computing", |
|
"volume": "8", |
|
"issue": "4", |
|
"pages": "480--491", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rui Xia, Jie Jiang, and Huihui He. 2017. Distantly su- pervised lifelong learning for large-scale social me- dia sentiment analysis. IEEE Transactions on Affec- tive Computing, 8(4):480-491.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Interactive attention transfer network for cross-domain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hefu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongke", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hengshu", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enhong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Zhang, Hefu Zhang, Qi Liu, Hongke Zhao, Heng- shu Zhu, and Enhong Chen. 2019. Interactive at- tention transfer network for cross-domain sentiment classification. AAAI-2019.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Mix-16: accuracy of its 16 tasks of continual learning. LSC, LNB and MTL are not used as they don't work in the continual learning setting. Ai r-c on dit ion Bi ke Di ap er GP S He ad ph on e Ho te l Lu gg ag e Sm ar tp ho ne St ov e Amazon-10: accuracy of its 10 tasks of continual learning. LSC, LNB & MTL do not work in the continual learning setting.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Dataset statistics of Amazon-10. Avg.l: Average sentence length. T rain, T est: number of training and test sentences respectively. |V |: Vocabulary size." |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>I-CNN</td><td>S-CNN</td><td>LWF-T</td><td>HAT</td><td>UCL</td><td>MTL</td><td>L2PG</td></tr><tr><td>Bike</td><td colspan=\"7\">64.44(\u00b1 0.79) 65.47 (\u00b1 1.04) 65.88 (\u00b1 1.07) 62.17 (\u00b1 4.42) 65.85 (\u00b1 0.81) 66.32 (\u00b1 0.79) 67.48 (\u00b1 0.47)</td></tr><tr><td>GPS</td><td>60.98 (\u00b10.47)</td><td>66.03 (\u00b11.72)</td><td>67.13 (\u00b11.11)</td><td>60.49 (\u00b13.45)</td><td>66.08 (\u00b10.93)</td><td>64.93 (\u00b11.50)</td><td>68.78 (\u00b10.75)</td></tr><tr><td>Hotel</td><td colspan=\"4\">65.01 (\u00b1 0.71) 64.50 (\u00b1 1.03) 66.05 (\u00b1 1.15) 60.28 (\u00b11.55)</td><td>66.44 (\u00b10.88)</td><td>64.66 (\u00b10.61)</td><td>68.73 (\u00b11.48)</td></tr><tr><td>Luggage</td><td>69.23 (\u00b10.35)</td><td colspan=\"3\">73.36 (\u00b10.64) 73.42 (\u00b1 0.25) 70.16 (\u00b11.23)</td><td>73.41 (\u00b10.37)</td><td>73.22 (\u00b10.51)</td><td>76.58 (\u00b10.71)</td></tr><tr><td>Diaper</td><td>63.83 (\u00b10.84)</td><td>65.94 (\u00b11.22)</td><td>66.33 (\u00b11.49)</td><td>62.77 (\u00b11.35)</td><td>64.74 (\u00b10.51)</td><td>66.12 (\u00b10.88)</td><td>68.05 (\u00b11.26)</td></tr><tr><td>Smartphone</td><td>60.61 (\u00b11.18)</td><td>66.76 (\u00b10.55)</td><td>67.73 (\u00b10.93)</td><td>60.43 (\u00b13.63)</td><td>65.63 (\u00b11.16)</td><td>66.10 (\u00b11.28)</td><td>69.74 (\u00b10.35)</td></tr><tr><td>Stove</td><td>67.23 (\u00b10.94)</td><td>68.28 (\u00b10.64)</td><td>69.89 (\u00b11.05)</td><td>67.19 (\u00b12.05)</td><td>68.24 (\u00b10.43)</td><td>69.92 (\u00b10.67)</td><td>70.67 (\u00b11.28)</td></tr><tr><td>Headphone</td><td>62.74 (\u00b10.62)</td><td>65.17 (\u00b11.21)</td><td>65.61(\u00b1 0.95)</td><td>61.36 (\u00b12.41)</td><td colspan=\"3\">65.90 (\u00b10.68) 64.18 (\u00b1 1.14) 68.18 (\u00b11.08)</td></tr><tr><td>TV</td><td>61.27 (\u00b10.46)</td><td>64.43 (\u00b10.36)</td><td>65.34 (\u00b11.55)</td><td>61.18 (\u00b11.37)</td><td>64.58 (\u00b10.42)</td><td>64.18 (\u00b10.65)</td><td>66.70 
(\u00b11.24)</td></tr><tr><td colspan=\"2\">Air-condition 61.63 (\u00b10.67)</td><td>65.77 (\u00b10.85)</td><td>66.22 (\u00b10.79)</td><td>63.87 (\u00b12.21)</td><td>67.10 (\u00b11.27)</td><td>65.10 (\u00b11.24)</td><td>69.66 (\u00b11.02)</td></tr><tr><td>Average</td><td>63.70 (\u00b10.71)</td><td>66.57 (\u00b10.93)</td><td>67.36 (\u00b11.04)</td><td>62.98 (\u00b12.37)</td><td>66.80 (\u00b10.75)</td><td>66.47 (\u00b10.93)</td><td>69.46 (\u00b10.97)</td></tr></table>", |
|
"text": ": Average accuracy (%) of each task (or domain) over 5 different task sequences for every candidate model under the lifelong learning setting. LSC and LNB don't have \u00b1sd as they are task sequence independent." |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Amazon-10: Average accuracy (%) of each task (or domain) over 5 different task sequences for every candidate model under the lifelong learning setting. LSC and LNB are not used here because their algorithms cannot handle more than 2 classes in a task." |
|
} |
|
} |
|
} |
|
} |