{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:33:21.967810Z"
},
"title": "Domain and Task-Informed Sample Selection for Cross-Domain Target-based Sentiment Analysis",
"authors": [
{
"first": "Kasturi",
"middle": [],
"last": "Bhattacharjee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Gangadharaiah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A challenge for target-based sentiment analysis is that most datasets are domain-specific and thus building supervised models for a new target domain requires substantial annotation effort. Domain adaptation for this task has two dimensions: the nature of the targets (e.g., entity types, properties associated with entities, or arbitrary spans) and the opinion words used to describe the sentiment towards the target. We present a data sampling strategy informed by the difference between the target and source domains across these two dimensions (i.e., targets and opinion words) with the goal of selecting a small number of examples that would be hard to learn in the new target domain compared to the source domain, and thus good candidates for annotation. This obtains performance in the 86-100% range compared to the full supervised model using only \u21e04-15% of the full training data.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "A challenge for target-based sentiment analysis is that most datasets are domain-specific and thus building supervised models for a new target domain requires substantial annotation effort. Domain adaptation for this task has two dimensions: the nature of the targets (e.g., entity types, properties associated with entities, or arbitrary spans) and the opinion words used to describe the sentiment towards the target. We present a data sampling strategy informed by the difference between the target and source domains across these two dimensions (i.e., targets and opinion words) with the goal of selecting a small number of examples that would be hard to learn in the new target domain compared to the source domain, and thus good candidates for annotation. This obtains performance in the 86-100% range compared to the full supervised model using only \u21e04-15% of the full training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Target-based sentiment analysis aims to detect sentiments associated with specific targets in a given document. For instance, in Table 1 , the targets service, decor, food, portions have positive sentiment whereas operating system and kim kardashian have a negative sentiment. A key challenge for this task is that domain differences manifest themselves in terms of target types as well as the choice of opinion words used to express the sentiments towards those targets. Current datasets vary in their types of targets such as entities of various types (e.g., Person, Location, Organization, Food), predefined aspect/property categories (e.g., quality and price) or arbitrary spans that can denote an event (\"The opening night was a success\"). For instance, as shown in Table 1 , for Restaurant reviews, one is likely to find target spans that are related to food (food, portions), ambience (decor)",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 771,
"end": 778,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The service is excellent, the decor is great, and the food is delicious and comes in large portions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restaurants",
"sec_num": null
},
{
"text": "I have had another Mac, but it got slow due to an older operating system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Laptops",
"sec_num": null
},
{
"text": "No, twitter, I don't want to follow kim kardashian -why is she famous btw or Chris Brown. (Pontiki et al., 2016) , Laptop review (Pontiki et al., 2014) , and Twitter dataset (Dong et al., 2014). or service. Tweets might contain celebrity references (kim kardashian) as targets, while a Laptop review is likely to have references to software (operating system). Moreover, sentiment expressions vary from domain-to-domain as well. As shown in Table 1 , we encounter sentiment expressions such as delicious for Restaurants domain, older for Laptops domain, and famous for Twitter that contains sentiment towards people.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Pontiki et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 129,
"end": 151,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 174,
"end": 194,
"text": "(Dong et al., 2014).",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Twitter",
"sec_num": null
},
{
"text": "Obtaining fine-grained sentiment annotations for specific spans of text is often time-consuming, expensive and requires domain expertise. Thus, we often encounter scenarios where we have labeled data from one or more domains (source domains) but none or very little labeled data from a new and different domain of interest (target domain). In this paper, we focus on a novel data sampling strategy for cross-domain target-based sentiment analysis that does not require sentiment labels but just the targets. It takes advantage of the two dimensions of domain differences for this task: targets and sentiment expressions. Our goal is complementary to work on transfer learning for domain adaptation for this task (Rietzler et al., 2020) .",
"cite_spans": [
{
"start": 712,
"end": 735,
"text": "(Rietzler et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter",
"sec_num": null
},
{
"text": "Our proposed selection strategy aims to pick examples that are informative and representative of the target domain. To capture informativeness, a commonly used criteria in active learning settings (Settles and Craven, 2008; McCallum and Nigam, 1998) , we use entropy-based sampling (Wang et al., 2017; Wang and Shang, 2014; Settles, 2009) . This helps us sample examples that the model is most uncertain about in its sentiment predictions for given targets. Although entropy-based sampling is popular in active learning settings, to the best of our knowledge, it has not been applied to the task of sample selection for cross-domain targeted sentiment analysis. Further, we use Relative Salience (Mohammad, 2011) to pick examples containing sentiment expressions that are more representative of the target domain w.r.t the source domain. The efficacy of our data sampling strategy is tested by comparing the performance of the trained models on the sampled data against models trained on strong baselines such as entropy-based sampling (Section 3). Our proposed sampling strategy achieves performance in the 86-100% range compared to the full supervised model using only \u21e04-15% of the full training data.",
"cite_spans": [
{
"start": 197,
"end": 223,
"text": "(Settles and Craven, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 224,
"end": 249,
"text": "McCallum and Nigam, 1998)",
"ref_id": "BIBREF2"
},
{
"start": 282,
"end": 301,
"text": "(Wang et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 302,
"end": 323,
"text": "Wang and Shang, 2014;",
"ref_id": "BIBREF10"
},
{
"start": 324,
"end": 338,
"text": "Settles, 2009)",
"ref_id": "BIBREF7"
},
{
"start": 696,
"end": 712,
"text": "(Mohammad, 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter",
"sec_num": null
},
{
"text": "We use three labeled datasets in English for targebased sentiment analysis that vary in domain -Se-mEval 2016 Task 5 (Pontiki et al., 2016) containing restaurant reviews (R); SemEval 2014 Task 4 (Pontiki et al., 2014) containing laptop reviews (L) and a Twitter dataset (T) introduced by Dong et al., which contains tweets about celebrities (Britney Spears, Lady Gaga), products (xbox, Windows 7), and companies (Google). A document for R and L refers to a sentence of a review, with most documents containing a single target, and some containing multiple targets as well (30% of R-train, 38% of L-train). A tweet is a document for T, with each of them containing a single target. R and T contain Positive, Negative and Neutral sentiment labels for the target spans while L contains Conflict as a sentiment label. To maintain parity with R and T, we drop the conflict label from L. We retain the original train-test splits for all 3 datasets. Additionally, we sample 10% of the training data at random for a validation set. ",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Pontiki et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Entropy-based Sampling. In order to sample documents that contain hard-to-classify spans from the target domain, we use an uncertainty-based sampling method, that uses entropy (Shannon, 1948) to discover documents containing targets the model is uncertain about. Let D s and D t represent the training data for the source and target domains respectively. For each document in D t , we predict the probability distribution over the 3 sentiment labels for each target, using a model trained on D s , and compute the entropy per target prediction. The average entropy across all targets of the document indicates the overall uncertainty for the document. This aims to select documents based on informativeness.",
"cite_spans": [
{
"start": 176,
"end": 191,
"text": "(Shannon, 1948)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Relative Salience (RS) based Sampling. We use Relative Salience (Mohammad, 2011) as a way to extract sentiment expressions that are more representative of the target domain when compared to the source domain. Based on the simplifying assumption that sentiment towards target spans are expressed through adjectives, we first extract all adjectives for each dataset using a Partsof-Speech tagger. For each cross-domain experiment, we compute the RS of an adjective w as, Table 3 ). For each cross-domain scenario, we select documents from the target training set that contain any of the top 10 adjectives with the highest RS score.",
"cite_spans": [
{
"start": 64,
"end": 80,
"text": "(Mohammad, 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 469,
"end": 476,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "RS(w|D s , D t ) = f t /N t f s /N s ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "RS+Entropy Sampling. Our proposed method of sampling involves selecting documents collected from both the Relative Salience and Entropy-based methods in different proportions for model training. Given the number of documents we wish to sample, the various combinations we experiment with include selecting 50%-50%, 30%-70% and 20%-80% from RS and entropy-based strategies, respectively. Depending on the combination, we first pick the top k documents ordered from highest to lowest entropy score, followed by the remaining number of documents picked from the RS set. In Table 4 , we provide a few document samples picked by RS and Entropy. As expected, the RS method picks examples containing sentiment expressions that are more relevant to the target domain. With L (source) ! R (target), we see sentiment expressions such as friendly, delicious and romantic that are more representative of the Restaurant domain (see Table 3 ). Meanwhile, the Entropy-based approach selects examples that the model is most uncertain about. For example, targets such as Lobster bisque are unlikely to be present in the Laptops domain and result in the model's uncertainty in predictions. A similar behavior is observed with R!L and L!T.",
"cite_spans": [],
"ref_spans": [
{
"start": 570,
"end": 577,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 919,
"end": 927,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The underlying model we use for target-based sentiment classification is a BERT model (Devlin et al., 2019) . The model accepts as input the entire document and target spans with boundaries. The document is first encoded by BERT and span boundaries are used to pool tokens to form a span representation. Using span representation and the document as context, we perform multi-class classification to predict the sentiment for each span, by minimizing cross-entropy loss across sentiment labels.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model & Experimental Setup",
"sec_num": "4"
},
{
"text": "Experimental Setup & Baselines. SemEval datasets both consists of reviews in two different domains (restaurants and laptops). For our experiments, we explore both (R!L) and (L!R) as cross-domain settings. Further, we use the Twitter dataset that is different in genre to both L and R, and choose L!T as the cross-domain setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model & Experimental Setup",
"sec_num": "4"
},
{
"text": "We first train the BERT model on labeled training data of the source domain. Documents from the target domain are then sampled using our proposed sampling method which is used to train the model. Model performance on target domain is reported using Macro F1. We experiment with a varying number n of sampled documents, starting with a small value (25 documents for Laptops and Restaurants, and 50 for Twitter) and going up to \u21e015% of the training data for our experiments. Our baselines includes selecting a subset of n documents from the target domain at random as well as selecting the top n using entropy-based sampling only. For each experiment, we use the corresponding validation set for hyper-parameter optimization. Table 6 : Targets from test set that were incorrectly labeled by model trained using entropy-based sampled data, but were correctly predicted by model trained using the RS+Entropy sampled data. 5 Results Figure 1 shows the mean Macro F1 scores (with standard deviation over 3 runs) for all three crossdomain settings with various sizes of sampled data. We find our proposed method to outperform both baselines for each cross-domain setting. In addition, Table 7 represents the amount of sampled data used by the model for training in these cross-domain settings and corresponding Macro F1 achieved as compared to a model trained with the full labeled training data. For R!L, we achieve 100% of Macro F1 as compared to the fully supervised case with only \u21e04% of the training documents (4% of training instances). For L!T, we obtain 92.26% of the supervised setting with \u21e011% of the training documents (\u21e011% of training instances). For L!R, our proposed method achieves within \u21e086.68% of the fully supervised setting with \u21e015% of the training documents (\u21e015% of training instances). 
Further, as shown in Error Analysis In Table 6 , we show examples of targets for each cross-domain setting for which the model trained on Entropy-based sampled data makes errors in prediction, while model trained on RS+Entropy sampled data predicts correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 724,
"end": 731,
"text": "Table 6",
"ref_id": null
},
{
"start": 928,
"end": 936,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1178,
"end": 1185,
"text": "Table 7",
"ref_id": "TABREF10"
},
{
"start": 1844,
"end": 1851,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model & Experimental Setup",
"sec_num": "4"
},
{
"text": "We propose a data sampling strategy for crossdomain target-based sentiment analysis that selects examples based on the two dimensions of domain differences for the task -targets and sentiment expressions. The proposed method combining Relative Salience and Entropy based sampling, when applied to three different cross-domain settings, is able to extract samples that are both informative and representative of the target domain. This helps the model achieve 86-100% of fully supervised performance using only 4-15% of the full training data, thus helping to reduce annotation cost. Further, it outperforms random and entropy-based baselines both in label-wise and overall model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Adaptive recursive neural network for target-dependent Twitter sentiment classification",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent Twitter sentiment clas- sification. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 49-54, Baltimore, Maryland. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Employing EM and Pool-Based Active Learning for Text Classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
}
],
"year": 1998,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. McCallum and K. Nigam. 1998. Employing EM and Pool-Based Active Learning for Text Classifica- tion. In ICML.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "From once upon a time to happily ever after: Tracking emotions in novels and fairy tales",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad. 2011. From once upon a time to happily ever after: Tracking emotions in novels and fairy tales. In Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Her- itage, Social Sciences, and Humanities, pages 105- 114, Portland, OR, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2016 task 5: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Al-Smadi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Orph\u00e9e",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Tannier",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Loukachevitch",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
},
{
"first": "Nuria",
"middle": [],
"last": "Bel",
"suffix": ""
},
{
"first": "Salud",
"middle": [
"Mar\u00eda"
],
"last": "Jim\u00e9nez-Zafra",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryigit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "19--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. SemEval-2016 task 5: Aspect based senti- ment analysis. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 19-30, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2014 task 4: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Harris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: As- pect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Rietzler",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Stabinger",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Engl",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4933--4941",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4933-4941, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2009. Active learning literature survey. Computer Sciences Technical Report 1648, Univer- sity of Wisconsin-Madison.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An analysis of active learning strategies for sequence labeling tasks",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1070--1079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles and Mark Craven. 2008. An analysis of ac- tive learning strategies for sequence labeling tasks. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 1070-1079, Honolulu, Hawaii. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "The Bell System Technical Journal",
"volume": "27",
"issue": "3",
"pages": "379--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. E. Shannon. 1948. A mathematical theory of com- munication. The Bell System Technical Journal, 27(3):379-423.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A new active labeling method for deep learning",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Shang",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "112--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Wang and Yi Shang. 2014. A new active la- beling method for deep learning. In 2014 In- ternational Joint Conference on Neural Networks (IJCNN), pages 112-119.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Uncertainty sampling based active learning with diversity constraint by sparse selection",
"authors": [
{
"first": "Gaoang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jenq-Neng",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Farron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaoang Wang, Jenq-Neng Hwang, Craig Rose, and Far- ron Wallace. 2017. Uncertainty sampling based ac- tive learning with diversity constraint by sparse se- lection. In 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), pages 1- 6.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "F1 on the corresponding test sets (a) Laptops for R!L (b) Restaurants for L!R (c) Twitter for L!T.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "Target spans (in bold) and sentiment expressions (italicized) from Restaurant review",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "Dataset stats. R=SemEval 2016 Restaurant Reviews, L=SemEval 2014 Laptop Reviews,",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>T=Twitter.</td><td>Pos=Positive, Neg=Negative and</td></tr><tr><td colspan=\"2\">Neu=Neutral sentiments.</td></tr><tr><td>Setting</td><td>Highest RS scoring words</td></tr><tr><td>R!L</td><td>easy, new, other, same, many, perfect</td></tr><tr><td>L!R</td><td>good, delicious, friendly, attentive, romantic</td></tr><tr><td>L!T</td><td>new, real, bad, last, famous, dead</td></tr></table>"
},
"TABREF3": {
"text": "Words with highest Relative Salience (RS) scores for each cross-domain setting.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF4": {
"text": "try the seasonal, and always delicious, specials. Entropy I had Lobster Bisque it has 2 oz. of Maine Lobster in it.R!L Relative Salience I like how the Mac OS is so simple and easy to use.Entropy pros: the macbook pro notebook has a large battery life and you wont have to worry to charge your laptop every five hours or so. Sonny helped me grow, and become more aware of the media, and paparazzi, and the famous life. It makes me think twice.\" -demi lovato.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Setting</td><td>Sampling Strategy</td><td>Sample Documents Picked</td></tr><tr><td colspan=\"3\">L!R Be sure to L!T Relative Salience Relative Salience Gorbachev's 80th birthday was a huge success! among the guests were arnold \"Entropy schwarzenegger , Sharon Stone and Kevin Spacey. Exciting!</td></tr></table>"
},
"TABREF5": {
"text": "Examples selected by RS-based and Entropy-based sampling for various cross-domain settings. Italics shows sentiment expressions used by RS, while bold shows the targets picked by the Entropy-based method.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>t stand for source and target respectively. Note that</td></tr><tr><td>labels are not considered for this, just the raw doc-</td></tr><tr><td>uments. Thus, RS score of a sentiment expression</td></tr><tr><td>captures its importance in the target domain, w.r.t</td></tr><tr><td>the source domain (see examples in</td></tr></table>"
},
"TABREF7": {
"text": "F1 for each sentiment class obtained using various sampling strategies.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF8": {
"text": "Quality night , amazing costumes but got ta say lady gaga was the best though.. poor gaga left shoes and phone in my car ha Negative Positive",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Setting</td><td>Samples</td><td>Entropy</td><td>RS+Entropy</td></tr><tr><td>R!L</td><td>Price was higher when purchased on MAC when compared to price showing on PC when I bought this product.</td><td>Neutral</td><td>Negative</td></tr><tr><td>L!R</td><td>Nice ambience, but highly overrated place.</td><td>Neutral</td><td>Positive</td></tr><tr><td>L!T</td><td/><td/><td/></tr></table>"
},
"TABREF9": {
"text": "RS+Entropy strategy outperforms both Entropy and Random baselines for each class, across all cross-domain settings.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Setting</td><td>% of Supervised Model Macro F1</td><td>%Train</td></tr><tr><td>R!L L!T L!R</td><td>100 92.26 86.68</td><td>\u21e04 \u21e011 \u21e015</td></tr></table>"
},
"TABREF10": {
"text": "Comparison with fully supervised setting.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}