{
"paper_id": "Q18-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:10:21.957519Z"
},
"title": "Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis",
"authors": [
{
"first": "Stefanos",
"middle": [],
"last": "Angelidis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": "[email protected]"
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MILstyle sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.",
"pdf_parse": {
"paper_id": "Q18-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MILstyle sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment analysis has become a fundamental area of research in Natural Language Processing thanks to the proliferation of user-generated content in the form of online reviews, blogs, internet forums, and social media. A plethora of methods have been proposed in the literature that attempt to distill sentiment information from text, allowing users and service providers to make opinion-driven decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The success of neural networks in a variety of applications (Bahdanau et al., 2015; Le and Mikolov, 2014; Socher et al., 2013) and the availability of large amounts of labeled data have led to an increased focus on sentiment classification. Supervised models are typically trained on documents (Johnson and Zhang, 2015a; Johnson and Zhang, 2015b; Tang et al., 2015; Yang et al., 2016) , sentences (Kim, 2014) , or phrases (Socher et al., 2011; [Rating: ] I had a very mixed experience at The Stand.",
"cite_spans": [
{
"start": 60,
"end": 83,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 84,
"end": 105,
"text": "Le and Mikolov, 2014;",
"ref_id": "BIBREF25"
},
{
"start": 106,
"end": 126,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF40"
},
{
"start": 294,
"end": 320,
"text": "(Johnson and Zhang, 2015a;",
"ref_id": "BIBREF20"
},
{
"start": 321,
"end": 346,
"text": "Johnson and Zhang, 2015b;",
"ref_id": "BIBREF21"
},
{
"start": 347,
"end": 365,
"text": "Tang et al., 2015;",
"ref_id": "BIBREF44"
},
{
"start": 366,
"end": 384,
"text": "Yang et al., 2016)",
"ref_id": "BIBREF53"
},
{
"start": 397,
"end": 408,
"text": "(Kim, 2014)",
"ref_id": "BIBREF23"
},
{
"start": 422,
"end": 443,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The burger and fries were good. The chocolate shake was divine: rich and creamy. The drive-thru was horrible. It took us at least 30 minutes to order when there were only four cars in front of us. We complained about the wait and got a half-hearted apology. I would go back because the food is good, but my only hesitation is the wait.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "+ The burger and fries were good + The chocolate shake was divine + I would go back because the food is good -The drive-thru was horrible -It took us at least 30 minutes to order Figure 1 : An EDU-based summary of a 2-out-of-5 star review with positive and negative snippets. Socher et al., 2013) annotated with sentiment labels and used to predict sentiment in unseen texts. Coarse-grained document-level annotations are relatively easy to obtain due to the widespread use of opinion grading interfaces (e.g., star ratings accompanying reviews). In contrast, the acquisition of sentence-or phrase-level sentiment labels remains a laborious and expensive endeavor despite its relevance to various opinion mining applications, e.g., detecting or summarizing consumer opinions in online product reviews. The usefulness of finer-grained sentiment analysis is illustrated in the example of Figure 1 , where snippets of opposing polarities are extracted from a 2-star restaurant review. Although, as a whole, the review conveys negative sentiment, aspects of the reviewer's experience were clearly positive. This goes largely unnoticed when focusing solely on the review's overall rating.",
"cite_spans": [
{
"start": 276,
"end": 296,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 179,
"end": 187,
"text": "Figure 1",
"ref_id": null
},
{
"start": 886,
"end": 894,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "In this work, we consider the problem of segmentlevel sentiment analysis from the perspective of Multiple Instance Learning (MIL; Keeler, 1991) .",
"cite_spans": [
{
"start": 130,
"end": 143,
"text": "Keeler, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "Instead of learning from individually labeled segments, our model only requires document-level supervision and learns to introspectively judge the sentiment of constituent segments. Beyond showing how to utilize document collections of rated reviews to train fine-grained sentiment predictors, we also investigate the granularity of the extracted segments. Previous research (Tang et al., 2015; Yang et al., 2016; Cheng and Lapata, 2016; Nallapati et al., 2017) has predominantly viewed documents as sequences of sentences. Inspired by recent work in summarization (Li et al., 2016) and sentiment classification (Bhatia et al., 2015) , we also represent documents via Rhetorical Structure Theory's (Mann and Thompson, 1988) Elementary Discourse Units (EDUs). Although definitions for EDUs vary in the literature, we follow standard practice and take the elementary units of discourse to be clauses (Carlson et al., 2003) . We employ a state-of-the-art discourse parser (Feng and Hirst, 2012) to identify them.",
"cite_spans": [
{
"start": 375,
"end": 394,
"text": "(Tang et al., 2015;",
"ref_id": "BIBREF44"
},
{
"start": 395,
"end": 413,
"text": "Yang et al., 2016;",
"ref_id": "BIBREF53"
},
{
"start": 414,
"end": 437,
"text": "Cheng and Lapata, 2016;",
"ref_id": "BIBREF8"
},
{
"start": 438,
"end": 461,
"text": "Nallapati et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 565,
"end": 582,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 612,
"end": 633,
"text": "(Bhatia et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 698,
"end": 723,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF28"
},
{
"start": 898,
"end": 920,
"text": "(Carlson et al., 2003)",
"ref_id": "BIBREF7"
},
{
"start": 969,
"end": 991,
"text": "(Feng and Hirst, 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "Our contributions in this work are three-fold: a novel multiple instance learning neural model which utilizes document-level sentiment supervision to judge the polarity of its constituent segments; the creation of SPOT, a publicly available dataset which contains Segment-level POlariTy annotations (for sentences and EDUs) and can be used for the evaluation of MIL-style models like ours; and the empirical finding (through automatic and human-based evaluation) that neural multiple instance learning is superior to more conventional neural architectures and other baselines on detecting segment sentiment and extracting informative opinions in reviews. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "Our work lies at the intersection of multiple research areas, including sentiment classification, opinion mining and multiple instance learning. We review related work in these areas below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Sentiment Classification Sentiment classification is one of the most popular tasks in sentiment analysis. Early work focused on unsupervised methods and the creation of sentiment lexicons (Turney, 2002; Hu and Liu, 2004; Wiebe et al., 2005; Baccianella et al., 2010) based on which the overall po-larity of a text can be computed (e,g., by aggregating the sentiment scores of constituent words). More recently, Taboada et al. (2011) introduced SO-CAL, a state-of-the-art method that combines a rich sentiment lexicon with carefully defined rules over syntax trees to predict sentence sentiment.",
"cite_spans": [
{
"start": 188,
"end": 202,
"text": "(Turney, 2002;",
"ref_id": "BIBREF45"
},
{
"start": 203,
"end": 220,
"text": "Hu and Liu, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 221,
"end": 240,
"text": "Wiebe et al., 2005;",
"ref_id": "BIBREF48"
},
{
"start": 241,
"end": 266,
"text": "Baccianella et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 411,
"end": 432,
"text": "Taboada et al. (2011)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Supervised learning techniques have subsequently dominated the literature (Pang et al., 2002; Pang and Lee, 2005; Qu et al., 2010; Xia and Zong, 2010; Wang and Manning, 2012; Le and Mikolov, 2014) thanks to user-generated sentiment labels or large-scale crowd-sourcing efforts (Socher et al., 2013) . Neural network models in particular have achieved state-of-the-art performance on various sentiment classification tasks due to their ability to alleviate feature engineering. Kim (2014) introduced a very successful CNN architecture for sentence-level classification, whereas other work (Socher et al., 2011; Socher et al., 2013) uses recursive neural networks to learn sentiment for segments of varying granularity (i.e., words, phrases, and sentences). We describe Kim's (2014) approach in more detail as it is also used as part of our model.",
"cite_spans": [
{
"start": 74,
"end": 93,
"text": "(Pang et al., 2002;",
"ref_id": "BIBREF35"
},
{
"start": 94,
"end": 113,
"text": "Pang and Lee, 2005;",
"ref_id": "BIBREF34"
},
{
"start": 114,
"end": 130,
"text": "Qu et al., 2010;",
"ref_id": "BIBREF38"
},
{
"start": 131,
"end": 150,
"text": "Xia and Zong, 2010;",
"ref_id": "BIBREF50"
},
{
"start": 151,
"end": 174,
"text": "Wang and Manning, 2012;",
"ref_id": "BIBREF46"
},
{
"start": 175,
"end": 196,
"text": "Le and Mikolov, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 277,
"end": 298,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF40"
},
{
"start": 477,
"end": 487,
"text": "Kim (2014)",
"ref_id": "BIBREF23"
},
{
"start": 588,
"end": 609,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF39"
},
{
"start": 610,
"end": 630,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF40"
},
{
"start": 768,
"end": 780,
"text": "Kim's (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Let x i denote a k-dimensional word embedding of the i-th word in text segment s of length n. The segment's input representation is the concatenation of word embeddings x 1 , . . . , x n , resulting in word matrix X. Let X i:i+j refer to the concatenation of embeddings x i , . . . , x i+j . A convolution filter W \u2208 R lk , applied to a window of l words, produces a new feature",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "c i = ReLU(W \u2022 X i:i+l + b),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "where ReLU is the Rectified Linear Unit non-linearity, '\u2022' denotes the entrywise product followed by a sum over all elements and b \u2208 R is a bias term. Applying the same filter to every possible window of word vectors in the segment, produces a feature map c = [c 1 , c 2 , . . . , c n\u2212l+1 ]. Multiple feature maps for varied window sizes are applied, resulting in a fixed-size segment representation v via max-overtime pooling. We will refer to the application of convolution to an input word matrix X, as CNN(X). A final sentiment prediction is produced using a softmax classifier and the model is trained via backpropagation using sentence-level sentiment labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
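{
"text": "To make the encoder concrete, the following is a minimal PyTorch sketch of CNN(X) as described above (illustrative only, not the authors' released code; class and variable names are ours, and the window sizes and feature-map counts follow the configuration reported in Section 5.3):

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNEncoder(nn.Module):
    # Encodes a segment's word-embedding matrix X into a fixed-size vector v = CNN(X).
    def __init__(self, emb_dim=300, windows=(3, 4, 5), maps_per_window=100):
        super().__init__()
        # One convolution per window size l; each filter computes
        # c_i = ReLU(W · X_{i:i+l} + b) over every window of l words.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, maps_per_window, kernel_size=l) for l in windows]
        )

    def forward(self, X):
        # X: (batch, n, emb_dim); Conv1d expects (batch, emb_dim, n).
        X = X.transpose(1, 2)
        # Max-over-time pooling collapses each feature map c to a single value.
        pooled = [F.relu(conv(X)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)  # (batch, 300) with the defaults above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},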
{
"text": "The availability of large-scale datasets (Diao et al., 2014; Tang et al., 2015) has also led to the development of document-level sentiment classifiers which exploit hierarchical neural representations.",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Diao et al., 2014;",
"ref_id": "BIBREF13"
},
{
"start": 61,
"end": 79,
"text": "Tang et al., 2015)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "These are obtained by first building representations of sentences and aggregating those into a document feature vector (Tang et al., 2015) . Yang et al. (2016) further acknowledge that words and sentences are deferentially important in different contexts. They present a model which learns to attend (Bahdanau et al., 2015) to individual text parts when constructing document representations. We describe such an architecture in more detail as we use it as a point of comparison with our own model. Given document d comprising segments (s 1 , . . . , s m ), a Hierarchical Network with attention (henceforth HIERNET; based on Yang et al., 2016) produces segment representations (v 1 , . . . , v m ) which are subsequently fed into a bidirectional GRU module (Bahdanau et al., 2015) , whose resulting hidden vectors (h 1 , . . . , h m ) are used to produce attention weights (a 1 , . . . , a m ) (see Section 3.2 for more details on the attention mechanism). A document is represented as the weighted average of the segments' hidden vec-",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Tang et al., 2015)",
"ref_id": "BIBREF44"
},
{
"start": 141,
"end": 159,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF53"
},
{
"start": 300,
"end": 323,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 758,
"end": 781,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "tors v d = i a i h i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A final sentiment prediction is obtained using a softmax classifier and the model is trained via back-propagation using document-level sentiment labels. The architecture is illustrated in Figure 2 (a). In their proposed model, Yang et al. (2016) use bidirectional GRU modules to represent segments as well as documents, whereas we use a more efficient CNN encoder to compose words into segment vectors 2 (i.e., v i = CNN(X i )). Note that models like HIERNET do not naturally predict sentiment for individual segments; we discuss how they can be used for segment-level opinion extraction in Section 5.2.",
"cite_spans": [
{
"start": 227,
"end": 245,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
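{
"text": "For comparison with our model, the HIERNET-style composition just described can be sketched as follows (illustrative PyTorch, not the authors' code; we feed it CNN segment vectors as in our variant, whereas Yang et al. (2016) also encode words with a bidirectional GRU):

import torch
import torch.nn as nn

class HierNet(nn.Module):
    # Attend over segment vectors, build one document vector, classify once.
    def __init__(self, seg_dim=300, gru_dim=50, att_dim=100, n_classes=5):
        super().__init__()
        self.gru = nn.GRU(seg_dim, gru_dim, bidirectional=True, batch_first=True)
        self.att_mlp = nn.Linear(2 * gru_dim, att_dim)
        self.att_key = nn.Parameter(torch.randn(att_dim))  # trained key
        self.clf = nn.Linear(2 * gru_dim, n_classes)

    def forward(self, V):
        # V: (batch, m, seg_dim) segment vectors v_1, ..., v_m.
        H, _ = self.gru(V)                                # hidden vectors h_i
        a = torch.softmax(torch.tanh(self.att_mlp(H)) @ self.att_key, dim=1)
        v_d = (a.unsqueeze(-1) * H).sum(dim=1)            # v_d = sum_i a_i h_i
        return torch.softmax(self.clf(v_d), dim=-1)       # document prediction only",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},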
{
"text": "Our own work draws inspiration from representation learning (Tang et al., 2015; Kim, 2014) , especially the idea that not all parts of a document convey sentiment-worthy clues (Yang et al., 2016) . Our model departs from previous approaches in that it provides a natural way of predicting the polarity of individual text segments without requiring segment-level annotations. Moreover, our attention mechanism directly facilitates opinion detection rather than simply aggregating sentence representations into a single document vector.",
"cite_spans": [
{
"start": 60,
"end": 79,
"text": "(Tang et al., 2015;",
"ref_id": "BIBREF44"
},
{
"start": 80,
"end": 90,
"text": "Kim, 2014)",
"ref_id": "BIBREF23"
},
{
"start": 176,
"end": 195,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Opinion Mining A standard setting for opinion mining and summarization (Lerman et al., 2009; Carenini et al., 2006; Ganesan et al., 2010; Di Fabbrizio et al., 2014; Gerani et al., 2014) assumes a set of documents that contain opinions about some entity of interest (e.g., camera). The goal of the system is to generate a summary that is representative of the average opinion and speaks to its important aspects (e.g., picture quality, battery life, value). Output summaries can be extractive (Lerman et al., 2009) or abstractive (Gerani et al., 2014; Di Fabbrizio et al., 2014) and the underlying systems exhibit varying degrees of linguistic sophistication from identifying aspects (Lerman et al., 2009) to using RSTstyle discourse analysis, and manually defined templates (Gerani et al., 2014; Di Fabbrizio et al., 2014) .",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Lerman et al., 2009;",
"ref_id": "BIBREF26"
},
{
"start": 93,
"end": 115,
"text": "Carenini et al., 2006;",
"ref_id": "BIBREF6"
},
{
"start": 116,
"end": 137,
"text": "Ganesan et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 138,
"end": 164,
"text": "Di Fabbrizio et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 165,
"end": 185,
"text": "Gerani et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 492,
"end": 513,
"text": "(Lerman et al., 2009)",
"ref_id": "BIBREF26"
},
{
"start": 529,
"end": 550,
"text": "(Gerani et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 551,
"end": 577,
"text": "Di Fabbrizio et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 683,
"end": 704,
"text": "(Lerman et al., 2009)",
"ref_id": "BIBREF26"
},
{
"start": 774,
"end": 795,
"text": "(Gerani et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 796,
"end": 822,
"text": "Di Fabbrizio et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our proposed method departs from previous work in that it focuses on detecting opinions in individual documents. Given a review, we predict the polarity of every segment, allowing for the extraction of sentiment-heavy opinions. We explore the usefulness of EDU segmentation inspired by Li et al. 2016, who show that EDU-based summaries align with near-extractive summaries constructed by news editors. Importantly, our model is trained in a weakly-supervised fashion on large scale document classification datasets without recourse to finegrained labels or gold-standard opinion summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Multiple Instance Learning Our models adopt a Multiple Instance Learning (MIL) framework. MIL deals with problems where labels are associated with groups of instances or bags (documents in our case), while instance labels (segment-level polarities) are unobserved. An aggregation function is used to combine instance predictions and assign labels on the bag level. The goal is either to label bags (Keeler and Rumelhart, 1992; Dietterich et al., 1997; Maron and Ratan, 1998) or to simultaneously infer bag and instance labels (Zhou et al., 2009; Wei et al., 2014; Kotzias et al., 2015) . We view segment-level sentiment analysis as an instantiation of the latter variant.",
"cite_spans": [
{
"start": 398,
"end": 426,
"text": "(Keeler and Rumelhart, 1992;",
"ref_id": "BIBREF22"
},
{
"start": 427,
"end": 451,
"text": "Dietterich et al., 1997;",
"ref_id": "BIBREF14"
},
{
"start": 452,
"end": 474,
"text": "Maron and Ratan, 1998)",
"ref_id": null
},
{
"start": 526,
"end": 545,
"text": "(Zhou et al., 2009;",
"ref_id": "BIBREF57"
},
{
"start": 546,
"end": 563,
"text": "Wei et al., 2014;",
"ref_id": "BIBREF52"
},
{
"start": 564,
"end": 585,
"text": "Kotzias et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Initial MIL efforts for binary classification made the strong assumption that a bag is negative only if all of its instances are negative, and positive otherwise (Dietterich et al., 1997; Maron and Ratan, 1998; Zhang et al., 2002; Andrews and Hofmann, 2004; Carbonetto et al., 2008) . Subsequent work re-laxed this assumption, allowing for prediction combinations better suited to the tasks at hand. Weidmann et al. (2003) introduced a generalized MIL framework, where a combination of instance types is required to assign a bag label. Zhou et al. (2009) used graph kernels to aggregate predictions, exploiting relations between instances in object and text categorization. Xu and Frank (2004) proposed a multiple-instance logistic regression classifier where instance predictions were simply averaged, assuming equal and independent contribution toward bag classification. More recently, Kotzias et al. (2015) used sentence vectors obtained by a pre-trained hierarchical CNN (Denil et al., 2014) as features under an unweighted average MIL objective. Prediction averaging was further extended by Pappas and Popescu-Belis (2014; , who used a weighted summation of predictions, an idea which we also adopt in our work.",
"cite_spans": [
{
"start": 162,
"end": 187,
"text": "(Dietterich et al., 1997;",
"ref_id": "BIBREF14"
},
{
"start": 188,
"end": 210,
"text": "Maron and Ratan, 1998;",
"ref_id": null
},
{
"start": 211,
"end": 230,
"text": "Zhang et al., 2002;",
"ref_id": "BIBREF55"
},
{
"start": 231,
"end": 257,
"text": "Andrews and Hofmann, 2004;",
"ref_id": "BIBREF0"
},
{
"start": 258,
"end": 282,
"text": "Carbonetto et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 400,
"end": 422,
"text": "Weidmann et al. (2003)",
"ref_id": null
},
{
"start": 536,
"end": 554,
"text": "Zhou et al. (2009)",
"ref_id": "BIBREF57"
},
{
"start": 674,
"end": 693,
"text": "Xu and Frank (2004)",
"ref_id": "BIBREF51"
},
{
"start": 889,
"end": 910,
"text": "Kotzias et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 976,
"end": 996,
"text": "(Denil et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 1097,
"end": 1128,
"text": "Pappas and Popescu-Belis (2014;",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Applications of MIL are many and varied. MIL was first explored by Keeler and Rumelhart (1992) for recognizing handwritten post codes, where the position and value of individual digits was unknown. MIL techniques have since been applied to drug activity prediction (Dietterich et al., 1997) , image retrieval (Maron and Ratan, 1998; Zhang et al., 2002) , object detection (Zhang et al., 2006; Carbonetto et al., 2008; Cour et al., 2011 ), text classification (Andrews and Hofmann, 2004), image captioning (Wu et al., 2015) , paraphrase detection (Xu et al., 2014) , and information extraction (Hoffmann et al., 2011) .",
"cite_spans": [
{
"start": 67,
"end": 94,
"text": "Keeler and Rumelhart (1992)",
"ref_id": "BIBREF22"
},
{
"start": 265,
"end": 290,
"text": "(Dietterich et al., 1997)",
"ref_id": "BIBREF14"
},
{
"start": 309,
"end": 332,
"text": "(Maron and Ratan, 1998;",
"ref_id": null
},
{
"start": 333,
"end": 352,
"text": "Zhang et al., 2002)",
"ref_id": "BIBREF55"
},
{
"start": 372,
"end": 392,
"text": "(Zhang et al., 2006;",
"ref_id": "BIBREF56"
},
{
"start": 393,
"end": 417,
"text": "Carbonetto et al., 2008;",
"ref_id": "BIBREF5"
},
{
"start": 418,
"end": 435,
"text": "Cour et al., 2011",
"ref_id": "BIBREF9"
},
{
"start": 505,
"end": 522,
"text": "(Wu et al., 2015)",
"ref_id": "BIBREF49"
},
{
"start": 546,
"end": 563,
"text": "(Xu et al., 2014)",
"ref_id": "BIBREF52"
},
{
"start": 593,
"end": 616,
"text": "(Hoffmann et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "When applied to sentiment analysis, MIL takes advantage of supervision signals on the document level in order to train segment-level sentiment predictors. Although their work is not couched in the framework of MIL, T\u00e4ckstr\u00f6m and McDonald (2011) show how sentence sentiment labels can be learned as latent variables from document-level annotations using hidden conditional random fields. Pappas and Popescu-Belis (2014) use a multiple instance regression model to assign sentiment scores to specific aspects of products. The Group-Instance Cost Function (GICF), proposed by Kotzias et al. (2015) , averages sentence sentiment predictions during trainng, while ensuring that similar sentences receive similar polarity labels. Their work uses a pre-trained hierarchical CNN to obtain sentence embeddings, but is not trainable end-to-end, in contrast with our proposed network. Additionally, none of the aforementioned efforts explicitly evaluate opinion extraction quality.",
"cite_spans": [
{
"start": 215,
"end": 244,
"text": "T\u00e4ckstr\u00f6m and McDonald (2011)",
"ref_id": "BIBREF42"
},
{
"start": 573,
"end": 594,
"text": "Kotzias et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In this section we describe how multiple instance learning can be used to address some of the drawbacks seen in previous approaches, namely the need for expert knowledge in lexicon-based sentiment analysis (Taboada et al., 2011) , expensive finegrained annotation on the segment level (Kim, 2014; Socher et al., 2013) or the inability to naturally predict segment sentiment (Yang et al., 2016) .",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Taboada et al., 2011)",
"ref_id": "BIBREF41"
},
{
"start": 285,
"end": 296,
"text": "(Kim, 2014;",
"ref_id": "BIBREF23"
},
{
"start": 297,
"end": 317,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF40"
},
{
"start": 374,
"end": 393,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Under multiple instance learning (MIL), a dataset D is a collection of labeled bags, each of which is a group of unlabeled instances. Specifically, each document d is a sequence (bag) of segments (instances). This sequence d = (s 1 , s 2 , . . . , s m ) is obtained from a document segmentation policy (see Section 4 for details). A discrete sentiment label",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "y d \u2208 [1, C]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "is associated with each document, where the labelset is ordered and classes 1 and C correspond to maximally negative and maximally positive sentiment. It is assumed that y d is an unknown function of the unobserved segment-level labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y d = f (y 1 , y 2 , . . . , y m )",
"eq_num": "(1)"
}
],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "Probabilistic sentiment classifiers will produce document-level predictions\u0177 d by selecting the most probable class according to class distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "p d = p (1) d , . . . , p (C) d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": ". In a non-MIL framework a classifier would learn to predict the document's sentiment by directly conditioning on its segments' feature representations or their aggregate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p d =f \u03b8 (v 1 , v 2 , . . . , v m )",
"eq_num": "(2)"
}
],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "In contrast, a MIL classifier will produce a class distribution p i for each segment and additionally learn to combine these into a document-level prediction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p i =\u011d \u03b8s (v i ) ,",
"eq_num": "(3)"
}
],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p d =f \u03b8 d (p 1 , p 2 , . . . , p m ) .",
"eq_num": "(4)"
}
],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "In this work,\u011d andf are defined using a single neural network, described below. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
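{
"text": "To make the decomposition in Equations (3) and (4) concrete, the sketch below (illustrative Python, not the authors' code) instantiates ĝ as a softmax classifier shared across segments and f̂ as the simplest possible combination, an unweighted average; MILNET replaces this average with the attention-weighted sum of Section 3.2:

import torch
import torch.nn as nn

class SimpleMIL(nn.Module):
    # Eq. (3): p_i = g(v_i) per segment; Eq. (4): p_d = f(p_1, ..., p_m).
    def __init__(self, seg_dim=300, n_classes=5):
        super().__init__()
        self.g = nn.Linear(seg_dim, n_classes)  # shared segment classifier

    def forward(self, V):
        # V: (batch, m, seg_dim) segment feature vectors.
        P = torch.softmax(self.g(V), dim=-1)    # segment distributions p_i
        p_d = P.mean(dim=1)                     # f = unweighted average
        return P, p_d                           # instance and bag predictions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},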
{
"text": "Hierarchical neural models like HIERNET have been used to predict document-level polarity by first encoding sentences and then combining these representations into a document vector. Hierarchical vector composition produces powerful sentiment predictors, but lacks the ability to introspectively judge the polarity of individual segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Instance Learning Network",
"sec_num": "3.2"
},
{
"text": "Our Multiple Instance Learning Network (henceforth MILNET) is based on the following intuitive assumptions about opinionated text. Each segment conveys a degree of sentiment polarity, ranging from very negative to very positive. Additionally, segments have varying degrees of importance, in relation to the overall opinion of the author. The overarching polarity of a text is an aggregation of segment polarities, weighted by their importance. Thus, our model attempts to predict the polarity of segments and decides which parts of the document are good indicators of its overall sentiment, allowing for the detection of sentiment-heavy opinions. An illustration of MILNET is shown in Figure 2(b) ; the model consists of three components: a CNN segment encoder, a softmax segment classifier and an attentionbased prediction weighting module.",
"cite_spans": [],
"ref_spans": [
{
"start": 685,
"end": 696,
"text": "Figure 2(b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Multiple Instance Learning Network",
"sec_num": "3.2"
},
{
"text": "v i = CNN(X i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "is produced for each segment, using the CNN architecture described in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "Segment Classification Obtaining a separate representation v i for every segment in a document allows us to produce individual segment sentiment predictions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "p i = p (1) i , . . . , p (C) i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": ". This is achieved using a softmax classifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p i = softmax(W c v i + b c ) ,",
"eq_num": "(5)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "where W c and b c are the classifier's parameters, shared across all segments. Individual distributions p i are shown in Figure 2 (b) as small bar-charts. Document Classification In the simplest case, document-level predictions can be produced by taking the average of segment class distributions:",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "p (c) d = 1 / m i p (c) i , c \u2208 [1, C]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": ". This is, however, a crude way of combining segment sentiment, as not all parts of a document convey important sentiment clues. We opt for a segment attention mechanism which rewards text units that are more likely to be good sentiment predictors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "Our attention mechanism is based on a bidirectional GRU component (Bahdanau et al., 2015) and",
"cite_spans": [
{
"start": 66,
"end": 89,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "The starters were quite bland. I didn't enjoy most of them, but the burger was brilliant! inspired by Yang et al. (2016) . However, in contrast to their work, where attention is used to combine sentence representations into a single document vector, we utilize a similar technique to aggregate individual sentiment predictions.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "We first use separate GRU modules to produce forward and backward hidden vectors, which are then concatenated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 h i = \u2212 \u2212\u2212 \u2192 GRU(v i ), (6) \u2190 \u2212 h i = \u2190 \u2212\u2212 \u2212 GRU(v i ),",
"eq_num": "(7)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = [ \u2212 \u2192 h i , \u2190 \u2212 h i ], i \u2208 [1, m] .",
"eq_num": "(8)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "The importance of each segment is measured with the aid of a vector h a , as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = tanh(W a h i + b a ) ,",
"eq_num": "(9)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = exp(h T i h a ) i exp(h T i h a ) ,",
"eq_num": "(10)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "where Equation (9) defines a one-layer MLP that produces an attention vector for the i-th segment. Attention weights a i are computed as the normalized similarity of each h i with h a . Vector h a , which is randomly initialized and learned during training, can be thought of as a trained key, able to recognize sentiment-heavy segments. The attention mechanism is depicted in the dashed box of Figure 2 , with attention weights shown as shaded circles. Finally, we obtain a document-level distribution over sentiment labels as the weighted sum of segment distributions (see top of Figure 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 395,
"end": 403,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 582,
"end": 590,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(b)): p (c) d = i a i p (c) i , c \u2208 [1, C] .",
"eq_num": "(11)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
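{
"text": "Putting Equations (5)-(11) together, MILNET's prediction-weighting module can be sketched as follows (illustrative PyTorch, not the authors' code; dimensionalities follow Section 5.3 and all names are ours):

import torch
import torch.nn as nn

class MILNetAggregator(nn.Module):
    # Attention over segments, applied to the segment distributions p_i.
    def __init__(self, seg_dim=300, gru_dim=50, att_dim=100, n_classes=5):
        super().__init__()
        self.seg_clf = nn.Linear(seg_dim, n_classes)    # Eq. (5)
        self.gru = nn.GRU(seg_dim, gru_dim, bidirectional=True, batch_first=True)
        self.att_mlp = nn.Linear(2 * gru_dim, att_dim)  # Eq. (9)
        self.h_a = nn.Parameter(torch.randn(att_dim))   # trained key, Eq. (10)

    def forward(self, V):
        # V: (batch, m, seg_dim) segment encodings v_i = CNN(X_i).
        P = torch.softmax(self.seg_clf(V), dim=-1)      # p_i, Eq. (5)
        H, _ = self.gru(V)                              # h_i, Eqs. (6)-(8)
        a = torch.softmax(torch.tanh(self.att_mlp(H)) @ self.h_a, dim=1)  # a_i
        p_d = (a.unsqueeze(-1) * P).sum(dim=1)          # Eq. (11)
        return P, a, p_d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},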
{
"text": "Training The model is trained end-to-end on documents with user-generated sentiment labels. We use the negative log likelihood of the document-level prediction as an objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 d log p (y d ) d",
"eq_num": "(12)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
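{
"text": "Given the aggregator sketched above, the objective in Equation (12) is simply the negative log-likelihood of the document-level prediction (sketch; p_d is the (batch, C) output of the aggregator and y holds the gold document labels):

# p_d: (batch, C) document distributions; y: (batch,) gold labels in [0, C-1]
loss = -torch.log(p_d.gather(1, y.unsqueeze(1)).squeeze(1)).mean()
loss.backward()  # end-to-end training from document labels only",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},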
{
"text": "4 Polarity-based Opinion Extraction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "After training, our model can produce segment-level sentiment predictions for unseen texts in the form of class probability distributions. A direct application of our method is opinion extraction, where highly positive and negative snippets are selected from the original document, producing extractive sentiment summaries, as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "Polarity Scoring In order to extract opinion summaries, we need to rank segments according to their sentiment polarity. We introduce a method that takes our model's confidence in the prediction into account, by reducing each segment's class probability distribution p i to a single real-valued polarity score. To achieve this, we first define a real-valued class weight vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "w = w (1) , . . . , w (C) | w (c) \u2208 [\u22121, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "that assigns uniformly-spaced weights to the ordered labelset, such that w (c+1) \u2212 w (c) = 2 C\u22121 . For example, in a 5-class scenario, the class weight vector would be w = \u22121, \u22120.5, 0, 0.5, 1 . We compute the polarity score of a segment as the dot-product of the probability distribution p i with vector w:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "polarity(s i ) = c p (c) i w (c) \u2208 [\u22121, 1]",
"eq_num": "(13)"
}
],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "Gated Polarity As a way of increasing the effectiveness of our method, we introduce a gated extension that uses the attention mechanism of our model to further differentiate between segments that carry significant sentiment cues and those that do not:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "gated-polarity(s i ) = a i \u2022 polarity(s i ) , (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "where a i is the attention weight assigned to the i-th segment. This forces the polarity scores of segments the model does not attend to closer to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
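{
"text": "A small NumPy sketch of Equations (13) and (14) (illustrative only; the function name and inputs are ours):

import numpy as np

def polarity(p, a=None):
    # Eq. (13): dot product of class distribution p with uniformly-spaced
    # class weights in [-1, 1]; Eq. (14): optional gating by attention a.
    w = np.linspace(-1.0, 1.0, len(p))  # e.g. [-1, -0.5, 0, 0.5, 1] for C = 5
    score = float(np.dot(p, w))
    return a * score if a is not None else score

# A confident vs. a hedged negative prediction (cf. Figure 3):
print(polarity(np.array([0.7, 0.2, 0.06, 0.03, 0.01])))  # -0.775
print(polarity(np.array([0.4, 0.3, 0.2, 0.07, 0.03])))   # -0.485, closer to neutral",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},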
{
"text": "An illustration of our polarity scoring function is provided in Figure 3 , where the class predictions (top) of three restaurant review segments are mapped to their corresponding polarity scores (bottom). We observe that our method produces the desired result; segments 1 and 2 convey negative sentiment and receive negative scores, whereas the third segment is mapped to a positive score. Although the same discrete class label is assigned to the first two, the second segment's score is closer to 0 (neutral) as its class probability mass is more evenly distributed.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Segment Encoding An encoding",
"sec_num": null
},
{
"text": "As mentioned earlier, one of the hypotheses investigated in this work regards the use of subsentential units as the basis of extraction. Specifically, our model was applied to sentences and Elementary Discourse Units (EDUs), obtained from a Rhetorical Structure Theory (RST) parser (Feng and Hirst, 2012). According to RST, documents are first segmented into EDUs corresponding roughly to independent clauses which are then recursively combined into larger discourse spans. This results in a tree representation of the document, where connected nodes are characterized by discourse relations. We only utilize RST's segmentation, and leave the potential use of the tree structure to future work. Figure 3 illustrates why EDUbased segmentation might be beneficial for opinion extraction. The second and third EDUs correspond to the sentence: I didn't enjoy most of them, but the burger was brilliant. Taken as a whole, the sentence conveys mixed sentiment, whereas the EDUs clearly convey opposing sentiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 695,
"end": 703,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Segmentation Policies",
"sec_num": null
},
{
"text": "In this section we describe the data used to assess the performance of our model. We also give details on model training and comparison systems. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "Our models were trained on two large-scale sentiment classification collections. The Yelp'13 corpus was introduced in Tang et al. 2015and contains customer reviews of local businesses, each associated with human ratings on a scale from 1 (negative) to 5 (positive). The IMDB corpus of movie reviews was obtained from Diao et al. (2014) ; each review is associated with user ratings ranging from 1 to 10. Both datasets are split into training (80%), validation (10%) and test (10%) sets. A summary of statistics for each collection is provided in Table 1 .",
"cite_spans": [
{
"start": 317,
"end": 335,
"text": "Diao et al. (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 546,
"end": 553,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "In order to evaluate model performance on the segment level, we constructed a new dataset named SPOT (as a shorthand for Segment POlariTy) by annotating documents from the Yelp'13 and IMDB collections. Specifically, we sampled reviews from each collection such that all document-level classes are represented uniformly, and the document lengths are representative of the respective corpus. Documents were segmented into sentences and EDUs, resulting in two segment-level datasets per collection. Statistics are summarized in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 525,
"end": 532,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "Each review was presented to three Amazon Mechanical Turk (AMT) annotators who were asked to judge the sentiment conveyed by each segment (i.e., sentence or EDU) as negative, neutral, or pos- itive. We assigned labels using a majority vote or a fourth annotator in the rare cases of no agreement (< 5%). Figure 4 shows the distribution of segment labels for each document-level class. As expected, documents with positive labels contain a larger number of positive segments compared to documents with negative labels and vice versa. Neutral segments are distributed in an approximately uniform manner across document classes. Interestingly, the proportion of neutral EDUs is significantly higher compared to neutral sentences. The observation reinforces our argument in favor of EDU segmentation, as it suggests that a sentence with positive or negative overall polarity may still contain neutral EDUs. Discarding neutral EDUs, could therefore lead to more concise opinion extraction compared to relying on entire sentences. We further experimented on two collections introduced by Kotzias et al. (2015) which also originate from the YELP'13 and IMDB datasets. Each collection consists of 1,000 randomly sampled sentences annotated with binary sentiment labels.",
"cite_spans": [
{
"start": 1082,
"end": 1103,
"text": "Kotzias et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 304,
"end": 312,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "On the task of segment classification we compared MILNET, our multiple instance learning network, against the following methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.2"
},
{
"text": "Majority: Majority class applied to all instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.2"
},
{
"text": "State-of-the-art lexicon-based system that classifies segments into positive, neutral, and negative classes (Taboada et al., 2011) .",
"cite_spans": [
{
"start": 108,
"end": 130,
"text": "(Taboada et al., 2011)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SO-CAL:",
"sec_num": null
},
{
"text": "Seg-CNN: Fully-supervised CNN segment classifier trained on SPOT's labels (Kim, 2014) .",
"cite_spans": [
{
"start": 74,
"end": 85,
"text": "(Kim, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SO-CAL:",
"sec_num": null
},
{
"text": "The Group-Instance Cost Function model introduced in Kotzias et al. (2015) . This is an unweighted average prediction aggregation MIL method that uses sentence features from a pretrained convolutional neural model.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "Kotzias et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GICF:",
"sec_num": null
},
{
"text": "HIERNET: HIERNET does not explicitly generate individual segment predictions. Segment polarity scores are obtained by assigning the documentlevel prediction to every segment. We can then produce finer-grained polarity distinctions via gating, using the model's attention weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GICF:",
"sec_num": null
},
{
"text": "We further illustrate the differences between HI-ERNET and MILNET in Figure 5 , which includes short descriptions and simplified equations for each model. MILNET naturally produces distinct segment polarities, while HIERNET assigns a single polarity score to every segment. In both cases, gating is a further means of identifying neutral segments. Finally, we differentiate between variants of HI-ERNET and MILNET according to: Polarity source: Controls whether we assign polarities via segment-specific or document-wide predictions. HIERNET only allows for documentwide predictions. MILNET can use both.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "GICF:",
"sec_num": null
},
{
"text": "Attention: We use models without gating (no subscript), with gating (gt subscript) as well as models trained with the attention mechanism disabled, falling back to simple averaging (avg subscript).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GICF:",
"sec_num": null
},
{
"text": "We trained MILNET and HIERNET using Adadelta (Zeiler, 2012) for 25 epochs. Mini-batches of 200 documents were organized based on the reviews' segment and document lengths so the amount of padding was minimized. We used 300-dimensional pre-trained word2vec embeddings. We tuned hyperparameters on the validation sets of the document classification collections, resulting in the following configuration (unless otherwise noted). For the CNN segment encoder, we used window sizes of 3, 4 and 5 words with 100 feature maps per window size, resulting in 300-dimensional segment vectors. The GRU hidden vector dimensions for each direction were set to 50 and the attention vector dimensionality to 100. We used L2-normalization and dropout to regularize the softmax classifiers and additional dropout on the internal GRU connections. Real-valued polarity scores produced by the two models are mapped to discrete labels using two appropriate thresholds t 1 , t 2 \u2208 [\u22121, 1], so that a segment s is classified as negative if polarity(s) < t 1 , positive if polarity(s) > t 2 or neutral otherwise. 3 To evaluate performance, we use macro-averaged F1 which is unaffected by class imbalance. We select optimal thresholds using 10-fold cross-validation and report mean scores across folds.",
"cite_spans": [
{
"start": 1088,
"end": 1089,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training and Evaluation",
"sec_num": "5.3"
},
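{
"text": "The threshold rule just described amounts to the following (sketch; t1 and t2 stand for the cross-validated thresholds described above):

def to_label(score, t1, t2):
    # Map a polarity score in [-1, 1] to a discrete segment label.
    if score < t1:
        return 'negative'
    if score > t2:
        return 'positive'
    return 'neutral'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training and Evaluation",
"sec_num": "5.3"
},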
{
"text": "The fully-supervised convolutional segment classifier (Seg-CNN) uses the same window size and feature map configuration as our segment encoder. Seg-CNN was trained on SPOT using segment labels directly and 10-fold cross-validation (identical folds as in our main models). Seg-CNN is not directly comparable to MILNET (or HIERNET) due to differences in supervision type (segment vs. document labels) and training size (1K-2K segment labels vs. \u223c250K document labels). However, the comparison is indicative of the utility of fine-grained sentiment predictors that do not rely on expensive segment-level annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training and Evaluation",
"sec_num": "5.3"
},
{
"text": "We evaluated models in two ways. We first assessed their ability to classify segment polarity in reviews using the newly created SPOT dataset and, additionally, the sentence corpora of Kotzias et al. (2015) . Our second suite of experiments focused on opinion extraction: we conducted a judgment elicitation study to determine whether extracts produced by MILNET are useful and of higher quality compared to HIERNET and other baselines. We were also interested to find out whether EDUs provide a better basis for opinion extraction than sentences. Table 3 summarizes our results. The first block in the table reports the performance of the majority class baseline. The second block considers models that do not utilize segment-level predictions, namely HIERNET which assigns polarity scores to segments using its document-level predictions, as well as the variant of MILNET which similarly uses document-level predictions only (Equation (11)). In the third block, MILNET's segment-level predictions are used. Each block further differentiates between three levels of attention integration, as previ- Noreen, 1989) , p < 0.05).",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "Kotzias et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 1100,
"end": 1113,
"text": "Noreen, 1989)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 548,
"end": 555,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "ously described. The final block shows the performance of SO-CAL and the Seg-CNN classifier. When considering models that use documentlevel supervision, MILNET with gated, segmentspecific polarities obtains the best classification performance across all four datasets. Interestingly, it performs comparably to Seg-CNN, the fullysupervised segment classifier, which provides additional evidence that MILNET can effectively identify segment polarity without the need for segmentlevel annotations. Our model also outperforms the strong SO-CAL baseline in all but one datasets which is remarkable given the expert knowledge and linguistic information used to develop the latter. Document-level polarity predictions result in lower classification performance across the board. Differences between the standard hierarchical and multiple instance networks are less pronounced in this case, as MILNET loses the advantage of producing segment-specific sentiment predictions. Models without attention perform worse in most cases. The use of gated polarities benefits all model configurations, indicating the method's ability to selectively focus on segments with significant sentiment cues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Classification",
"sec_num": "6.1"
},
{
"text": "We further analyzed the polarities assigned by MILNET and HIERNET to positive, negative, and Table 5 : Accuracy scores on the sentence classification datasets introduced in Kotzias et al. (2015) . neutral segments. Figure 6 illustrates the distribution of polarity scores produced by the two models on the Yelp'13 dataset (sentence segmentation).",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "Kotzias et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 5",
"ref_id": null
},
{
"start": 215,
"end": 223,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Segment Classification",
"sec_num": "6.1"
},
{
"text": "In the case of negative and positive sentences, both models demonstrate appropriately skewed distributions. However, the neutral class appears to be particularly problematic for HIERNET, where polarity scores are scattered across a wide range of values. In contrast, MILNET is more successful at identifying neutral sentences, as its corresponding distribution has a single mode near zero. Attention gating addresses this issue by moving the polarity scores of sentiment-neutral segments towards zero. This is illustrated in Table 4 where we observe that gated variants of both models do a better job at identifying neutral segments. The effect is very significant for HIERNET, while MILNET benefits slightly and remains more effective overall. Similar trends were observed in all four SPOT datasets. In order to examine the effect of training size, we trained multiple models using subsets of the original document collections. We trained on five random subsets for each training size, ranging from 100 documents to the full training set, and tested segment classification performance on SPOT. The results, averaged across trials, are presented in Figure 7 . With the exception of the IMDB EDU-segmented dataset, MILNET only requires a few thousand training documents to outperform the supervised Seg-CNN. HI-ERNET follows a similar curve, but is inferior to MILNET. A reason for MILNET's inferior performance on the IMDB corpus (EDU-split) can be lowquality EDUs, due to the noisy and informal style of language used in IMDB reviews.",
"cite_spans": [],
"ref_spans": [
{
"start": 525,
"end": 532,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1149,
"end": 1157,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segment Classification",
"sec_num": "6.1"
},
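{
"text": "To make the gated polarity computation concrete, the following minimal Python sketch maps each segment's class distribution to a real-valued polarity and shrinks it according to the segment's attention weight. It is an illustration rather than the exact MILNET formulation: the evenly spaced class values and the max-normalization of attention weights are simplifying assumptions.\nimport numpy as np\n\ndef gated_polarities(class_probs, attention):\n    # class_probs: (n_segments, n_classes); rows are probability\n    # distributions over sentiment classes, ordered from most\n    # negative to most positive.\n    # attention: (n_segments,) non-negative attention weights.\n    n_classes = class_probs.shape[1]\n    # Expected class value on [-1, +1], e.g. class values\n    # [-1, -0.5, 0, +0.5, +1] when there are 5 classes.\n    class_values = np.linspace(-1.0, 1.0, n_classes)\n    polarities = class_probs @ class_values\n    # Gate: low-attention segments are pulled towards zero, so that\n    # sentiment-neutral segments receive near-zero polarity scores.\n    return polarities * (attention / attention.max())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Classification",
"sec_num": "6.1"
},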
{
"text": "Finally, we compared MILNET against the GICF model (Kotzias et al., 2015) on their Yelp and IMDB sentence sentiment datasets. 4 Their model requires sentence embeddings from a pre-trained neural model. We used the hierarchical CNN from their work (Denil et al., 2014) and, additionally, pre-trained HIERNET and MILNET sentence embeddings. The results in Table 5 show that MILNET outperforms all variants of GIFC. Our models also seem to learn better sentence embeddings, as they improve GICF's performance on both collections. Table 6 : Human evaluation results (in percentages). \u2020 indicates that the system in question is significantly different from MILNET (sign-test, p < 0.01).",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "(Kotzias et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 126,
"end": 127,
"text": "4",
"ref_id": null
},
{
"start": 247,
"end": 267,
"text": "(Denil et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 5",
"ref_id": null
},
{
"start": 527,
"end": 534,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segment Classification",
"sec_num": "6.1"
},
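{
"text": "As noted in footnote 4, the comparison with GICF requires binary document labels. The sketch below shows one plausible binarization scheme; the exact mapping used in our experiments may differ, and the midpoint-dropping rule is an illustrative assumption.\ndef binarize(rating, scale=5):\n    # Map a 1..scale star rating to a binary sentiment label,\n    # discarding reviews at the neutral midpoint.\n    mid = (scale + 1) / 2.0\n    if rating == mid:\n        return None  # neutral: drop from the binarized training set\n    return 1 if rating > mid else 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segment Classification",
"sec_num": "6.1"
},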
{
"text": "In our opinion extraction experiments, AMT workers (all native English speakers) were shown an original review and a set of extractive, bullet-style summaries, produced by competing systems using a 30% compression rate. Participants were asked to decide which summary was best according to three criteria: Informativeness (Which summary best captures the salient points of the review?), Polarity (Which summary best highlights positive and negative comments?) and Coherence (Which summary is more coherent and easier to read?). Subjects were allowed to answer \"Unsure\" in cases where they could not discriminate between summaries. We used all reviews from our SPOT dataset and collected three responses per document. We ran four judgment elicitation studies: one comparing HIERNET and MILNET when summarizing reviews segmented as sentences, a second one comparing the two models with EDU segmentation, a third which compares EDU-and sentence-based summaries produced by MILNET, and a fourth where EDU-based summaries from MILNET were compared to a LEAD (the first N words from each document) and a RAN-DOM (random EDUs) baseline. Table 6 summarizes our results, showing the proportion of participants that preferred each system. The first block in the table shows a slight prefer- [Rating: ] As with any family-run hole in the wall, service can be slow. What the staff lacked in speed, they made up for in charm. The food was good, but nothing wowed me. I had the Pierogis while my friend had swedish meatballs. Both dishes were tasty, as were the sides. One thing that was disappointing was that the food was a a little cold (lukewarm). The restaurant itself is bright and clean. I will go back again when i feel like eating outside the box. Figure 8 : Example EDU-and sentence-based opinion summaries produced by HIERNET gt and MILNET gt . ence for MILNET across criteria. The second block shows significant preference for MILNET against HIERNET on informativeness and polarity, whereas HIERNET was more often preferred in terms of coherence, although the difference is not statistically significant. The third block compares sentence and EDU summaries produced by MILNET. EDU summaries were perceived as significantly better in terms of informativeness and polarity, but not coherence. This is somewhat expected as EDUs tend to produce more terse and telegraphic text and may seem unnatural due to segmentation errors. In the fourth block we observe that participants find MIL-NET more informative and better at distilling polarity compared to the LEAD and RANDOM (EDUs) baselines. We should point out that the LEAD system is not a strawman; it has proved hard to outperform by more sophisticated methods (Nenkova, 2005) , particularly on the newswire domain.",
"cite_spans": [
{
"start": 2708,
"end": 2723,
"text": "(Nenkova, 2005)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 1130,
"end": 1137,
"text": "Table 6",
"ref_id": null
},
{
"start": 1743,
"end": 1751,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Opinion Extraction",
"sec_num": "6.2"
},
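{
"text": "The pairwise significance tests reported in Table 6 are sign tests. The sketch below shows the standard computation, assuming that \"Unsure\" responses and ties are discarded before testing.\nfrom scipy.stats import binomtest\n\ndef sign_test(wins_a, wins_b):\n    # Two-sided sign test on paired preference counts: under the null\n    # hypothesis of no preference, wins for system A follow a\n    # Binomial(n, 0.5) distribution.\n    n = wins_a + wins_b\n    return binomtest(wins_a, n, p=0.5, alternative=\"two-sided\").pvalue",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Extraction",
"sec_num": "6.2"
},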
{
"text": "Example EDU-and sentence-based summaries produced by gated variants of HIERNET and MIL-NET are shown in Figure 8 , with attention weights and polarity scores of the extracted segments shown in round and square brackets respectively. For both granularities, HIERNET's positive document-level prediction results in a single polarity score assigned to every segment, and further adjusted using the corresponding attention weights. The extracted segments are informative, but fail to capture the negative sentiment of some segments. In contrast, MIL-NET is able to detect positive and negative snippets via individual segment polarities. Here, EDU segmentation produced a more concise summary with a clearer grouping of positive and negative snippets.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "EDU-based",
"sec_num": null
},
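{
"text": "Since summary extraction only requires a relative ranking of segments (see footnote 3), the selection step admits a compact implementation. The sketch below ranks segments by the magnitude of their polarity scores and greedily fills a word budget corresponding to a 30% compression rate; the greedy budget-filling strategy is a simplifying assumption rather than the exact procedure used in our experiments.\ndef extract_opinions(segments, polarities, rate=0.3):\n    # segments: sentence or EDU strings in document order.\n    # polarities: per-segment (gated) polarity scores.\n    budget = rate * sum(len(s.split()) for s in segments)\n    ranked = sorted(range(len(segments)), key=lambda i: -abs(polarities[i]))\n    picked, used = [], 0\n    for i in ranked:\n        n_words = len(segments[i].split())\n        if used + n_words <= budget:\n            picked.append(i)\n            used += n_words\n    # Present the selected snippets in document order,\n    # tagged with their polarity sign.\n    return [(\"+\" if polarities[i] >= 0 else \"-\", segments[i])\n            for i in sorted(picked)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDU-based",
"sec_num": null
},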
{
"text": "In this work, we presented a neural network model for fine-grained sentiment analysis within the framework of multiple instance learning. Our model can be trained on large scale sentiment classification datasets, without the need for segment-level labels. As a departure from the commonly used vector-based composition, our model first predicts sentiment at the sentence-or EDU-level and subsequently combines predictions up the document hierarchy. An attention-weighted polarity scoring technique provides a natural way to extract sentimentheavy opinions. Experimental results demonstrate the superior performance of our model against more conventional neural architectures. Human evaluation studies also show that MILNET opinion extracts are preferred by participants and are effective at capturing informativeness and polarity, especially when using EDU segments. In the future, we would like to focus on multi-document, aspect-based extraction (Cao et al., 2017) and ways of improving the coherence of our summaries by taking into account more fine-grained discourse information (Daum\u00e9 III and Marcu, 2002) .",
"cite_spans": [
{
"start": 948,
"end": 966,
"text": "(Cao et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 1083,
"end": 1110,
"text": "(Daum\u00e9 III and Marcu, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our code and SPOT dataset are publicly available at: https://github.com/stangelid/milnet-sent",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When applied to the YELP'13 and IMDB document classification datasets, the use of CNNs results in a relative performance decrease of < 2% comparedYang et al's model (2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The discretization of polarities is only used for evaluation purposes and is not necessary for summary extraction, where we only need a relative ranking of segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "GICF only handles binary labels, which makes it unsuitable for the full-scale comparisons inTable 3. Here, we binarize our training datasets and use same-sized sentence embeddings for all four models (R 150 for Yelp, R 72 for IMDB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors gratefully acknowledge the support of the European Research Council (award number 681760). We thank TACL action editor Ani Nenkova and the anonymous reviewers whose feedback helped improve the present paper, as well as Charles Sutton, Timothy Hospedales, and members of EdinburghNLP for helpful discussions and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multiple instance learning via disjunctive programming boosting",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems 16",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Andrews and Thomas Hofmann. 2004. Multiple instance learning via disjunctive programming boost- ing. In Advances in Neural Information Processing Systems 16, pages 65-72. Curran Associates, Inc.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th Conference on International Language Resources and Evaluation",
"volume": "10",
"issue": "",
"pages": "2200--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. SentiWordNet 3.0: An enhanced lexi- cal resource for sentiment analysis and opinion min- ing. In Proceedings of the 5th Conference on In- ternational Language Resources and Evaluation, vol- ume 10, pages 2200-2204, Valletta, Malta.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Represen- tations, San Diego, California, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Better document-level sentiment analysis from RST discourse parsing",
"authors": [
{
"first": "Parminder",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "Portu-- gal",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from RST discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2212-2218, Lisbon, Portu- gal.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving multi-document summarization via text classification",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3053--3058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2017. Improving multi-document summarization via text classification. In Proceedings of the 31st AAAI Con- ference on Artificial Intelligence, pages 3053-3058, San Francisco, California, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning to recognize objects with little supervision",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Carbonetto",
"suffix": ""
},
{
"first": "Gyuri",
"middle": [],
"last": "Dork\u00f3",
"suffix": ""
},
{
"first": "Cordelia",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "K\u00fcck",
"suffix": ""
},
{
"first": "Nando",
"middle": [],
"last": "De Freitas",
"suffix": ""
}
],
"year": 2008,
"venue": "International Journal of Computer Vision",
"volume": "77",
"issue": "1",
"pages": "219--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Carbonetto, Gyuri Dork\u00f3, Cordelia Schmid, Hen- drik K\u00fcck, and Nando De Freitas. 2008. Learning to recognize objects with little supervision. International Journal of Computer Vision, 77(1):219-237.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multidocument summarization of evaluative text",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Rymond",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "305--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Carenini, Rymond Ng, and Adam Pauls. 2006. Multidocument summarization of evaluative text. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguis- tics, pages 305-312, Trento, Italy.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2003,
"venue": "Current and New Directions in Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "85--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and New Directions in Discourse and Dialogue, pages 85-112. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural summarization by extracting sentences and words",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "484--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural sum- marization by extracting sentences and words. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 484-494, Berlin, Germany.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning from partial labels",
"authors": [
{
"first": "Timothee",
"middle": [],
"last": "Cour",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Sapp",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "1501--1536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothee Cour, Ben Sapp, and Ben Taskar. 2011. Learn- ing from partial labels. Journal of Machine Learning Research, 12(May):1501-1536.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A noisy-channel model for document compression",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "449--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 449-456, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Extraction of salient sentences from labelled documents",
"authors": [
{
"first": "Misha",
"middle": [],
"last": "Denil",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Demiraj",
"suffix": ""
},
{
"first": "Nando",
"middle": [],
"last": "De Freitas",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Misha Denil, Alban Demiraj, and Nando de Freitas. 2014. Extraction of salient sentences from labelled documents. Technical report, University of Oxford.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A hybrid approach to multidocument summarization of opinions in reviews",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Di Fabbrizio",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Natural Language Generation Conference (INLG)",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multi- document summarization of opinions in reviews. In Proceedings of the 8th International Natural Lan- guage Generation Conference (INLG), pages 54-63, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS)",
"authors": [
{
"first": "Qiming",
"middle": [],
"last": "Diao",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Chao-Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "193--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexan- der J. Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 193- 202, New York, NY, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Solving the multiple instance problem with axis-parallel rectangles",
"authors": [
{
"first": "Thomas",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"H"
],
"last": "Lathrop",
"suffix": ""
},
{
"first": "Toms",
"middle": [],
"last": "Lozano-Prez",
"suffix": ""
}
],
"year": 1997,
"venue": "Artificial Intelligence",
"volume": "89",
"issue": "1",
"pages": "31--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich, Richard H. Lathrop, and Toms Lozano-Prez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intel- ligence, 89(1):31 -71.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text-level discourse parsing with rich linguistic features",
"authors": [
{
"first": "Vanessa",
"middle": [
"Wei"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "60--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Vanessa Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Pro- ceedings of the 50th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 60-68, Jeju Island, Korea.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Opinosis: A graph based approach to abstractive summarization of highly redundant opinions",
"authors": [
{
"first": "Kavita",
"middle": [],
"last": "Ganesan",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "340--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstrac- tive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 340-348, Beijing, China.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Abstractive summarization of product reviews using discourse structure",
"authors": [
{
"first": "Shima",
"middle": [],
"last": "Gerani",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Bita",
"middle": [],
"last": "Nejat",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1602--1613",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Ray- mond T. Ng, and Bita Nejat. 2014. Abstractive sum- marization of product reviews using discourse struc- ture. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1602-1613, Doha, Qatar.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 541-550, Portland, Oregon, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the 10th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, pages 168-177, Seattle, Washington, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Effective use of word order for text categorization with convolutional neural networks",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "103--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2015a. Effective use of word order for text categorization with convolu- tional neural networks. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 103-112, Denver, Col- orado, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semi-supervised convolutional neural networks for text categorization via region embedding",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "919--927",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2015b. Semi-supervised convolutional neural networks for text categorization via region embedding. In Advances in Neural Infor- mation Processing Systems 28, pages 919-927. Curran Associates, Inc.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A self-organizing integrated segmentation and recognition neural net",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Keeler",
"suffix": ""
},
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
}
],
"year": 1992,
"venue": "Advances in Neural Information Processing Systems",
"volume": "4",
"issue": "",
"pages": "496--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jim Keeler and David E. Rumelhart. 1992. A self-organizing integrated segmentation and recogni- tion neural net. In Advances in Neural Informa- tion Processing Systems 4, pages 496-503. Morgan- Kaufmann.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing, pages 1746-1751, Doha, Qatar.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "From group to individual labels using deep features",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Kotzias",
"suffix": ""
},
{
"first": "Misha",
"middle": [],
"last": "Denil",
"suffix": ""
},
{
"first": "Nando",
"middle": [],
"last": "De Freitas",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "597--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Kotzias, Misha Denil, Nando De Freitas, and Padhraic Smyth. 2015. From group to individual la- bels using deep features. In Proceedings of the 21th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, pages 597-606, Sydney, Australia.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the 31st International Conference on Machine Learning, pages 1188-1196, Beijing, China.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentiment summarization: Evaluating and learning user preferences",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "Sasha",
"middle": [],
"last": "Blair-Goldensohn",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mc-Donald",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "514--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Lerman, Sasha Blair-Goldensohn, and Ryan Mc- Donald. 2009. Sentiment summarization: Evaluating and learning user preferences. In Proceedings of the 12th Conference of the European Chapter of the ACL, pages 514-522, Athens, Greece.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The role of discourse units in near-extractive summarization",
"authors": [
{
"first": "Junyi",
"middle": [
"Jessy"
],
"last": "Li",
"suffix": ""
},
{
"first": "Kapil",
"middle": [],
"last": "Thadani",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2016,
"venue": "The 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "137--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summa- rization. In Proceedings of the SIGDIAL 2016 Con- ference, The 17th Annual Meeting of the Special Inter- est Group on Discourse and Dialogue, pages 137-147, Los Angeles, California, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse",
"authors": [
{
"first": "C",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "8",
"issue": "",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text-Interdisciplinary Jour- nal for the Study of Discourse, 8(3):243-281.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Multiple-instance learning for natural scene classification",
"authors": [],
"year": null,
"venue": "Proceedings of the 15th International Conference on Machine Learning",
"volume": "98",
"issue": "",
"pages": "341--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multiple-instance learning for natural scene classifica- tion. In Proceedings of the 15th International Con- ference on Machine Learning, volume 98, pages 341- 349, San Francisco, California, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3075--3081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3075-3081, San Fran- cisco, California.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Automatic text summarization of newswire: Lessons learned from the document understanding conference",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 20th AAAI",
"volume": "",
"issue": "",
"pages": "1436--1441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova. 2005. Automatic text summarization of newswire: Lessons learned from the document under- standing conference. In Proceedings of the 20th AAAI, pages 1436-1441, Pittsburgh, Pennsylvania, USA.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Computer-intensive Methods for Testing Hypotheses: An Introduction",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Noreen. 1989. Computer-intensive Methods for Testing Hypotheses: An Introduction. Wiley.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Ex- ploiting class relationships for sentiment categoriza- tion with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Compu- tational Linguistics, pages 115-124. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Thumbs up? sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using ma- chine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Lan- guage Processing, pages 79-86, Pittsburgh, Pennsyl- vania, USA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Explaining the stars: Weighted multiple-instance learning for aspect-based sentiment analysis",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "455--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Pappas and Andrei Popescu-Belis. 2014. Ex- plaining the stars: Weighted multiple-instance learn- ing for aspect-based sentiment analysis. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 455-466, Doha, Qatar, October.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Explicit document modeling through weighted multipleinstance learning",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Artificial Intelligence Research",
"volume": "58",
"issue": "",
"pages": "591--626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Pappas and Andrei Popescu-Belis. 2017. Ex- plicit document modeling through weighted multiple- instance learning. Journal of Artificial Intelligence Re- search, 58:591-626.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The bag-of-opinions method for review rating prediction from sparse text patterns",
"authors": [
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Ifrim",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "913--921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lizhen Qu, Georgiana Ifrim, and Gerhard Weikum. 2010. The bag-of-opinions method for review rating prediction from sparse text patterns. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 913-921, Beijing, China.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Semi-supervised recursive autoencoders for predicting sentiment distributions",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Jeffrey Pennington, Eric H. Huang, An- drew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing, pages 151-161, Edinburgh, Scot- land, UK.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christo- pher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1631- 1642, Seattle, Washington, USA.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Lexicon-based methods for sentiment analysis",
"authors": [
{
"first": "Maite",
"middle": [],
"last": "Taboada",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Tofiloski",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [],
"last": "Voll",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "2",
"pages": "267--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based meth- ods for sentiment analysis. Computational Linguis- tics, 37(2):267-307.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Discovering fine-grained sentiment with latent variable structured prediction models",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 39th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan McDonald. 2011. Discov- ering fine-grained sentiment with latent variable struc- tured prediction models. In Proceedings of the 39th",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1422--1432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sen- timent classification. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1422-1432, Lisbon, Portugal.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "417--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classifi- cation of reviews. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 417-424, Pittsburgh, Pennsylvania, USA.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Baselines and bigrams: Simple, good sentiment and topic classification",
"authors": [
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers",
"volume": "2",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida Wang and Christopher D. Manning. 2012. Base- lines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics: Short Papers-Volume 2, pages 90-94, Jeju Island, Korea.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A two-level learning method for generalized multi-instance problems",
"authors": [
{
"first": "Xiu-Shen",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhi-Hua",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "468--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiu-Shen Wei, Jianxin Wu, and Zhi-Hua Zhou. 2014. Scalable multi-instance learning. In Proceedings of the IEEE International Conference on Data Mining, pages 1037-1042, Shenzhen, China. Nils Weidmann, Eibe Frank, and Bernhard Pfahringer. 2003. A two-level learning method for generalized multi-instance problems. In Proceedings of the 14th European Conference on Machine Learning, pages 468-479, Dubrovnik, Croatia.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Annotating expressions of opinions and emotions in language. Language resources and evaluation",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "39",
"issue": "",
"pages": "165--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2):165-210.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Deep multiple instance learning for image classification and auto-annotation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yinan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3460--3469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiajun Wu, Yinan Yu, Chang Huang, and Kai Yu. 2015. Deep multiple instance learning for image classifica- tion and auto-annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 3460-3469, Boston, Massachusetts, USA.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Exploring the use of word relation features for sentiment classification",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "1336--1344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Xia and Chengqing Zong. 2010. Exploring the use of word relation features for sentiment classification. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1336- 1344, Beijing, China.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Logistic regression and boosting for labeled bags of instances",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "272--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Xu and Eibe Frank. 2004. Logistic regression and boosting for labeled bags of instances. In Proceed- ings of the Pacific-Asia Conference on Knowledge Dis- covery and Data Mining, pages 272-281. Springer- Verlag.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Extracting lexically divergent paraphrases from Twitter",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "435--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Alan Ritter, Chris Callison-Burch, William B. Dolan, and Yangfeng Ji. 2014. Extracting lexically divergent paraphrases from Twitter. Transactions of the Association for Computational Linguistics, 2:435- 448.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical atten- tion networks for document classification. In Pro- ceedings of the 2016 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 1480- 1489, San Diego, California, USA.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Content-based image retrieval using multipleinstance learning",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Sally",
"middle": [
"A"
],
"last": "Goldman",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"E"
],
"last": "Fritts",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Machine Learning",
"volume": "2",
"issue": "",
"pages": "682--689",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Zhang, Sally A. Goldman, Wei Yu, and Jason E. Fritts. 2002. Content-based image retrieval using multiple- instance learning. In Proceedings of the 19th Inter- national Conference on Machine Learning, volume 2, pages 682-689, Sydney, Australia.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Multiple instance boosting for object detection",
"authors": [
{
"first": "Cha",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"A"
],
"last": "Viola",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in Neural Information Processing Systems",
"volume": "18",
"issue": "",
"pages": "1417--1424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cha Zhang, John C. Platt, and Paul A. Viola. 2006. Mul- tiple instance boosting for object detection. In Ad- vances in Neural Information Processing Systems 18, pages 1417-1424. MIT Press.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Multi-instance learning by treating instances as noniid samples",
"authors": [
{
"first": "Zhi-Hua",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yu-Yin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yu-Feng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1249--1256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi-Hua Zhou, Yu-Yin Sun, and Yu-Feng Li. 2009. Multi-instance learning by treating instances as non- iid samples. In Proceedings of the 26th Annual In- ternational Conference on Machine Learning, pages 1249-1256, Montr\u00e9al, Quebec.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A Hierarchical Network (HIERNET) for document-level sentiment classification and our proposed Multiple Instance Learning Network (MILNET). The models use the same attention mechanism to combine segment vectors and predictions respectively.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Polarity scores (bottom) obtained from class probability distributions for three EDUs (top) extracted from a restaurant review. Attention weights (top) are used to fine-tune the obtained polarities.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Distribution of segment-level labels per document-level class on our the SPOT datasets.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "System pipelines for HIERNET and MILNET showing 4 distinct phases for sentiment analysis.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Distribution of predicted polarity scores across three classes (Yelp'13 sentences).",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Performance of HIERNET gt and MILNET gt for varying training sizes.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Document-level sentiment classification datasets used to train our models.",
"content": "<table><tr><td/><td>Yelp'13 seg Sent. EDUs</td><td>IMDB seg Sent. EDUs</td></tr><tr><td>#Segments #Documents Classes</td><td>1,065 {-, 0 , +} 2,110 100</td><td>1,029 {-, 0 , +} 2,398 97</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "\u2020 17.03 \u2020 18.32 \u2020 21.52 \u2020 Document HIERNET avg 54.21 \u2020 50.90 \u2020 46.99 \u2020 49.02 \u2020 HIERNET 55.33 \u2020 51.43 \u2020 48.47 \u2020 49.70 \u2020 HIERNET gt 56.64 \u2020 58.75 62.12 57.38 \u2020 MILNET avg 58.43 \u2020 48.63 \u2020 53.40 \u2020 51.81 \u2020 MILNET 52.73 \u2020 53.59 \u2020 48.75 \u2020 47.18 \u2020 MILNET gt 59.74 \u2020 59.47 61.83 \u2020 58.24 \u2020 Segm MILNET avg 51.79 \u2020 46.77 \u2020 45.69 \u2020 38.37 \u2020",
"content": "<table><tr><td>Method</td><td>Yelp'13 seg Sent EDU Sent EDU IMDB seg</td></tr><tr><td colspan=\"2\">Majority 19.02 MILNET 61.41 59.58 59.99 \u2020 57.71 \u2020 MILNET gt 63.35 59.85 63.97 59.87</td></tr><tr><td>SO-CAL Seg-CNN</td><td>56.53 \u2020 58.16 \u2020 53.21 \u2020 60.40 56.18 \u2020 59.96 58.32 \u2020 62.95 \u2020</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "",
"content": "<table><tr><td>: Segment classification results (in macro-</td></tr><tr><td>averaged F1). \u2020 indicates that the system in question is significantly different from MILNET gt (approxi-mate randomization test (</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "",
"content": "<table><tr><td>Method</td><td colspan=\"2\">Yelp IMDB</td></tr><tr><td>GICF</td><td>86.3</td><td>86.0</td></tr><tr><td>GICFHN</td><td>92.9</td><td>86.5</td></tr><tr><td>GICFMN</td><td>93.2</td><td>91.0</td></tr><tr><td colspan=\"2\">MILNET 94.0</td><td>91.9</td></tr><tr><td>: F1 scores</td><td/><td/></tr><tr><td>for neutral segments</td><td/><td/></tr><tr><td>(Yelp'13).</td><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF8": {
"text": "Extracted via HIERNETgt (0.13) [+0.26] The food was good + (0.10) [+0.26] but nothing wowed me. + (0.09) [+0.26] The restaurant itself is bright and clean + (0.13) [+0.26] Both dishes were tasty + (0.18) [+0.26] I will go back again + Extracted via MILNETgt (0.16) [+0.12] The food was good + (0.12) [+0.43] The restaurant itself is bright and clean + (0.19) [+0.15] I will go back again + (0.09) [-0.07] but nothing wowed me. \u2212 (0.10) [-0.10] the food was a a little cold (lukewarm) \u2212 Sent-based (0.12) [+0.23] Both dishes were tasty, as were the sides + (0.18) [+0.23] The food was good, but nothing wowed me + (0.22) [+0.23] One thing that was disappointing was that the food was a a little cold (lukewarm) + (0.13) [+0.26] Both dishes were tasty, as were the sides + (0.20) [+0.59] I will go back again when I feel like eating outside the box + (0.18) [-0.12] The food was good, but nothing wowed me \u2212 (number): attention weight [number]: non-gated polarity score text + : extracted positive opinion text \u2212 : extracted negative opinion",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}