{
"paper_id": "K16-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:00.216129Z"
},
"title": "Harnessing Sequence Labeling for Sarcasm Detection in Dialogue from TV Series 'Friends'",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vaibhav",
"middle": [],
"last": "Tripathi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper is a novel study that views sarcasm detection in dialogue as a sequence labeling task, where a dialogue is made up of a sequence of utterances. We create a manuallylabeled dataset of dialogue from TV series 'Friends' annotated with sarcasm. Our goal is to predict sarcasm in each utterance, using sequential nature of a scene. We show performance gain using sequence labeling as compared to classification-based approaches. Our experiments are based on three sets of features, one is derived from information in our dataset, the other two are from past works. Two sequence labeling algorithms (SVM-HMM and SEARN) outperform three classification algorithms (SVM, Naive Bayes) for all these feature sets, with an increase in F-score of around 4%. Our observations highlight the viability of sequence labeling techniques for sarcasm detection of dialogue.",
"pdf_parse": {
"paper_id": "K16-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper is a novel study that views sarcasm detection in dialogue as a sequence labeling task, where a dialogue is made up of a sequence of utterances. We create a manuallylabeled dataset of dialogue from TV series 'Friends' annotated with sarcasm. Our goal is to predict sarcasm in each utterance, using sequential nature of a scene. We show performance gain using sequence labeling as compared to classification-based approaches. Our experiments are based on three sets of features, one is derived from information in our dataset, the other two are from past works. Two sequence labeling algorithms (SVM-HMM and SEARN) outperform three classification algorithms (SVM, Naive Bayes) for all these feature sets, with an increase in F-score of around 4%. Our observations highlight the viability of sequence labeling techniques for sarcasm detection of dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sarcasm is defined as 'the use of irony to mock or convey contempt' 1 . An example of a sarcastic sentence is 'Being stranded in traffic is the best way to start the week'. In this case, the positive word 'best' together with the undesirable situation 'being stranded in traffic' conveys the sarcasm. Because sarcasm has an implied sentiment (negative) that is different from surface sentiment (positive due to presence of 'best'), it poses a challenge to sentiment analysis systems that aim to determine polarity in text (Pang and Lee, 2008) .",
"cite_spans": [
{
"start": 522,
"end": 542,
"text": "(Pang and Lee, 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some sarcastic expressions may be more difficult to detect. Consider the possibly sarcastic statement 'I absolutely love this restaurant'. Unlike in the traffic example above, sarcasm in this sentence, if any, can be understood using context which is 'external' to the sentence i.e., beyond common world knowledge. 2 . This external context may be available in the conversation that this sentence is a part of. For example, the conversational context may be situational: the speaker discovers a fly in her soup, then looks at her date and says, 'I absolutely love this restaurant'. The conversational context may also be verbal: her date says, 'They've taken 40 minutes to bring our appetizers' to which the speaker responds 'I absolutely love this restaurant'. Both these examples point to the intuition that for dialogue (i.e., data where more than one speaker participates in a discourse), conversational context is often a clue for sarcasm.",
"cite_spans": [
{
"start": 315,
"end": 316,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For such dialogue, prior work in sarcasm detection (determining whether a text is sarcastic or not) captures context in the form of classifier features such as the topic's probability of evoking sarcasm, or the author's tendency to use sarcasm (Rajadesingan et al., 2015; Wallace, 2015) . In this paper, we present an alternative hypothesis: sarcasm detection of dialogue is better formulated as a sequence labeling task, instead of classification task.",
"cite_spans": [
{
"start": 244,
"end": 271,
"text": "(Rajadesingan et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 272,
"end": 286,
"text": "Wallace, 2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The central message of our work is the efficacy of using sequence labeling as a learning mechanism for sarcasm detection in dialogue, and not in the set of features that we propose for sarcasm detectionalthough we experiment with three feature sets. For our experiments, we create a manually labeled dataset of dialogues from TV series 'Friends'. Each dialogue is considered to be a sequence of utterances, and every utterance is annotated as sarcastic or non-sarcastic (Details in Section 3). It may be argued that a TV series episode is dramatized and hence does not reflect realworld conversations. However, although the script of 'Friends' is dramatized to suit the situational comedy genre, it takes away nothing from its relevance to reallife conversations except for the volume of sarcastic sentences. Therefore, our findings from this work can, in theory, be reliably extended to work for any real-life utterances. Also, such datasets that are not based on real-world conversations have been used in prior work: emotion detection of children stories in and speech transcripts of a MTV show in Rakov and Rosenberg (2013) . As a first step in the direction of using sequence labeling, our dataset is a good 'controlled experiment' environment (The details are discussed in Section 2). In fact, use of a dataset in a new Figure 1 : Illustration of our hypothesis for sarcasm detection of conversational text (such as dialogue); A, B, C, D indicate four utterances genre (TV series transcripts, specifically) has potential for future work in sarcasm detection. Our dataset without the actual dialogues from the show (owing to copyright restrictions) may be available on request.",
"cite_spans": [
{
"start": 1101,
"end": 1127,
"text": "Rakov and Rosenberg (2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 1326,
"end": 1334,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on information available in our dataset (names of speakers, etc.), we present new features. We then compare two sequence labelers (SEARN and SVM-HMM) with three classifiers (SVM with oversampled and undersampled data, and Na\u00efve Bayes), for this set of features and also for features from two prior works. In case of our novel features as well as features reported in prior work, sequence labeling algorithms outperform classification algorithms. There is an improvement of 3-4% in F-score when sequence labelers are used, as compared to classifiers, for sarcasm detection in our dialogue dataset. Since many datasets such as tweet conversations, chat transcripts, etc. are currently available, our findings will be useful to obtain additional contexts in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 motivates the approach and presents our hypothesis. Section 3 describes our dataset, while Section 4 presents the features we use (this includes three configurations: novel features based on our dataset, and features from past work). Experiment setup is in Section 5 and results are given in Section 6. We present a discussion on which types of sarcasm are handled better by sequence labeling and an error analysis in Section 7, and describe related work in Section 8. Finally, we conclude in Section 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In dialogue, multiple participants take turns to speak. Consider the following snippet from 'Friends' involving two of the lead characters, Ross and Chandler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation & Hypothesis",
"sec_num": "2"
},
{
"text": "[Chandler is at the table. Ross walks in, looking very tanned.] Chandler: Hold on! There is something different. Ross: I went to that tanning place your wife suggested. Chandler: Was that place... The Sun?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation & Hypothesis",
"sec_num": "2"
},
{
"text": "Chandler's statement 'Was that place... The Sun?' is sarcastic. The sarcasm can be understood based on two kinds of contextual information: (a) general knowledge (that sun is indeed hot) (b) Conversational context (In the previous utterance, Ross states that he went to a tanning place). Without information (b), the sarcasm cannot be understood. Thus, dialogue presents a peculiar opportunity: using sequential nature of text for the task at hand. We hypothesize that 'for sarcasm detection of dialogue, sequence labeling performs better than classification'. To validate our hypothesis, we consider two feature configurations: (a) novel features designed for our dataset, (b) features as given in two prior works. To further understand where exactly sequence labeling techniques do better, we also present a discussion on which linguistic types of sarcasm benefit the most from sequence labeling in place of classification. Figure 1 summarizes the scope of this paper. We consider two formulations for sarcasm detection of conversational text. In the first option (i.e. classification), a sequence is broken down into individual instances. One instance as an input to a classification algorithm returns an output for that instance. In the second option (i.e. sequence labeling), a sequence as input to a sequence labeling algorithm returns a sequence of labels as an output. In rest of the paper, we use the following terms: 2. Scene/Sequence: A scene is a sequence of utterances, in which different speakers take turns to speak. We use the terms 'scene' and 'sequence' interchangeably.",
"cite_spans": [],
"ref_spans": [
{
"start": 926,
"end": 934,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation & Hypothesis",
"sec_num": "2"
},
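To make the two formulations above concrete, here is a minimal sketch in Python. The scene, labels, and variable names are illustrative placeholders (they are not drawn from the annotated dataset); the point is only the difference in input/output granularity between the two formulations.

```python
# Toy scene: a list of (speaker, spoken words) pairs, with one gold label per utterance.
scene = [
    ("Chandler", "Hold on! There is something different."),
    ("Ross", "I went to that tanning place your wife suggested."),
    ("Chandler", "Was that place... The Sun?"),
]
labels = ["non-sarcastic", "non-sarcastic", "sarcastic"]

# Formulation 1 (classification): each utterance is an independent instance;
# the model maps one utterance to one label and ignores its neighbours.
classification_instances = list(zip(scene, labels))   # 3 separate (input, output) pairs

# Formulation 2 (sequence labeling): the whole scene is a single input and the
# model predicts one label per position, so the label of the third utterance
# can depend on the previous utterances in the same scene.
sequence_instance = (scene, labels)                   # 1 (input sequence, label sequence) pair
```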
{
"text": "Datasets based on literary/creative works have been explored in the past. One such example is emotion classification of children's stories by Zhang Z (2014). Similarly, we create a sarcasm-labeled dataset that consists of transcripts of a comedy TV show, 'Friends 3 ' (by Bright/Kauffman/Crane Productions, and Warner Bros. Entertainment Inc.). We download these transcripts from OpenSubtitles 4 as given by Lison and Tiedemann (2016) , with additional pre-processing from a fan-contributed website called http://www. friendstranscripts.tk. Each scene begins with a description of the location/situation followed by a series of utterances spoken by characters. Figure 2 shows an illustration of our dataset. This is (obviously) a dummy example that has been anonymized. The reason behind choosing a TV show transcript as our dataset was to restrict to a small set of characters (so as to leverage on speaker-specific features) that use a lot of humor. These characters are often sarcastic towards each other because of their inter-personal relationships. In fact, past linguistic studies also show how sarcasm is more common between familiar speakers, and often friends (Gibbs, 2000) . A typical snippet is: [Scene: Chandler and Monica's room. Chandler is packing when Ross knocks on the door and enters...] Ross: Hey! Chandler: Hey! Ross: You guys ready to go? Chandler: Not quite. Monica's still at the salon, and I'm just finishing packing.",
"cite_spans": [
{
"start": 408,
"end": 434,
"text": "Lison and Tiedemann (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1170,
"end": 1183,
"text": "(Gibbs, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 661,
"end": 669,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "'Friends' Dataset",
"sec_num": "3"
},
{
"text": "Our annotators are linguists with an experience of more than 8k hours of annotation, and are not authors A complete scene is visible to the annotators at a time, so that they understand complete context of the scene. They perform the task of annotating every utterance in this scene with two labels: sarcastic and non-sarcastic. The two annotators separately perform this annotation over multiple sessions. To minimize bias beyond the scope of this annotation, we selected annotators who had never watched Friends before this annotation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'Friends' Dataset",
"sec_num": "3"
},
{
"text": "The annotations 5 may be available on request, subject to copyright restrictions. Every utterance is annotated with a label while description of a scene is not annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'Friends' Dataset",
"sec_num": "3"
},
{
"text": "The inter-annotator agreement for a subset of 105 scenes 6 (around 1600 utterances) is 0.44. This is comparable with other manually annotated datasets in sarcasm detection . Table 1 shows the relevant statistics of the complete dataset (in addition to 105 scenes as mentioned above). There are 17338 utterances in 913 scenes. Out of these, 1888 utterances are labeled as sarcastic. Average length of a scene is 18.6 utterances. Table 2 shows additional statistics. Table 2(a) shows that Chandler is the character with highest proportion of sarcastic utterances (22.24%). Table 2(b) shows that sarcastic utterances have higher surface positive word score 7 (1.55) than non-sarcastic (0.97) or overall utterances (1.03). This validates the past observation that sarcasm is often expressed through positive words (and sometimes contrasted with negation) . Finally, Table 2 (c) shows that sarcastic utterances also have higher proportion of non-verbal indicators (action words) (28.23%) than non-sarcastic or overall utterances. ",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 1",
"ref_id": null
},
{
"start": 428,
"end": 435,
"text": "Table 2",
"ref_id": null
},
{
"start": 862,
"end": 869,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "'Friends' Dataset",
"sec_num": "3"
},
{
"text": "To ensure that our hypothesis is not dependent on choice of features, we show our results on two configurations: (a) when dataset-derived features (i.e., novel features designed based on our dataset) are used, and (b) when features reported in prior work are used. We describe these in forthcoming subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "We design our dataset-derived features based on information available in our dataset. An utterance consists of three parts: 1. Speaker: The name of the speaker is the first word of an utterance, and is followed by a colon. In case of the second utterance in Figure 2 , the speaker is 'Ross' while in the third, the speaker is 'Chandler'.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset-derived Features",
"sec_num": "4.1"
},
{
"text": "2. Spoken words: This is the textual portion of what the speaker says. In the second utterance in Figure 2, the spoken words are 'Chandler's utterance, sentence 1..'.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 104,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset-derived Features",
"sec_num": "4.1"
},
{
"text": "Actions that a speaker performs while speaking the utterance are indicated in parentheses. These are useful clues that form additional context. Unlike speaker and spoken words, action words may or may not be present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action words:",
"sec_num": "3."
},
{
"text": "In the second utterance in Figure 2 , there are no action words while in the third utterance, 'action Chandler does while reading this' are action words.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Action words:",
"sec_num": "3."
},
{
"text": "Based on this information, we design three categories of features (listed in Table 3 ). These are:",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Action words:",
"sec_num": "3."
},
{
"text": "1. Lexical Features: These are unigrams in the spoken words. We experimented with both count and boolean representations, and the results are comparable. We report values for boolean representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action words:",
"sec_num": "3."
},
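As a small illustration of the boolean unigram representation described above, the following sketch uses scikit-learn's CountVectorizer with binary=True; the example utterances are placeholders, not items from the dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical spoken-word portions of two utterances.
spoken_words = [
    "I went to that tanning place your wife suggested.",
    "Was that place... The Sun?",
]

# binary=True records presence/absence of each unigram instead of its count.
vectorizer = CountVectorizer(binary=True)
X_lexical = vectorizer.fit_transform(spoken_words)   # sparse matrix: utterances x vocabulary
```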
{
"text": "In order to capture conversational context, we use three kinds of features. Action words are unigrams indicated within parentheses. The intuition is that a character 'raising her eyebrows' (action) is different from saying \"raising her eyebrows\". As the next feature, we use sentiment score of this utterance. These are two values: positive and negative scores. These scores are the positive and negative words present in an utterance. The third kind of conversational context features is the sentiment score of the previous utterance. This captures phenomena such as a negative remark from one character eliciting sarcasm from another. This is similar to the situation described in . Thus, for the third utterance in Figure 2 , the sentiment score of Chandler's utterance forms the Sentiment score feature, while that of Ross's utterance forms Sentiment score of previous utterance.",
"cite_spans": [],
"ref_spans": [
{
"start": 718,
"end": 726,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Conversational Context Features:",
"sec_num": "2."
},
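A rough sketch of how the sentiment-score features could be computed with the simple lexicon lookup mentioned in footnote 7; the tiny word lists and function names below are illustrative, not the lexicon or code used in the paper.

```python
POSITIVE = {"love", "best", "great", "good"}      # placeholder positive lexicon
NEGATIVE = {"hate", "worst", "bad", "terrible"}   # placeholder negative lexicon

def sentiment_scores(utterance):
    """Count positive and negative lexicon hits in an utterance."""
    words = [w.lower().strip(".,!?") for w in utterance.split()]
    return sum(w in POSITIVE for w in words), sum(w in NEGATIVE for w in words)

def conversational_context_features(scene, index):
    """Sentiment of the current utterance plus sentiment of the previous one."""
    pos, neg = sentiment_scores(scene[index])
    prev_pos, prev_neg = sentiment_scores(scene[index - 1]) if index > 0 else (0, 0)
    return {"pos": pos, "neg": neg, "prev_pos": prev_pos, "prev_neg": prev_neg}
```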
{
"text": "We use name of the speaker and name of the speaker-listener pair as features. The listener is assumed to be the speaker of the previous utterance in the sequence 8 . The speaker feature aims to capture the sarcastic nature of each of these characters, while the speaker-listener feature aims to capture interpersonal interactions between different characters. In the context of third utterance in Figure 2 , the speaker is 'Chandler' while speaker-listener pair is 'Chandler-Ross'.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Speaker Context Features:",
"sec_num": "3."
},
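A corresponding sketch for the speaker context features; the dictionary-based feature encoding here is an assumption made for illustration, not the paper's implementation.

```python
def speaker_context_features(speakers, index):
    """Speaker and speaker-listener pair features; the listener is the previous speaker."""
    speaker = speakers[index]
    listener = speakers[index - 1] if index > 0 else "NULL"   # first utterance has no previous speaker
    return {"speaker=" + speaker: 1, "pair=" + speaker + "-" + listener: 1}
```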
{
"text": "We also compare our results with features presented in two prior works 9 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features from Prior Work",
"sec_num": "4.2"
},
{
"text": "1. Features given in Gonz\u00e1lez-Ib\u00e1nez et al. 2. Features given in Buschmeier et al. (2014) : In addition to unigrams, the features used by them are: (a) Hyperbole (captured by three positive or negative words in a row), (b) Quotation marks and ellipsis, (c) Positive/Negative Sentiment Scores followed by punctuation (this includes more than one positive or negative words with an exclamation mark or question mark at the end), (d) Positive/Negative Sentiment Scores followed by ellipsis (this includes more than one positive or negative words with a '...' at the end, (e) Punctuation, (f) Interjections, and (g) Laughter expressions (such as 'haha').",
"cite_spans": [
{
"start": 65,
"end": 89,
"text": "Buschmeier et al. (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features from Prior Work",
"sec_num": "4.2"
},
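To illustrate a few of the surface features listed above (ellipsis, punctuation, interjections, laughter expressions), here is a sketch using simple string checks and regular expressions; the interjection list and regexes are rough approximations, not the implementation of Buschmeier et al. (2014).

```python
import re

INTERJECTIONS = {"oh", "wow", "ugh", "huh", "ah"}   # illustrative list only

def surface_features(utterance):
    lowered = utterance.lower()
    tokens = [t.strip(",.!?") for t in lowered.split()]
    return {
        "has_ellipsis": "..." in utterance,
        "has_exclamation_or_question": bool(re.search(r"[!?]", utterance)),
        "has_quotation_marks": '"' in utterance,
        "has_interjection": any(t in INTERJECTIONS for t in tokens),
        "has_laughter": bool(re.search(r"\bha(ha)+h?\b", lowered)),
    }
```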
{
"text": "We experiment with three classification techniques and two sequence labeling techniques:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
{
"text": "1. Classification Techniques: We use Na\u00efve Bayes and SVM as classification techniques. Na\u00efve Bayes implementation provided in Scikit (Pedregosa et al., 2011) is used. For SVM, we use SVM-Light (Joachims, 1999) . Since SVM does not do well for datasets with a large class imbalance (Akbani et al., 2004) 10 , we use sampling to deal with this skew as done in Kotsiantis et al. (2006) . We experiment with two configurations:",
"cite_spans": [
{
"start": 193,
"end": 209,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF11"
},
{
"start": 281,
"end": 302,
"text": "(Akbani et al., 2004)",
"ref_id": "BIBREF0"
},
{
"start": 358,
"end": 382,
"text": "Kotsiantis et al. (2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
{
"text": "\u2022 SVM (Oversampled) i.e., SVM (O): Sarcastic utterances are duplicated to match the count of non-sarcastic utterances. \u2022 SVM (Undersampled) i.e., SVM (U): Random non-sarcastic utterances are dropped to match the count of sarcastic utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
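A minimal sketch of the two sampling configurations above, assuming plain Python lists of (features, label) pairs; the label strings and helper names are illustrative.

```python
import random

def oversample(instances):
    """SVM (O): duplicate sarcastic instances until both classes have equal counts."""
    sarcastic = [x for x in instances if x[1] == "sarcastic"]
    others = [x for x in instances if x[1] == "non-sarcastic"]
    extra = [random.choice(sarcastic) for _ in range(len(others) - len(sarcastic))]
    return instances + extra

def undersample(instances):
    """SVM (U): randomly drop non-sarcastic instances to match the sarcastic count."""
    sarcastic = [x for x in instances if x[1] == "sarcastic"]
    others = [x for x in instances if x[1] == "non-sarcastic"]
    return sarcastic + random.sample(others, len(sarcastic))
```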
{
"text": "2. Sequence Labeling Techniques: We use SVM-HMM by Altun et al. (2003) and SEARN by Daum\u00e9 III et al. (2009) . SVM-HMM is a sequence labeling algorithm that combines Support Vector Machines and Hidden Markov Models. SEARN is a sequence labeling algorithm that integrates search and learning to solve prediction problems. The implementation of SEARN that we use relies on perceptron as the base classifier. Daum\u00e9 III et al. (2009) show that SEARN outperforms other sequence labeling techniques (such as CRF) for tasks like character recognition and named entity class identification.",
"cite_spans": [
{
"start": 51,
"end": 70,
"text": "Altun et al. (2003)",
"ref_id": "BIBREF1"
},
{
"start": 84,
"end": 107,
"text": "Daum\u00e9 III et al. (2009)",
"ref_id": "BIBREF6"
},
{
"start": 405,
"end": 428,
"text": "Daum\u00e9 III et al. (2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
{
"text": "Thus, we wish to validate our hypothesis in case of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
{
"text": "1. Our data-derived features as given in Section 4.1. We report weighted average values of precision, recall and F-score computed using five-fold crossvalidation for all experiments, and class-wise precision, recall, F-score wherever necessary. The folds are created on the basis of sequences and not utterances. This means that a sequence does not get split across different folds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
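The scene-level fold creation described above can be sketched with scikit-learn's GroupKFold, which keeps all utterances of a scene in the same fold; the arrays below are placeholders, and three splits are used only because the toy example has three scenes (the paper uses five folds).

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(12).reshape(6, 2)            # feature vectors for 6 utterances (toy values)
y = np.array([0, 1, 0, 0, 1, 0])           # 1 = sarcastic, 0 = non-sarcastic
scene_ids = np.array([0, 0, 1, 1, 2, 2])   # scene (sequence) each utterance belongs to

# Grouping by scene id guarantees that no sequence is split across folds.
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=scene_ids):
    assert not set(scene_ids[train_idx]) & set(scene_ids[test_idx])
```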
{
"text": "Section 6.1 describes performance of traditional models that use dataset-derived features (as given in Section 4.1), while Section 6.2 does so for features from prior work (as given in Section 4.2). Table 4 compares the performance of the two formulations: classification and sequence labeling, for our dataset-derived features. When classification techniques are used, we obtain the best F-score of 79.8% with SVM (O). However, when sequence labeling techniques are used, the best F-score is 84.2%. Table 6 : Feature Combinations for which different techniques exhibit their best performance for datasetderived features Table 5 shows class-wise precision/recall values for these techniques. The best value of precision for sarcastic class is obtained in case of SVM-HMM, i.e., 35.8%. The best F-score for the sarcastic class is in the case of SVM (O) (29%) whereas that for the nonsarcastic class is in the case of SVM-HMM (93.6%). Tables 4 and 5 show that it is due to a high recall, sequence labeling techniques perform better than classification techniques.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 4",
"ref_id": null
},
{
"start": 500,
"end": 507,
"text": "Table 6",
"ref_id": null
},
{
"start": 621,
"end": 628,
"text": "Table 5",
"ref_id": null
},
{
"start": 933,
"end": 947,
"text": "Tables 4 and 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "It may be argued that the benefit in case of sequence labeling is due to our features, and is not a benefit of the sequence labeling formulation itself. Hence, we ran these five techniques with all possible combinations of features. Table 6 shows the best performance obtained by each of the classifiers, and the corresponding (best) feature combinations. The table can be read as: SVM (O) obtains a F-score of 81.2% when spoken words, speaker, speaker-listener and sentiment score are used as features. The table shows that even if we consider the best performance of each of the techniques (with different feature sets), classifiers are not able to perform as well as sequence labeling. The best sequence labeling algorithm (SVM-HMM) gives a Fscore of 84.4% while the best classifier (SVM(O)) has an F-score of 81.2%. We emphasize that both SVM-HMM and SEARN have higher recall values than the three classification techniques.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance on Dataset-derived Features",
"sec_num": "6.1"
},
{
"text": "These findings show that for our novel set of dataset-derived features, sequence labeling works better than classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance on Dataset-derived Features",
"sec_num": "6.1"
},
{
"text": "We now show our evaluation on two sets of features reported in prior work. These sets of features as given in two prior works by Buschmeier et al. (2014) and Gonz\u00e1lez-Ib\u00e1nez et al. (2011) . Table 7 compares classification techniques with sequence labeling techniques for features given in Gonz\u00e1lez-Ib\u00e1nez et al. (2011) 11 . Table 8 shows corresponding values for features given in Buschmeier et al. (2014) 12 . For features by Gonz\u00e1lez-Ib\u00e1nez et al. (2011) , SVM (O) gives the best F-score for classification techniques (79%), whereas SVM-HMM shows an improvement of 4% over that value. Recall increases by 11.8% when sequence labeling techniques are used instead of classification.",
"cite_spans": [
{
"start": 129,
"end": 153,
"text": "Buschmeier et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 158,
"end": 187,
"text": "Gonz\u00e1lez-Ib\u00e1nez et al. (2011)",
"ref_id": "BIBREF10"
},
{
"start": 381,
"end": 405,
"text": "Buschmeier et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 427,
"end": 456,
"text": "Gonz\u00e1lez-Ib\u00e1nez et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 7",
"ref_id": null
},
{
"start": 324,
"end": 331,
"text": "Table 8",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Performance on Features Reported in Prior Work",
"sec_num": "6.2"
},
{
"text": "In case of features by Buschmeier et al. (2014) , the improvement in performance achieved by using sequence labeling as against classification is 2.8%. The best recall for classification techniques is 77.8% (for SVM (O)). In this case as well, the recall increases by 10% for sequence labeling.",
"cite_spans": [
{
"start": 23,
"end": 47,
"text": "Buschmeier et al. (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance on Features Reported in Prior Work",
"sec_num": "6.2"
},
{
"text": "These findings show that for two feature sets reported in prior work, sequence labeling works bet-ter than classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance on Features Reported in Prior Work",
"sec_num": "6.2"
},
{
"text": "Features from Gonzalez-Ibanez et al. 2011Formulation as Classification ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm P (%) R (%) F (%)",
"sec_num": null
},
{
"text": "In previous sections, we show that quantitatively, sequence labeling techniques perform better than classification techniques. In this section, we delve into the question: 'What does this improved performance mean, in terms of forms of sarcasm that sequence labeling techniques are able to handle better than classification?' To understand the implication of using sequence labeling, we randomly select 100 examples that were correctly labeled by sequence labeling techniques but incorrectly labeled by classification techniques. Our annotators manually annotated them into one among four categories of sarcasm as given in Camp (2012) . Table 9 shows the proportion of these utterances. Likeprefixed and illocutionary sarcasm types are the ones that require context for understanding sarcasm. We observe that around 71% of our examples belong to these two types of sarcasm. This means that our intuition that sequence labeling will better capture conversational context reflects in the forms of sarcasm for which sequence labeling improves over classification.",
"cite_spans": [
{
"start": 623,
"end": 634,
"text": "Camp (2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 637,
"end": 644,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "On the other hand, examples where our system makes errors can be grouped as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Topic Drift: Eisterhold et al. (2006) state that topic change/drift is a peculiarity of sarcasm. For example, when Phoebe gets irritated with another character talking for a long time, she says,\"See? Vegetarianism benefits everyone\". This was misclassified by our system.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "Eisterhold et al. (2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Short expressions: Short expressions occurring in the context of a conversation may express sarcasm. Expressions such as \"Oh God, is it\" and \"Me too\" were misclassified as non-sarcastic. However, in the context of the scene, these were sarcastic utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Dry humor: In the context of a conversation, sarcasm may be expressed in response to a long serious description. Our system was unable to capture such sarcasm in some cases. When a character gives long description of advantages of a particular piece of clothing, Chandler asks sarcastically, \"Are you aware that you're still talking?\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Implications in popular culture: The utterance \"Ok, I smell smoke. Maybe that's cause someone's pants are on fire\" was misclassified by our system. The popular saying 'Liar, liar, pants on fire 13 ' was the context that was missing in our case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Background knowledge: When a petite girl walks in, Rachel says \"She is so cute! You could fit her right in your little pocket\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Long-range connection: In comedy shows like Friends, humor is often created by introducing a concept in the initial part and then repeating it as an impactful, sarcastic remark. For example, in beginning of an episode, Ross says that he has never grabbed a spoon before -and at the end of the episode, he says with a sarcastic tone \"I grabbed a spoon\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Incongruity with situation in the scenes: Utterances that were incongruent with non-verbal situations could not be adequately identified. For example, Ross enters an office wearing a piece of steel bandaged to his nose. In response, the receptionist says, \"Oh, that's attractive\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "\u2022 Sarcasm as a part of a longer sentence: In several utterances, sarcasm is a subset of a longer sentence, and hence, the non-sarcastic portion may dominate the rest of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "These errors point to future directions in which sequence labeling algorithms may be optimized to improve their impact on sarcasm detection. Table 9 : Proportion of utterances of different types of sarcasm that were correctly labeled by sequence labeling but incorrectly labeled by classification techniques",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Sarcasm detection approaches using different features have been reported (Tepperman et al., 2006; Kreuz and Caucci, 2007; Veale and Hao, 2010; Gonz\u00e1lez-Ib\u00e1nez et al., 2011; Reyes et al., 2012; Buschmeier et al., 2014) . However, Wallace et al. 2014show how context beyond the target text (i.e., extra-textual context) is necessary for humans as well as machines, in order to identify sarcasm. Following this, the new trend in sarcasm detection is to explore the use of such extratextual context (Khattri et al., 2015; Rajadesingan et al., 2015; Bamman and Smith, 2015; Wallace, 2015) . (Wallace, 2015) uses meta-data about reddits to predict sarcasm in a reddit 14 comment. (Rajadesingan et al., 2015) present a suite of classifier features that capture different kinds of context: context related to the author, conversation, etc. The new trend in sarcasm detection is, thus, to look at additional context beyond the text where sarcasm is to be predicted. The work closest to ours is by Wang et al. (2015) . They use a labeled dataset of 1500 tweets, the labels for which are obtained automatically. Due to their automatically labeled gold dataset and their lack of focus on labeling utterances in a sequence, our analysis seems to be more rigorous. Our work substantially differs from theirs: (a) They do not deal with dialogue, (b) Their goal is to predict sarcasm of a tweet, using series of past tweets as the context i.e., only the last tweet in the sequence. Our goal is to predict sarcasm in every element of the sequence: a lot more rigorous task. Note that the two differ in the way precision/recall values will be computed. (c) Their 'gold' standard dataset is annotated by an automatic classifier. On the other hand, every textual unit (utterance) in our gold standard dataset is manually labeled -making our dataset and hence, findings lot more reliable. (c) They consider three types of sequences: conversational, historical and topic-based. Historical context is series of tweets by this author, while topic-based context is series of tweets containing a hashtag in the tweet to be classified. We do not use the two because they do not seem suitable for our dataset. They show that a sequence labeling algorithm works well to detect sar-14 www.reddit.com casm of a tweet with a pseudo-sequence generated using such additional context. They attempt to obtain correct prediction only for a single target tweet with no consideration to other elements in the context, which is completely different from our goal. They do not bother about other elements in the sequence but only use an algorithm to perform sarcasm detection of a tweet.",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "(Tepperman et al., 2006;",
"ref_id": "BIBREF24"
},
{
"start": 98,
"end": 121,
"text": "Kreuz and Caucci, 2007;",
"ref_id": "BIBREF15"
},
{
"start": 122,
"end": 142,
"text": "Veale and Hao, 2010;",
"ref_id": "BIBREF26"
},
{
"start": 143,
"end": 172,
"text": "Gonz\u00e1lez-Ib\u00e1nez et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 173,
"end": 192,
"text": "Reyes et al., 2012;",
"ref_id": "BIBREF23"
},
{
"start": 193,
"end": 217,
"text": "Buschmeier et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 495,
"end": 517,
"text": "(Khattri et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 518,
"end": 544,
"text": "Rajadesingan et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 545,
"end": 568,
"text": "Bamman and Smith, 2015;",
"ref_id": "BIBREF2"
},
{
"start": 569,
"end": 583,
"text": "Wallace, 2015)",
"ref_id": "BIBREF28"
},
{
"start": 674,
"end": 701,
"text": "(Rajadesingan et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 988,
"end": 1006,
"text": "Wang et al. (2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Several approaches for sequence labeling in sentiment classification have been studied. Zhao et al. (2008) perform sentiment classification using conditional random fields. deal with emotion classification. Using a dataset of children's stories manually annotated at the sentence level, they employ HMM to identify sequential structure and a classifier to predict emotion in a particular sentence. Mao and Lebanon (2006) present a isotonic CRF that predicts global and local sentiment of documents, with additional mechanism for author-specific distributions and smoothing sentiment curves. Yessenalina et al. (2010) present a joint learning algorithm for sentencelevel subjectivity labeling and document-level sentiment labeling. Choi and Cardie (2010) deal with sequence learning to jointly identify scope of opinion polarity expressions, and polarity labels. Taking inspiration from use of sequence labeling for sarcasm detection, our work takes the first step to show if sequence labeling techniques are helpful at all. They experiment with MPQA corpus that is labeled at the sentence level for polarity as well as intensity. Specialized sequence labeling techniques like these are the next step to our first step: showing if sequence labeling techniques are helpful at all, for sarcasm detection of dialogue.",
"cite_spans": [
{
"start": 88,
"end": 106,
"text": "Zhao et al. (2008)",
"ref_id": null
},
{
"start": 398,
"end": 420,
"text": "Mao and Lebanon (2006)",
"ref_id": "BIBREF17"
},
{
"start": 591,
"end": 616,
"text": "Yessenalina et al. (2010)",
"ref_id": "BIBREF30"
},
{
"start": 731,
"end": 753,
"text": "Choi and Cardie (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We explored how sequence labeling can be used for sarcasm detection of dialogue. We formulated sarcasm detection of dialogue as a task of labeling each utterance in a sequence, with one among two labels: sarcastic and non-sarcastic. For our experiments, we created a manually annotated dataset of transcripts from a popular TV show 'Friends'. Our dataset consisted of 913 scenes where every utterance was annotated as sarcastic or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "9"
},
{
"text": "We experiment with: (a) a novel set of features derived from our dataset, (b) sets of features from two prior works. Our dataset-derived features are: (a) lexical features, (b) conversational context features, and (c) author context features. Using these features, we compared two classes of learning techniques: classifiers (SVM (undersampled), SVM (oversampled) and Na\u00efve Bayes) and sequence labeling techniques (SVM-HMM and SEARN). For our classifiers, the best Fscore was obtained with SVM (O) (i.e. 79.8%) while the best F-score for sequence labeling techniques was obtained using SVM-HMM (i.e. 84.2%). Even in case of the best combinations of our features for each algorithm, both sequence labeling techniques outperformed the classifiers. In addition, we also experimented with features introduced in two prior works. We observed an improvement of 2.8% for features in Buschmeier et al. (2014) and 4% for features in Gonz\u00e1lez-Ib\u00e1nez et al. (2011) when sequence labeling techniques were used as against classifiers. In all cases, sequence labeling techniques had a substantially high recall as compared to classification techniques (10% in case of Buschmeier et al. (2014) , 12% in case of Gonz\u00e1lez-Ib\u00e1nez et al. (2011) ). To understand which forms of sarcasm get correctly labeled by sequence labeling (and not by classification), we manually evaluated 100 examples. 71% of these examples consisted of sarcasm that could be understood only with conversational context. Our error analysis points to interesting future work for sarcasm detection of dialogue such as longrange connection, lack of conversational clues, and sarcasm a part of long utterances.",
"cite_spans": [
{
"start": 876,
"end": 900,
"text": "Buschmeier et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 924,
"end": 953,
"text": "Gonz\u00e1lez-Ib\u00e1nez et al. (2011)",
"ref_id": "BIBREF10"
},
{
"start": 1154,
"end": 1178,
"text": "Buschmeier et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 1196,
"end": 1225,
"text": "Gonz\u00e1lez-Ib\u00e1nez et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "9"
},
{
"text": "Thus, we observe that for sarcasm detection of our dataset, in case of different feature configurations, sequence labeling performs better than classification. Our observations establish the efficacy of sequence labeling techniques for sarcasm detection of dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "9"
},
{
"text": "Future work on repeating these experiments for other forms of dialogue (such as twitter conversations, chat transcripts, etc.) is imperative. Also, a combination of unified sarcasm and emotion detection using sequence labeling is another promising line of work. It would be interesting to see if deep learning-based models that perform sequence labeling perform better than those that perform classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "9"
},
{
"text": "As defined by the Oxford Dictionary. 2 Common world knowledge here refers to a general sentiment map of situations to sentiment. For example, being stranded in traffic is a negative situation to most.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.imdb.com/title/tt0108778/ 4 http://www.opensubtitles.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "without textual content, keeping in view copyright restrictions.6 For these scenes, the annotators later discussed and arrived at a consensus-they were then added to the dataset. The remaining scenes are done by either of the two annotators.7 This is computed using a simple lexicon lookup, as in case of conversational context features below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The first utterance in a sequence has a null value for previous speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The two prior works are chosen based on what information was available in our dataset for the purpose of reimplementation. For example, approaches that use the Twitter profile information or the follower/friends structure in the Twitter, cannot be computed for our dataset.10 We also observe the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The paper reports best accuracy of 65.44% for their dataset. This shows that our implementation is competent.12 The paper reports best F-score of 67.8% for their dataset. This shows that our implementation is competent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.urbandictionary.com/define.php?term=Liar %20Liar%20Pants%20On%20Fire",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Jun Zhao, Kang Liu, and Gen Wang. 2008. Adding redundant features for crfs-based sentence sentiment classification. In Proceedings of the conference on empirical methods in natural language processing, pages 117-126. Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We express our gratitude towards our annotators, Rajita Shukla and Jaya Saraswati. We also thank Prerana Singhal for her support. Aditya's PhD is funded by TCS Research Scholar Fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Applying support vector machines to imbalanced datasets",
"authors": [
{
"first": "Rehan",
"middle": [],
"last": "Akbani",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Kwek",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Japkowicz",
"suffix": ""
}
],
"year": 2004,
"venue": "Machine learning: ECML 2004",
"volume": "",
"issue": "",
"pages": "39--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rehan Akbani, Stephen Kwek, and Nathalie Japkow- icz. 2004. Applying support vector machines to imbalanced datasets. In Machine learning: ECML 2004, pages 39-50. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hidden markov support vector machines",
"authors": [
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2003,
"venue": "ICML",
"volume": "3",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasemin Altun, Ioannis Tsochantaridis, Thomas Hof- mann, et al. 2003. Hidden markov support vector machines. In ICML, volume 3, pages 3-10.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextualized sarcasm detection on twitter",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Ninth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A Smith. 2015. Contextual- ized sarcasm detection on twitter. In Ninth Interna- tional AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An impact analysis of features in a classification approach to irony detection in product reviews",
"authors": [
{
"first": "Konstantin",
"middle": [],
"last": "Buschmeier",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantin Buschmeier, Philipp Cimiano, and Roman Klinger. 2014. An impact analysis of features in a classification approach to irony detection in prod- uct reviews. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sen- timent and Social Media Analysis, pages 42-49.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sarcasm, pretense, and the semantics/pragmatics distinction*. No\u00fbs",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Camp",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "46",
"issue": "",
"pages": "587--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabeth Camp. 2012. Sarcasm, pretense, and the se- mantics/pragmatics distinction*. No\u00fbs, 46(4):587- 634.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical sequential learning for extracting opinions and their attributes",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 conference short papers",
"volume": "",
"issue": "",
"pages": "269--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2010. Hierarchical se- quential learning for extracting opinions and their at- tributes. In Proceedings of the ACL 2010 conference short papers, pages 269-274. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Search-based structured prediction",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semi-supervised recognition of sarcastic sentences in twitter and amazon",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the Four- teenth Conference on Computational Natural Lan- guage Learning, pages 107-116. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Reactions to irony in discourse: Evidence for the least disruption principle",
"authors": [
{
"first": "Jodi",
"middle": [],
"last": "Eisterhold",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Attardo",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Boxer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Pragmatics",
"volume": "38",
"issue": "8",
"pages": "1239--1256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jodi Eisterhold, Salvatore Attardo, and Diana Boxer. 2006. Reactions to irony in discourse: Evidence for the least disruption principle. Journal of Pragmat- ics, 38(8):1239-1256.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Irony in talk among friends. Metaphor and symbol",
"authors": [
{
"first": "W",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibbs",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "15",
"issue": "",
"pages": "5--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond W Gibbs. 2000. Irony in talk among friends. Metaphor and symbol, 15(1-2):5-27.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying sarcasm in twitter: a closer look",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Gonz\u00e1lez-Ib\u00e1nez",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "581--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1nez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twit- ter: a closer look. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 581-586. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Joachims. 1999. Making large-scale SVM learning practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods -Support Vec- tor Learning, chapter 11, pages 169-184. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Harnessing context incongruity for sarcasm detection",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vinita",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "757--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing, volume 2, pages 757-762.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Your sentiment precedes you: Using an authors historical tweets to predict sarcasm",
"authors": [
{
"first": "Anupam",
"middle": [],
"last": "Khattri",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"James"
],
"last": "Carman",
"suffix": ""
}
],
"year": 2015,
"venue": "6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anupam Khattri, Aditya Joshi, Pushpak Bhat- tacharyya, and Mark James Carman. 2015. Your sentiment precedes you: Using an authors historical tweets to predict sarcasm. In 6th Workshop on Com- putational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), page 25.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Handling imbalanced datasets: A review",
"authors": [
{
"first": "Sotiris",
"middle": [],
"last": "Kotsiantis",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Kanellopoulos",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Pintelas",
"suffix": ""
}
],
"year": 2006,
"venue": "GESTS International Transactions on Computer Science and Engineering",
"volume": "30",
"issue": "1",
"pages": "25--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sotiris Kotsiantis, Dimitris Kanellopoulos, Panayiotis Pintelas, et al. 2006. Handling imbalanced datasets: A review. GESTS International Transactions on Computer Science and Engineering, 30(1):25-36.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Lexical influences on the perception of sarcasm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Roger",
"suffix": ""
},
{
"first": "Gina",
"middle": [
"M"
],
"last": "Kreuz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Caucci",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Workshop on computational approaches to Figurative Language",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger J Kreuz and Gina M Caucci. 2007. Lexical in- fluences on the perception of sarcasm. In Proceed- ings of the Workshop on computational approaches to Figurative Language, pages 1-4. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Opensub-titles2016: Extracting large parallel corpora from movie and tv subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. Opensub- titles2016: Extracting large parallel corpora from movie and tv subtitles. In Proceedings of LREC 2016.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Isotonic conditional random fields and local sentiment flow",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lebanon",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Mao and Guy Lebanon. 2006. Isotonic condi- tional random fields and local sentiment flow. In Advances in neural information processing systems, pages 961-968.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Opinion mining and sentiment analysis. Foundations and trends in information retrieval",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in infor- mation retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. The Journal of Ma- chine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Linguistic inquiry and word count: Liwc",
"authors": [
{
"first": "Martha",
"middle": [
"E"
],
"last": "James W Pennebaker",
"suffix": ""
},
{
"first": "Roger J",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Booth",
"suffix": ""
}
],
"year": 2001,
"venue": "Mahway: Lawrence Erlbaum Associates",
"volume": "71",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Asso- ciates, 71:2001.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sarcasm detection on twitter: A behavioral modeling approach",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Rajadesingan",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 97-106. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "sure, i did the right thing\": a system for sarcasm detection in speech",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rakov",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 2013,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "842--846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rakov and Andrew Rosenberg. 2013. \" sure, i did the right thing\": a system for sarcasm detection in speech. In INTERSPEECH, pages 842-846.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "From humor recognition to irony detection: The figurative language of social media",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Buscaldi",
"suffix": ""
}
],
"year": 2012,
"venue": "Data & Knowledge Engineering",
"volume": "74",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Reyes, Paolo Rosso, and Davide Buscaldi. 2012. From humor recognition to irony detection: The figurative language of social media. Data & Knowledge Engineering, 74:1-12.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "yeah right\": sarcasm recognition for spoken dialogue systems",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Tepperman",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Traum",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2006,
"venue": "INTER-SPEECH. Citeseer",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Tepperman, David R Traum, and Shrikanth Narayanan. 2006. \" yeah right\": sarcasm recog- nition for spoken dialogue systems. In INTER- SPEECH. Citeseer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Icwsm-a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. Icwsm-a great catchy name: Semi-supervised recog- nition of sarcastic sentences in online product re- views. In ICWSM.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Detecting ironic intent in creative comparisons",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "Yanfen",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2010,
"venue": "ECAI",
"volume": "215",
"issue": "",
"pages": "765--770",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Veale and Yanfen Hao. 2010. Detecting ironic in- tent in creative comparisons. In ECAI, volume 215, pages 765-770.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Humans require context to infer ironic intent (so computers probably do, too)",
"authors": [
{
"first": "C",
"middle": [],
"last": "Byron",
"suffix": ""
},
{
"first": "Laura Kertz Do Kook",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "512--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byron C Wallace, Laura Kertz Do Kook Choe, and Eu- gene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 512-516.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sparse, contextually informed models for irony detection: Exploiting user communities,entities and sentiment",
"authors": [
{
"first": "C",
"middle": [],
"last": "Byron",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byron C Wallace. 2015. Sparse, contextually informed models for irony detection: Exploiting user commu- nities,entities and sentiment. In ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Twitter sarcasm detection exploiting a context-based model",
"authors": [
{
"first": "Zelin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhijian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ruimin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yafeng",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2015,
"venue": "Web Information Systems Engineering-WISE 2015",
"volume": "",
"issue": "",
"pages": "77--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zelin Wang, Zhijian Wu, Ruimin Wang, and Yafeng Ren. 2015. Twitter sarcasm detection exploiting a context-based model. In Web Information Systems Engineering-WISE 2015, pages 77-91. Springer.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Multi-level structured models for documentlevel sentiment classification",
"authors": [
{
"first": "Ainur",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Yisong",
"middle": [],
"last": "Yue",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1046--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for document- level sentiment classification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1046-1056. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Emotion analysis of children's stories with context information",
"authors": [
{
"first": "Zhengchen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Shuzhi",
"middle": [],
"last": "Sam Ge",
"suffix": ""
}
],
"year": 2014,
"venue": "Asia-Pacific Signal and Information Processing Association, 2014 Annual Summit and Conference (APSIPA)",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengchen Zhang, Minghui Dong, and Shuzhi Sam Ge. 2014. Emotion analysis of children's stories with context information. In Asia-Pacific Signal and Information Processing Association, 2014 Annual Summit and Conference (APSIPA), pages 1-7. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Example from our Dataset: Part of a Scene of this paper.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "These features are: (a) Interjections, (b) Punctuations, (c) Pragmatic features (where we include action words as well), (d) Sentiment lexicon-based features from LIWC (Pennebaker et al., 2001) (where they include counts of linguistic process words, positive/negative emotion words, etc.).",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "Dataset statistics related to: (a) percentage of sarcastic utterances for six lead characters, (b) average surface positive and negative scores for the two classes, (c) percentage of sarcastic and non-sarcastic utterances with actions",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Character % sarcastic</td><td/><td>Surface</td><td>Surface</td><td/></tr><tr><td>Phoebe Joey Rachel</td><td>9.70 11.05 9.74</td><td/><td>Positive Sentiment Score</td><td>Negative Sentiment Score</td><td>Sarcastic</td><td>Actions (%) 28.23</td></tr><tr><td>Monica</td><td>8.87</td><td>Sarcastic</td><td>1.55</td><td>1.20</td><td colspan=\"2\">Non-sarcastic 23.95</td></tr><tr><td>Chandler</td><td>22.24</td><td colspan=\"2\">Non-sarcastic 0.97</td><td>0.75</td><td>All</td><td>24.43</td></tr><tr><td>Ross</td><td>8.42</td><td>All</td><td>1.03</td><td>0.79</td><td/></tr><tr><td>Table 2: Feature</td><td>Description</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Lexical Features</td><td/><td/><td/></tr><tr><td>Spoken</td><td colspan=\"2\">Unigrams of spoken words</td><td/><td/><td/></tr><tr><td>words</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Conversational Context Features</td><td/><td/><td/></tr><tr><td>Actions</td><td colspan=\"2\">Unigrams of action words</td><td/><td/><td/></tr><tr><td>Sentiment</td><td colspan=\"3\">Positive &amp; Negative score of utter-</td><td/><td/></tr><tr><td>Score</td><td>ance</td><td/><td/><td/><td/></tr><tr><td>Previous</td><td colspan=\"3\">Positive &amp; Negative score of previ-</td><td/><td/></tr><tr><td>Sentiment</td><td colspan=\"2\">ous utterance in the sequence</td><td/><td/><td/></tr><tr><td>Score</td><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Speaker Context Features</td><td/><td/><td/></tr><tr><td>Speaker</td><td colspan=\"2\">Speaker of this utterance</td><td/><td/><td/></tr><tr><td>Speaker-</td><td colspan=\"3\">Pair of speaker of this utterance</td><td/><td/></tr><tr><td>Listener</td><td colspan=\"3\">and speaker of the previous utter-</td><td/><td/></tr><tr><td/><td>ance</td><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF2": {
"text": "Our Dataset-Derived Features",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF6": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td>: Comparison of sequence labeling techniques</td></tr><tr><td>with classification techniques, for features reported in</td></tr><tr><td>Buschmeier et al. (2014)</td></tr></table>",
"num": null
}
}
}
}