{
"paper_id": "U19-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:07:51.900818Z"
},
"title": "Detecting Target of Sarcasm using Ensemble Methods",
"authors": [
{
"first": "Pradeesh",
"middle": [],
"last": "Parameswaran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Otago New Zealand",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Andrew",
"middle": [],
"last": "Trotman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Otago New Zealand",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Veronica",
"middle": [],
"last": "Liesaputra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Otago New Zealand",
"location": {}
},
"email": "[email protected]"
},
{
"first": "David",
"middle": [],
"last": "Eyers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Otago New Zealand",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe our methods for detecting the target of sarcasm as part of the ALTA 2019 shared task. We use a combination of an ensemble of classifiers and a rule-based system. Our team obtained a Dice-Sorensen Coefficient score of 0.37150, which placed 2nd on the public leaderboard. Although no team beat the baseline score for the private dataset, we present our findings along with some of the challenges and future improvements which could be used to tackle the problem.",
"pdf_parse": {
"paper_id": "U19-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe our methods for detecting the target of sarcasm as part of the ALTA 2019 shared task. We use a combination of an ensemble of classifiers and a rule-based system. Our team obtained a Dice-Sorensen Coefficient score of 0.37150, which placed 2nd on the public leaderboard. Although no team beat the baseline score for the private dataset, we present our findings along with some of the challenges and future improvements which could be used to tackle the problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We humans are complex creatures that use language as a communication tool to express our thoughts to one another (Sabbagh, 1999) . One of the ways that we communicate with another person is through the use of verbal irony. Verbal irony is communication in which the words being used differ from the intended meaning (Sperber, 1984) . An example of this comes from Austen (1813) Pride & Prejudice, when Darcy says of his future beloved wife that she is \"tolerable but not handsome enough to tempt me\".",
"cite_spans": [
{
"start": 122,
"end": 137,
"text": "(Sabbagh, 1999)",
"ref_id": "BIBREF18"
},
{
"start": 339,
"end": 354,
"text": "(Sperber, 1984)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sarcasm is a kind of verbal irony that expresses a cynical attitude towards a person or circumstance (Gibbs, 2000) . In our daily lives, sarcasm is often conveyed using the tone of our voice and/or our facial expression, to signal to the other person that we are being sarcastic (Cheang and Pell, 2008) . Recently, with the growth of social media, many researchers have embarked on various ways of detecting sarcasm automatically. Most of this work focused on detecting sarcasm on Twitter and in online reviews (Bamman and Smith, 2015; Rajadesingan et al., 2015; Amir et al., 2016) .",
"cite_spans": [
{
"start": 101,
"end": 114,
"text": "(Gibbs, 2000)",
"ref_id": "BIBREF7"
},
{
"start": 296,
"end": 319,
"text": "(Cheang and Pell, 2008)",
"ref_id": "BIBREF4"
},
{
"start": 533,
"end": 557,
"text": "(Bamman and Smith, 2015;",
"ref_id": "BIBREF2"
},
{
"start": 558,
"end": 584,
"text": "Rajadesingan et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 585,
"end": 603,
"text": "Amir et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior work treated this problem as a binary text classification problem. To the best of our knowledge, little work has been done on identifying the target of sarcasm in sarcastic text. The earliest work in this domain was (Joshi et al., 2019) . Identifying the target would help in certain Natural Language Processing (NLP) tasks, such as improving cyberbullying detection by helping to identify the target of ridicule (Raisi and Huang, 2016) . It has also prompted the organisers at the Australasian Language Technology Association (ALTA) to run a shared task to tackle the problem.",
"cite_spans": [
{
"start": 250,
"end": 270,
"text": "(Joshi et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 459,
"end": 482,
"text": "(Raisi and Huang, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We employed a 2-phase approach to this task. In the first phase, we used an ensemble of classifiers along with a meta-classifier to classify sarcasm targets which are marked as \"OUTSIDE\". First, we built a Support Vector Machine (SVM) using word embeddings to classify the text, followed by a Logistic Classifier. Finally, we used a Linear Classifier to combine the results of the two classifiers. In the second phase of our system, we used a rule-based approach to extract the target sarcasm words from texts that are marked as \"NOT OUTSIDE\". With this proposed system, we achieved 2nd place on the public leaderboard of the ALTA competition. We describe our method in detail in the methodology section. Next, we present our results along with some of the challenges and recommendations for improving the task. We end our paper with our plans for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset 1 provided by the organizers of the ALTA 2019 shared task consists of a collection of sarcastic texts. There are 950 sarcastic texts for training and 544 for testing. The training dataset comes with the sarcastic text (text), along with the set of words which are the target of sarcasm (tar- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
{
"text": "get). If the target of sarcasm is not in the text, it is marked as \"OUTSIDE\". Our task was to predict the target of the sarcasm. (Figure 1 shows the system architecture.)",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "SMOTE",
"sec_num": null
},
{
"text": "We decided to analyse the training data further to understand the distribution and the pattern of the dataset. Table 1 describes the pattern. We observed that several instances of \"NOT OUTSIDE\" have 14-19 sarcasm targets (around half of the words in the sentence), while others have only one sarcasm target. We found no correlation between the sentence length and the number of sarcasm targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SMOTE",
"sec_num": null
},
{
"text": "We employed a 2-phase approach to tackle this problem. In the first phase, we used a series of classifiers, followed by a rule-based system in the second phase. In this section, we describe our method in detail, along with the steps that we performed. The complete system architecture is shown in Figure 1 . We have also made our system's source code publicly available on GitHub. 2",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We observed that the ratio of \"OUTSIDE\" to \"NOT OUTSIDE\" in our training dataset is not balanced. In order to improve our classifiers' performance, we used SMOTE (Dal Pozzolo et al., 2002) to balance our dataset. SMOTE achieves this by artificially over-sampling the minority class. This has been demonstrated to improve the performance of classifiers when the dataset is small (Luengo et al., 2011).",
"cite_spans": [
{
"start": 163,
"end": 189,
"text": "(Dal Pozzolo et al., 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance",
"sec_num": "3.1"
},
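SMOTE's interpolation step can be sketched in a few lines. This is a minimal sketch with synthetic stand-in vectors rather than the shared-task embeddings, and `smote_oversample` is a hypothetical helper (an off-the-shelf implementation would normally be used instead):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating between a sample and one of its k nearest neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_minority = rng.normal(3.0, 1.0, size=(6, 4))    # 6 "NOT OUTSIDE" vectors
X_new = smote_oversample(X_minority, n_new=14, rng=rng)
print(X_new.shape)  # (14, 4): minority class grows from 6 to 20 samples
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled data stays inside the minority class's convex hull.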
{
"text": "We used a pre-trained Universal Sentence Encoder (USE) model (Cer et al., 2018) to convert each text into a high-dimensional vector representation. USE is known to work well on noisy social media data. We experimented with stemming our data to increase accuracy; however, it negatively impacted our results.",
"cite_spans": [
{
"start": 63,
"end": 81,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding",
"sec_num": "3.2"
},
{
"text": "Our dataset was obtained from Khodak et al. (2019) 's Reddit 3 Corpus, where both sarcastic and non-sarcastic texts are present, but there is no information about the target of sarcasm. We were inspired by Wallace et al. (2014)'s finding that humans require context to understand sarcasm. In their work, when annotators were asked to classify sarcastic comments, on average 30% of the comments required the annotators to ask for additional context, such as the previous comment, before they were able to decide. We hypothesized that we could improve our classifiers' performance by adding additional context extracted from Khodak et al. (2019) 's corpus to our original dataset.",
"cite_spans": [
{
"start": 47,
"end": 67,
"text": "Khodak et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 653,
"end": 673,
"text": "Khodak et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Features",
"sec_num": "3.3"
},
{
"text": "We converted each Subreddit label found in Khodak et al. (2019)'s dataset into categorical values using one-hot encoding. Categories that were not present in both the training and testing data were grouped together into a category known as \"Others\". We also extracted the number of likes and dislikes on each post. As these are continuous features, we used Z-score normalization (Equation 1) to improve our classifier's performance (Jayalakshmi and Santhakumaran, 2011), where x 1 is the value of the feature, \u00b5 1 is the mean value of the feature in the training data, and \u03c3 1 is the standard deviation of the feature in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Features",
"sec_num": "3.3"
},
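The one-hot scheme with an "Others" bucket can be sketched as follows. The category names here are invented for illustration; the real features were Subreddit labels from Khodak et al. (2019)'s corpus:

```python
def one_hot_with_others(labels, known):
    """One-hot encode subreddit labels; anything outside `known`
    (i.e. not seen in both train and test data) falls into "Others"."""
    categories = sorted(known) + ["Others"]
    vectors = []
    for label in labels:
        bucket = label if label in known else "Others"
        vectors.append([1 if c == bucket else 0 for c in categories])
    return categories, vectors

known = {"politics", "gaming", "movies"}     # hypothetical shared categories
cats, vecs = one_hot_with_others(["gaming", "rarepuppers"], known)
print(cats)   # ['gaming', 'movies', 'politics', 'Others']
print(vecs)   # [[1, 0, 0, 0], [0, 0, 0, 1]]
```

Fixing the category order from the training data ensures the test vectors line up column-for-column with the training vectors.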
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x = \\frac{x_1 - \\mu_1}{\\sigma_1}",
"eq_num": "(1)"
}
],
"section": "Contextual Features",
"sec_num": "3.3"
},
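Equation (1) applied in code, using the training-set statistics to scale both training and test features (the like counts below are invented for illustration):

```python
import numpy as np

likes_train = np.array([12.0, 40.0, 7.0, 55.0, 21.0])  # hypothetical counts
mu_1, sigma_1 = likes_train.mean(), likes_train.std()

def z_score(x_1):
    # Equation (1): scale with the *training* mean and standard deviation,
    # so test-time features are normalised consistently with training.
    return (x_1 - mu_1) / sigma_1

print(z_score(27.0))  # 0.0 (27 is exactly the training mean)
```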
{
"text": "In our first phase, we used a Support Vector Machine (SVM) with the word embedding from USE as the input to our SVM Classifier. SVM is known to perform very well on high-dimensional input vectors (Goudjil et al., 2018) . We experimented with other classifiers, such as Logistic Regression and Random Forest, but they did not yield good results. For our SVM Classifier, we set the classification threshold to 0.425 and above for a text to be classified as \"OUTSIDE\". This was done to minimize false positives. Figure 2 shows the various thresholds and the accuracy score with regard to true positives (TP) and false positives (FP). The additional features that we extracted from Khodak et al. (2019) 's corpus were used as input vectors for our Logistic Classifier. Just like our SVM Classifier, we fine-tuned the Logistic Classifier's threshold to 0.40 and above for a text to be classified as \"OUTSIDE\". Figure 3 shows the performance of the classifier. The threshold values for both classifiers were obtained by performing 3-fold cross-validation.",
"cite_spans": [
{
"start": 199,
"end": 221,
"text": "(Goudjil et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 708,
"end": 728,
"text": "Khodak et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 535,
"end": 543,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 943,
"end": 951,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phase 1",
"sec_num": "3.4"
},
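The thresholding described above amounts to comparing each classifier's "OUTSIDE" probability against its tuned cut-off. A sketch, where the probabilities are invented rather than real classifier outputs:

```python
import numpy as np

def classify_outside(p_outside, threshold):
    """Label a text "OUTSIDE" only when the classifier's estimated
    probability clears the tuned threshold (0.425 for the SVM,
    0.40 for the logistic classifier), minimising false positives."""
    return np.where(p_outside >= threshold, "OUTSIDE", "NOT OUTSIDE")

# Hypothetical probability outputs for four texts.
p_svm = np.array([0.30, 0.44, 0.55, 0.41])
labels = classify_outside(p_svm, 0.425).tolist()
print(labels)  # ['NOT OUTSIDE', 'OUTSIDE', 'OUTSIDE', 'NOT OUTSIDE']
```

Raising the threshold above the default 0.5 would trade recall for precision; tuning it per classifier, as described above, balances the two.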
{
"text": "We introduced cosine similarity to further strengthen the meta-classifier's performance. It is calculated using the word embeddings we obtained earlier. If we obtain a similarity score of 0.70 or higher, we assign a score of 1, otherwise a 0. Finally, for our meta-classifier, we used a Linear Classification (D\u017eeroski and \u017denko, 2004) .",
"cite_spans": [
{
"start": 306,
"end": 331,
"text": "(D\u017eeroski and\u017denko, 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 1",
"sec_num": "3.4"
},
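The binarised cosine-similarity feature can be sketched as follows; the vectors here are toy stand-ins for USE embeddings:

```python
import numpy as np

def cosine_feature(u, v, threshold=0.70):
    """Binary meta-feature: 1 when the cosine similarity of two sentence
    embeddings reaches the 0.70 threshold, else 0."""
    sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1 if sim >= threshold else 0

a = np.array([1.0, 2.0, 3.0])
print(cosine_feature(a, 2.0 * a))                      # 1 (same direction)
print(cosine_feature(a, np.array([3.0, -1.0, 0.2])))   # 0 (dissimilar)
```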
{
"text": "We used the probability scores from both of the classifiers and the cosine similarity as input vectors to the meta-classifier. We did not fine-tune the linear classifier and used the default value of 0.5 and above to classify a text as \"OUTSIDE\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 1",
"sec_num": "3.4"
},
{
"text": "In our second phase, we used a rule-based system to extract the target of sarcasm from the texts. The rules that we used are described in Table 2 , and were adopted from (Joshi et al., 2019) . We implemented the rules using the NLTK Toolkit. 4 We applied some minor adjustments to R1 and R2 that increased performance by 4.49% and 39.68% respectively over the original rules, as described below.",
"cite_spans": [
{
"start": 167,
"end": 187,
"text": "(Joshi et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 235,
"end": 236,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Phase 2",
"sec_num": "3.5"
},
{
"text": "For R1, we included the subject of each pronoun. For example, in the training set one of the targets of sarcasm was identified as \"you,op\". 5 The original rule set would only identify \"you\". As for R2, in order to get the Named Entities (NE) recognized, we used Truecasing (Lita et al., 2003) . This helped to correct the case of our noisy data, which further improved NE recognition. Lowercasing everything does not work, as it presents a problem in distinguishing named entities from common nouns. For example, the word \"apple\" may be interpreted as the fruit and not the company. However, due to time constraints, we did not examine the other rules in depth, but intend to do so as future work.",
"cite_spans": [
{
"start": 138,
"end": 139,
"text": "5",
"ref_id": null
},
{
"start": 271,
"end": 290,
"text": "(Lita et al., 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 2",
"sec_num": "3.5"
},
{
"text": "In order to determine how effective each rule was, we ran the rules one by one over the training data after excluding all the texts which were marked as \"OUTSIDE\". We used the Dice-Sorensen Coefficient (DSC) to measure the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 2",
"sec_num": "3.5"
},
{
"text": "D(A, B) = \\frac{2 \\times |A \\cap B|}{|A| + |B|} \\quad (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D(A, B)",
"sec_num": null
},
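Equation (2) over word sets, as used to score each rule. A sketch, assuming the case where both sets are empty (the "OUTSIDE" prediction matching an "OUTSIDE" gold label) counts as a perfect match:

```python
def dice_sorensen(predicted, actual):
    """Equation (2): Dice-Sorensen Coefficient over two word sets."""
    a, b = set(predicted), set(actual)
    if not a and not b:
        # Both sides empty: the "OUTSIDE" case, scored as a perfect match.
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Gold target "you,op": predicting only "you" already costs a third of
# the score, illustrating how hard a high DSC is to reach.
print(round(dice_sorensen(["you"], ["you", "op"]), 4))  # 0.6667
print(dice_sorensen([], []))                            # 1.0
```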
{
"text": "where A is the set of predicted words and B is the set of actual words. Table 3 shows the individual performance of each rule. In order to determine which rules were likely to give us the highest scores, we implemented a genetic algorithm to obtain a weight for each of the rules. We ran our genetic algorithm over 500 generations with an 80% probability of mutation. Figure 4 shows the performance of our genetic algorithm. The algorithm assigned positive weights to R1, R2, R3, and R5; the other rules were given negative weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 344,
"end": 352,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "D(A, B)",
"sec_num": null
},
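A minimal sketch of the weight-search loop. The fitness function and target weights below are toy stand-ins (the paper's fitness was the DSC of the weighted rule combination, run for 500 generations), and the per-weight mutation is an assumption standing in for the stated 80% mutation probability:

```python
import random

random.seed(0)

# Toy fitness: distance of a 6-rule weight vector from a hypothetical ideal.
TARGET = [1.0, 0.8, 0.5, 0.0, 0.6, 0.0]

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, p=0.8):
    # Perturb each weight with 80% probability, echoing the mutation rate.
    return [w + random.gauss(0.0, 0.1) if random.random() < p else w
            for w in weights]

population = [[random.uniform(-1.0, 1.0) for _ in range(6)] for _ in range(20)]
for _ in range(200):  # the paper ran 500 generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # keep the fittest half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = max(population, key=fitness)
```

Elitism (keeping the fittest half unchanged) guarantees the best weight vector found so far is never lost between generations.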
{
"text": "We investigated the results and the behaviour of the system by submitting our runs to the competition. Kaggle was used as the platform for submitting runs. In Kaggle, the data provided by the ALTA organizers is split into a public portion (public leaderboard) and a private portion (private leaderboard). The private portion serves as a validation set that allows the organizers to determine the effectiveness of each system. The scores are evaluated using the DSC (Equation 2). We summarise and present our results in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The objective set by the ALTA organizers was to beat the two baselines they provided. The first baseline always predicted \"OUTSIDE\". The second always predicted the pronouns from the text as the target for the \"NOT OUTSIDE\" texts. Our system beat both baselines on the public leaderboard, but we did not manage to beat the baseline on the private leaderboard. In fact, no team beat the private baseline scores. Prior to proposing our final system, we built and evaluated various systems, including using just one classifier (either SVM or Logistic Regression) together with the rule-based system. Then we used the ensemble of classifiers. We believe that our ensemble of classifiers performed poorly on the Table 5 : DSC Scores private score because it might have been biased towards the public data. On the other hand, using the additional features alone to classify yielded poor results, as our system could not identify \"OUTSIDE\" accurately. This prompted us to look deeper into the problem and to offer several ways in which it can be addressed. We discuss these in subsections 4.2 and 4.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 729,
"end": 736,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.1"
},
{
"text": "Based on Equation 2, we can deduce that the score for predicting \"OUTSIDE\" is easier to obtain than the score for correctly predicting the target-of-sarcasm words, which may be trickier. To demonstrate this point, consider the following two examples taken from the training data. The targets of sarcasm given by the judges are highlighted in bold: \"Oh man and while we are at it we can make it so when the boss dies you can hand pick the piece of gear you want!.\" (\"OUTSIDE\") \"The sun is gonna destroy us in a few billion years anyways, so why does it matter if we die out in the next few centuries?.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "In the first example the target of sarcasm is outside, so the DSC yields a perfect score of 1 if it is predicted properly. In the second example, it is very hard to get a perfect DSC of 1. In Table 5 , we show how the score varies depending on the number of words predicted correctly and the number of predicted words. We can clearly see that it is very challenging to get a very high score even when we can predict all of the relevant targets.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "One way of addressing the evaluation of the system is to use the accuracy score as an additional metric to determine the effectiveness of the system. This would also help to gauge the capability of systems in identifying true positives (TP) and true negatives (TN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "In their work, Joshi et al. (2016) highlighted some of the difficulties that annotators face in identifying sarcasm and irony. From our failure analysis, we determined that human annotations can be inconsistent. We show two of the examples from the training dataset, with the target of sarcasm annotated by the judges in bold.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Joshi et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Perspective & Relevance Judgement",
"sec_num": "4.3"
},
{
"text": "\"OP is just some white knight who always comes to the aid of the female, if you knew her you'd know how much of a whore she is..\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Perspective & Relevance Judgement",
"sec_num": "4.3"
},
{
"text": "\"$10 OP wants to do something crazy with trading cards and is just trying to get you all to sell them to him on the cheap\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Perspective & Relevance Judgement",
"sec_num": "4.3"
},
{
"text": "In the first example, we can clearly make the association that \"you\" refers to \"OP\", yet only \"OP\" is identified as the target of sarcasm. In the second example, however, both of the words \"OP,him\" are identified as the target of sarcasm by the judges. This shows that even for sentences constructed in a similar manner, the way judges identify the target of sarcasm differs from one person to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Perspective & Relevance Judgement",
"sec_num": "4.3"
},
{
"text": "In order to address this gap, we propose that additional assessments should be conducted. For example, in the Text Retrieval Conference (TREC), participants submit their runs and human assessors then decide whether the documents retrieved by the search engines are relevant to the given queries (Hawking et al., 1999) . We believe that adopting this approach for this task, instead of the current approach, would help to address the shortcomings of relying entirely on a single set of human annotations.",
"cite_spans": [
{
"start": 302,
"end": 324,
"text": "(Hawking et al., 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Perspective & Relevance Judgement",
"sec_num": "4.3"
},
{
"text": "We presented an approach to identify the target of sarcasm. We competed in the ALTA 2019 Competition under the team name \"orangutan\". Our best-performing system used an ensemble of classifiers. Despite achieving a score of 0.37150 and beating the baselines on the public portion within Kaggle, we did not manage to beat the baseline on the private dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "We believe that there is still much work to be done in this domain. As part of future work we are planning to tackle this problem in several ways, including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "\u2022 Improving our classifier;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "\u2022 Further improving the rule-based system; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "\u2022 Experimenting with deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "https://www.kaggle.com/c/ alta-2019-challenge/data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/prasys/ sarcasm-detection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Reddit http://www.reddit.com is a social news aggregation and discussion website",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nltk.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Many thanks to Kat Lilly and Dr. Diego Moll-Aliod for their time in proofreading this paper. We would also like to thank the ALTA organizers for their support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modelling Context with User Embeddings for Sarcasm Detection in Social Media",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Mario",
"middle": [
"J"
],
"last": "Silva",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "167--177",
"other_ids": {
"DOI": [
"10.18653/v1/k16-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Silvio Amir, Byron C. Wallace, Hao Lyu, Paula Car- valho, and Mario J. Silva. 2016. Modelling Con- text with User Embeddings for Sarcasm Detection in Social Media. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 167-177.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pride and Prejudice",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Austen. 1813. Pride and Prejudice. Rout- ledge/Thoemmes, London.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextualized sarcasm detection on twitter",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Conference on Web and Social Media, ICWSM 2015",
"volume": "",
"issue": "",
"pages": "574--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A. Smith. 2015. Contextu- alized sarcasm detection on twitter. Proceedings of the 9th International Conference on Web and Social Media, ICWSM 2015, pages 574-577.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Universal Sentence Encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder. Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The sound of sarcasm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Henry",
"suffix": ""
},
{
"first": "Marc",
"middle": [
"D"
],
"last": "Cheang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pell",
"suffix": ""
}
],
"year": 2008,
"venue": "Speech Communication",
"volume": "50",
"issue": "5",
"pages": "366--381",
"other_ids": {
"DOI": [
"10.1016/j.specom.2007.11.003"
]
},
"num": null,
"urls": [],
"raw_text": "Henry S. Cheang and Marc D. Pell. 2008. The sound of sarcasm. Speech Communication, 50(5):366-381.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comparison of balancing techniques for unbalanced datasets",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pozzolo",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Caelen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bontempi",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Artificial Intelligence Research",
"volume": "16",
"issue": "1",
"pages": "732--735",
"other_ids": {
"DOI": [
"10.1613/jair.953"
]
},
"num": null,
"urls": [],
"raw_text": "A. Dal Pozzolo, O. Caelen, and G. Bontempi. 2002. Comparison of balancing techniques for unbalanced datasets. Journal of Artificial Intelligence Research 16, 16(1):732-735.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Is combining classifiers with stacking better than selecting the best one?",
"authors": [
{
"first": "Saso",
"middle": [],
"last": "D\u017eeroski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bernard\u017eenko",
"suffix": ""
}
],
"year": 2004,
"venue": "Machine Learning",
"volume": "54",
"issue": "",
"pages": "255--273",
"other_ids": {
"DOI": [
"10.1023/B:MACH.0000015881.36452.6e"
]
},
"num": null,
"urls": [],
"raw_text": "Saso D\u017eeroski and Bernard\u017denko. 2004. Is combining classifiers with stacking better than selecting the best one? Machine Learning, 54(3):255-273.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Irony in Talk Among Friends",
"authors": [
{
"first": "W",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibbs",
"suffix": ""
}
],
"year": 2000,
"venue": "Metaphor and Symbol",
"volume": "15",
"issue": "2",
"pages": "1--2",
"other_ids": {
"DOI": [
"10.1080/10926488.2000.9678862"
]
},
"num": null,
"urls": [],
"raw_text": "Raymond W Gibbs. 2000. Metaphor and Symbol Irony in Talk Among Friends Irony in Talk Among Friends. Metaphor and Symbol, 15(2):1-2.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Novel Active Learning Method Using SVM for Text Classification",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Goudjil",
"suffix": ""
},
{
"first": "Mouloud",
"middle": [],
"last": "Koudil",
"suffix": ""
},
{
"first": "Mouldi",
"middle": [],
"last": "Bedda",
"suffix": ""
},
{
"first": "Noureddine",
"middle": [],
"last": "Ghoggali",
"suffix": ""
}
],
"year": 2018,
"venue": "International Journal of Automation and Computing",
"volume": "15",
"issue": "3",
"pages": "290--298",
"other_ids": {
"DOI": [
"10.1007/s11633-015-0912-z"
]
},
"num": null,
"urls": [],
"raw_text": "Mohamed Goudjil, Mouloud Koudil, Mouldi Bedda, and Noureddine Ghoggali. 2018. A Novel Active Learning Method Using SVM for Text Classifica- tion. International Journal of Automation and Com- puting, 15(3):290-298.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of TREC-7 Very Large Collection Track",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hawking",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Thistlewaite",
"suffix": ""
}
],
"year": 1999,
"venue": "NIST Special Publication 500-242: The Seventh Text REtrieval Conference (TREC 7)",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hawking, Nick Craswell, and Paul Thistlewaite. 1999. Overview of TREC-7 Very Large Collection Track. In NIST Special Publication 500-242: The Seventh Text REtrieval Conference (TREC 7), pages 1-13.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical Normalization and Back Propagationfor Classification",
"authors": [
{
"first": "T",
"middle": [],
"last": "Jayalakshmi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Santhakumaran",
"suffix": ""
}
],
"year": 2011,
"venue": "International Journal of Computer Theory and Engineering",
"volume": "3",
"issue": "1",
"pages": "89--93",
"other_ids": {
"DOI": [
"10.7763/ijcte.2011.v3.288"
]
},
"num": null,
"urls": [],
"raw_text": "T. Jayalakshmi and A. Santhakumaran. 2011. Statisti- cal Normalization and Back Propagationfor Classi- fication. International Journal of Computer Theory and Engineering, 3(1):89-93.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sarcasm target identification: Dataset and an introductory approach",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J"
],
"last": "Carman",
"suffix": ""
}
],
"year": 2019,
"venue": "LREC 2018 - 11th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2676--2683",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pranav Goel, Pushpak Bhattacharyya, and Mark J. Carman. 2019. Sarcasm target iden- tification: Dataset and an introductory approach. LREC 2018 -11th International Conference on Language Resources and Evaluation, (2008):2676- 2683.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How Challenging is Sarcasm versus Irony Classification?: A Study With a Dataset from English Literature",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": ""
},
{
"first": "Meghna",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Jaya",
"middle": [],
"last": "Saraswati",
"suffix": ""
},
{
"first": "Rajita",
"middle": [],
"last": "Shukla",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "123--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vaibhav Tripathi, Pushpak Bhattacharyya, Mark Carman, Meghna Singh, Jaya Saraswati, and Rajita Shukla. 2016. How Challenging is Sarcasm versus Irony Classification?: A Study With a Dataset from English Literature. In Proceedings of the Australasian Language Technology Association Workshop 2016, pages 123-127.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A large self-annotated corpus for sarcasm",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
}
],
"year": 2019,
"venue": "LREC 2018 -11th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "641--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodra- halli. 2019. A large self-annotated corpus for sar- casm. LREC 2018 -11th International Conference on Language Resources and Evaluation, pages 641- 646.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "tRuEcasIng",
"authors": [
{
"first": "Lucian",
"middle": [],
"last": "Vlad Lita",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1",
"volume": "1",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucian Vlad Lita. 2003. tRuEcasIng. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 152-159.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Addressing data complexity for imbalanced data sets: Analysis of SMOTE-based oversampling and evolutionary undersampling",
"authors": [
{
"first": "Juli\u00e1n",
"middle": [],
"last": "Luengo",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Salvador",
"middle": [],
"last": "Garc\u00eda",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Herrera",
"suffix": ""
}
],
"year": 2011,
"venue": "Soft Computing",
"volume": "15",
"issue": "10",
"pages": "1909--1936",
"other_ids": {
"DOI": [
"10.1007/s00500-010-0625-8"
]
},
"num": null,
"urls": [],
"raw_text": "Juli\u00e1n Luengo, Alberto Fern\u00e1ndez, Salvador Garc\u00eda, and Francisco Herrera. 2011. Addressing data complexity for imbalanced data sets: Analysis of SMOTE-based oversampling and evolutionary un- dersampling. Soft Computing, 15(10):1909-1936.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cyberbullying Identification Using Participant-Vocabulary Consistency",
"authors": [
{
"first": "Elaheh",
"middle": [],
"last": "Raisi",
"suffix": ""
},
{
"first": "Bert",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML Workshop on #Data4Good: Machine Learning in Social Good Applications",
"volume": "",
"issue": "",
"pages": "46--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elaheh Raisi and Bert Huang. 2016. Cyberbullying Identification Using Participant-Vocabulary Consis- tency. In ICML Workshop on #Data4Good: Ma- chine Learning in Social Good Applications, pages 46-50, New York.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sarcasm Detection on Twitter",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Rajadesingan",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "WSDM '15: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {
"DOI": [
"10.1145/2684822.2685316"
]
},
"num": null,
"urls": [],
"raw_text": "Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm Detection on Twitter. In WSDM '15: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 97-106.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Communicative intentions and language: Evidence from right-hemisphere damage and autism",
"authors": [
{
"first": "Mark",
"middle": [
"A"
],
"last": "Sabbagh",
"suffix": ""
}
],
"year": 1999,
"venue": "Brain and Language",
"volume": "70",
"issue": "1",
"pages": "29--69",
"other_ids": {
"DOI": [
"10.1006/brln.1999.2139"
]
},
"num": null,
"urls": [],
"raw_text": "Mark A. Sabbagh. 1999. Communicative intentions and language: Evidence from right-hemisphere damage and autism. Brain and Language, 70(1):29- 69.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Verbal irony: Pretense or echoic mention?",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Sperber",
"suffix": ""
}
],
"year": 1984,
"venue": "Journal of Experimental Psychology: General",
"volume": "113",
"issue": "1",
"pages": "130--136",
"other_ids": {
"DOI": [
"10.1037/0096-3445.113.1.130"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Sperber. 1984. Verbal irony: Pretense or echoic mention? Journal of Experimental Psychology: General, 113(1):130-136.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Humans require context to infer ironic intent (so computers probably do, too)",
"authors": [
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Do",
"middle": [
"Kook"
],
"last": "Choe",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Kertz",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2014,
"venue": "52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "512--516",
"other_ids": {
"DOI": [
"10.3115/v1/p14-2084"
]
},
"num": null,
"urls": [],
"raw_text": "Byron C. Wallace, Do Kook Choe, Laura Kertz, and Eugene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In 52nd Annual Meeting of the Association for Com- putational Linguistics, ACL 2014 -Proceedings of the Conference, volume 2, pages 512-516.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Recall Score against Threshold for SVM Classifier",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "Distribution and Pattern of Training Data",
"num": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Phase 1</td></tr><tr><td>Sarcasm</td><td>Word</td><td>SVM</td><td/></tr><tr><td>Text</td><td>Embeddings</td><td colspan=\"2\">Classification</td><td>Linear</td></tr><tr><td/><td/><td/><td/><td>Classification</td><td>Cosine</td></tr><tr><td/><td/><td/><td/><td>using</td><td>Similarity</td></tr><tr><td/><td>Features Contextual</td><td colspan=\"2\">Classification Logistic</td><td>Ensemble</td></tr><tr><td/><td>Classified</td><td/><td/></tr><tr><td/><td>Text</td><td/><td colspan=\"2\">Run Rule Based</td></tr><tr><td/><td/><td/><td/><td>System to</td></tr><tr><td/><td/><td/><td colspan=\"2\">Extract Target</td></tr><tr><td/><td/><td/><td/><td>Words</td></tr><tr><td/><td/><td/><td>YES</td></tr><tr><td/><td/><td/><td colspan=\"2\">Mark Them</td></tr><tr><td/><td/><td/><td/><td>as</td></tr><tr><td/><td/><td>NO</td><td colspan=\"2\">\"OUTSIDE\"</td></tr><tr><td/><td/><td/><td colspan=\"2\">Phase 2</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "Performance of Rules Score",
"num": null,
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"text": "System Evaluation",
"num": null,
"content": "<table/>"
}
}
}
}