{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:56.323477Z"
},
"title": "Detecting Sarcasm in Conversation Context Using Transformer-Based Models",
"authors": [
{
"first": "Adithya",
"middle": [],
"last": "Avvaru",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sanath",
"middle": [],
"last": "Vobilisetty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Teradata India Pvt. Ltd",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sarcasm detection, regarded as one of the subproblems of sentiment analysis, is a very typical task because the introduction of sarcastic words can flip the sentiment of the sentence itself. To date, many research works revolve around detecting sarcasm in one single sentence and there is very limited research to detect sarcasm resulting from multiple sentences. Current models used Long Short Term Memory (Hochreiter and Schmidhuber, 1997) (LSTM) variants with or without attention to detect sarcasm in conversations. We showed that the models using state-of-the-art Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) (BERT), to capture syntactic and semantic information across conversation sentences, performed better than the current models. Based on the data analysis, we estimated that the number of sentences in the conversation that can contribute to the sarcasm and the results agrees to this estimation. We also perform a comparative study of our different versions of BERT-based model with other variants of LSTM model and XLNet (Yang et al., 2019) (both using the estimated number of conversation sentences) and find out that BERT-based models outperformed them.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Sarcasm detection, regarded as one of the subproblems of sentiment analysis, is a very typical task because the introduction of sarcastic words can flip the sentiment of the sentence itself. To date, many research works revolve around detecting sarcasm in one single sentence and there is very limited research to detect sarcasm resulting from multiple sentences. Current models used Long Short Term Memory (Hochreiter and Schmidhuber, 1997) (LSTM) variants with or without attention to detect sarcasm in conversations. We showed that the models using state-of-the-art Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) (BERT), to capture syntactic and semantic information across conversation sentences, performed better than the current models. Based on the data analysis, we estimated that the number of sentences in the conversation that can contribute to the sarcasm and the results agrees to this estimation. We also perform a comparative study of our different versions of BERT-based model with other variants of LSTM model and XLNet (Yang et al., 2019) (both using the estimated number of conversation sentences) and find out that BERT-based models outperformed them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For many NLP researchers from both academia and industry, sarcasm detection has been one of the most focused areas of research among many research problems like code-mixed sentiment analysis (Lal et al., 2019) , detection of offensive or hate speeches (Liu et al., 2019) , questionanswering(Soares and Parreiras, 2018), etc. One of the main reasons why sarcasm finds a significant portion of research work is because of its nature that the addition of a sarcastic clause or a word can alter the sentiment of the sentence.",
"cite_spans": [
{
"start": 191,
"end": 209,
"text": "(Lal et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 252,
"end": 270,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sarcasm is used to criticize people, to provide political or apolitical views, to make fun of ideas, etc., and the most common form of sarcasm usage is through text. Some major sources of the sarcastic text are social media platforms like Twitter, Instagram, Facebook, Quora, WhatsApp etc. Out of these, Twitter forms the major source of sarcastic content drawing attention from researchers across the globe (Bamman and Smith, 2015; Rajadesingan et al., 2015; Davidov et al., 2010) . Due to its inherent nature of flipping the context of the sentence, sarcasm in a sentence is difficult to detect even for humans (Chaudhari and Chandankhede, 2017) . Here, the context is considered only in one sentence. How do we deal with situations where the sarcastic sentence depends on a conversation context and the context spans over multiple sentences preceding the response sarcastic sentence? Addressing this problem may help in identifying the root cause of sarcasm in a larger context, which is even tougher because conversation sentences differ in number, some conversation sentences themselves may be sarcastic and response text may depend on more than one conversation sentences. This is the research problem that we are trying to address and are largely successful in building better models which outperformed the baseline F-measures of 0.6 for Reddit and 0.67 for Twitter datasets (Ghosh et al., 2018) . We have achieved Fmeasures of 0.752 for Twitter and 0.621 for Reddit datasets.",
"cite_spans": [
{
"start": 408,
"end": 432,
"text": "(Bamman and Smith, 2015;",
"ref_id": "BIBREF2"
},
{
"start": 433,
"end": 459,
"text": "Rajadesingan et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 460,
"end": 481,
"text": "Davidov et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 613,
"end": 647,
"text": "(Chaudhari and Chandankhede, 2017)",
"ref_id": "BIBREF4"
},
{
"start": 1382,
"end": 1402,
"text": "(Ghosh et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sarcasm is a form of figurative language where the meaning of a sentence does not hold and the interpretation is quite contrary. A quick survey about sarcasm detection and some of the earlier approaches is compiled by Joshi et al. (2017) . The problem of sarcasm detection is targeted in",
"cite_spans": [
{
"start": 218,
"end": 237,
"text": "Joshi et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Field Description label SARCASM or NOT SARCASM response Tweet or a Reddit post context Ordered list of dialogue (Bamman and Smith, 2015) . Davoodi and Kosseim (2017) used semi-supervised approaches to detect sarcasm. Another approach is automatic learning and exploiting word embeddings to recognize sarcasm (Amir et al., 2016) . Emojis also have a significant impact on the sarcastic nature of the text, which might help in detecting sarcasm better (Felbo et al., 2017) . Other approaches to detect sarcasm include Bi-Directional Gated Recurrent Neural Network (Bi-Directional GRNU) (Zhang et al., 2016) . Sarcasm detection in speech is also gaining importance (Castro et al., 2019) . Some of the earlier works involving conversation contexts in detecting sarcasm are trying to model conversation contexts and understand what part of conversation sentence was involved in triggering sarcasm (Ghosh et al., 2017 (Ghosh et al., , 2018 and identify the specific sentence that is sarcastic given a sarcastic post that contains multiple sentences (Ghosh et al., 2018) . Humans could infer sarcasm better with conversation context which emphasises the importance of conversation context (Wallace et al., 2014). The structure of the paper is as follows. In Section 3, we describe the dataset (fields provided in the train and the test data and an example data along with its explanation). Section 4 describes the feature extraction where the emphasis is on data preprocessing and the procedure to select conversation sentences. Section 5 describes the systems used in training the data whereas section 6 discusses the comparative results of various models. Section 7 presents concluding remarks and future direction of research.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "(Bamman and Smith, 2015)",
"ref_id": "BIBREF2"
},
{
"start": 139,
"end": 165,
"text": "Davoodi and Kosseim (2017)",
"ref_id": "BIBREF6"
},
{
"start": 308,
"end": 327,
"text": "(Amir et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 450,
"end": 470,
"text": "(Felbo et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 584,
"end": 604,
"text": "(Zhang et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 662,
"end": 683,
"text": "(Castro et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 892,
"end": 911,
"text": "(Ghosh et al., 2017",
"ref_id": "BIBREF10"
},
{
"start": 912,
"end": 933,
"text": "(Ghosh et al., , 2018",
"ref_id": "BIBREF9"
},
{
"start": 1043,
"end": 1063,
"text": "(Ghosh et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Field",
"sec_num": null
},
{
"text": "The data 1 we used for model building is taken from sarcasm detection shared task of the Sec- FigLang2020) . There are two types of data provided by the organizers: 1. Twitter dataset and 2. Reddit dataset. Training data contains the fields -\"label\", \"response\" and \"context\" and are described as shown in the Table 1. If the \"context\" contains three elements, \"c1\", \"c2\", \"c3\", in that order, then \"c2\" is a reply to \"c1\" and \"c3\" is a reply to \"c2\". Further, if the sarcastic \"response\" is \"r\", then \"r\" is a reply to \"c3\". Consider the example provided by the organizers: label: \"SARCASM\" response: \"Did Kelly just call someone else messy? Baaaahaaahahahaha\" context: [\"X is looking a First Lady should . #classact, \"didn't think it was tailored enough it looked messy\"] This example can be understood as \"Did Kelly...\" is a reply to its immediate context \"didn't think it was tailored...\" which is a reply to \"X is looking...\". and the label of the response is \"SARCASM\".",
"cite_spans": [
{
"start": 94,
"end": 106,
"text": "FigLang2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},
{
"text": "Testing data contains the fields -\"id\", \"response\" and \"context\" and are described as shown in the Table 2 . The data of both Twitter tweets and Reddit posts were organized into train and test sets. The number of samples in each of these datasets is shown in Table 3 . It is clear from the table that the data is balanced with the same number of sarcastic and nonsarcastic samples (Abercrombie and Hovy, 2016) .",
"cite_spans": [
{
"start": 381,
"end": 409,
"text": "(Abercrombie and Hovy, 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},
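{
"text": "As a brief aside (this snippet is ours, not part of the shared task description), the record structure above can be read with a few lines of Python, assuming the shared-task files are distributed as JSON lines with one record per line:\n\nimport json\n\ndef read_records(path):\n    # Each line holds one record with \"label\", \"response\" and \"context\".\n    with open(path, encoding=\"utf-8\") as f:\n        for line in f:\n            record = json.loads(line)\n            yield record[\"label\"], record[\"response\"], record[\"context\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},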
{
"text": "Identification for each test sample response Tweet or a Reddit post context Ordered list of dialogue The corpus data contains consecutive occurrences of periods (.), multiple spaces between words, more or consecutive punctuation marks like exclamation (!), etc. Since the data is collected from Twitter handles and Reddit posts, the data also contain hashtags and emoticons, which are some of the properties of the text extracted from social media. Hence, there is a great need to clean the data before any further processing and we followed multiple steps, for cleaning the data, as described below: 2. Demojizing the sentences that contain emoticons i.e., replacing emoticons with their corresponding texts. For example, is replaced with :stuck out tongue:.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Field Field Description id",
"sec_num": null
},
{
"text": "3. There are two ways of handling hashtagsone, remove the hashtag and two, extract the hashtag content. We took the second approach as we believe certain hashtags contain meaningful text. For example, consider the text Made $174 this month, I'm gonna buy a yacht! #poor. There are two parts to this sentence -Made $174 this month, which doesn't have any sentiment but it is understood that the money he got is less and the second one, I'm gonna buy a yacht!, which is a positive statement that he can buy something very costly. The addition of hashtag #poor flipped the first statement to negative sentiment. Ignoring #poor will lose the sarcastic impact on the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Field Field Description id",
"sec_num": null
},
{
"text": "No of sentences in a conversation Training Testing 5 or less 83% 90% 7 or less 90% 96% 10 or less 95% 98% 5. We have identified contracted and combined words (for example, we've, won't've, etc,.) and replaced them with their corresponding English equivalents (in this case, we have, will not have, etc,.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Field Field Description id",
"sec_num": null
},
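{
"text": "The cleaning steps above can be combined into a single routine. The following is a minimal sketch, not the authors' actual code; the emoji and contractions packages and the exact regular expressions are our assumptions:\n\nimport re\nimport emoji          # assumed third-party package for demojizing\nimport contractions   # assumed third-party package for expanding contractions\n\ndef clean(text):\n    # 1. Collapse consecutive punctuation marks into a single instance.\n    text = re.sub(r\"([!?.])\\\\1+\", r\"\\\\1\", text)\n    # 2. Replace emoji with their textual names, e.g. :stuck out tongue:.\n    text = emoji.demojize(text).replace(\"_\", \" \")\n    # 3. Keep hashtag content; drop only the '#' symbol.\n    text = text.replace(\"#\", \"\")\n    # 4. Punctuation marks are kept. 5. Expand contracted words.\n    text = contractions.fix(text)\n    # Normalize runs of whitespace.\n    return re.sub(r\"\\\\s+\", \" \", text).strip()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": null
},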
{
"text": "Twitter Dataset: Since the number of conversation sentences range from two to twenty, it is important to understand how many sentences can contribute to the sarcastic behavior. A quick analysis of Twitter data is provided by the Figure 1 and the Table 4 . The behavior of training and testing data follows similar trend as observed from the Figure 1 . We selected the last 7 conversation sentences out of all conversation sentences per Twitter tweet based on the following analysis:",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 246,
"end": 253,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 341,
"end": 349,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Selection of Conversation Sentences",
"sec_num": "4.2"
},
{
"text": "\u2022 If we have chosen to select 10 sentences or more, then around 50 percent of samples which have 2 context sentences should be padded with zeros after tokenization. If we have chosen to select 2 sentences, then we will end up losing more context information. There is this trade-off while selecting conversation sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of Conversation Sentences",
"sec_num": "4.2"
},
{
"text": "\u2022 It is unlikely that the response text depends on the farther context sentences. So, the response text largely depends on context sentences that are closest to the response text. Reddit Dataset: Here, the dataset composition is different compared to that of Twitter Dataset. The number of conversation sentences ranges from two to eight in train data with 99 percent of samples having five or fewer sentences but the number of conversation sentences in test data ranges from two to thirteen with only 70 percent of samples having five or fewer sentences. Figure 2 and the Table 5 depict this behaviour of Reddit data.",
"cite_spans": [],
"ref_spans": [
{
"start": 556,
"end": 564,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 573,
"end": 580,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Selection of Conversation Sentences",
"sec_num": "4.2"
},
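{
"text": "The percentages reported in Table 4 and Table 5 come from a simple count over context lengths. A sketch of that analysis (function and variable names are ours):\n\ndef coverage(records, thresholds=(5, 7, 10)):\n    # Percentage of samples whose conversation has at most n sentences.\n    lengths = [len(r[\"context\"]) for r in records]\n    total = len(lengths)\n    return {n: 100.0 * sum(l <= n for l in lengths) / total for n in thresholds}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of Conversation Sentences",
"sec_num": "4.2"
},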
{
"text": "As discussed in Section 4.2, we considered the last 7 cleaned sentences from the conversation sentences. The response text is a direct result of the conversation sentences. Hence, we concatenate all the selected conversation sentences together and with the cleaned response text. This final text is fed to the model for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training text finalization",
"sec_num": "4.3"
},
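{
"text": "A minimal sketch of this finalization step (the function name and the reuse of the clean routine from the preprocessing sketch are ours, not the authors' code):\n\ndef build_training_text(context, response, last_n=7):\n    # Keep the last_n conversation sentences closest to the response,\n    # then append the cleaned response itself.\n    selected = context[-last_n:]\n    return \" \".join(clean(s) for s in selected) + \" \" + clean(response)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training text finalization",
"sec_num": "4.3"
},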
{
"text": "There are several NLP models at our disposal to work with, some are pre-trained while others need to be trained from scratch. We have done experiments with LSTM, BiLSTM, Stacked LSTM and CNN-LSTM (Convolution Neural Network + LSTM) models which can be trained to capture sequence information. To avoid over-fitting, we have introduced dropout layers and taken early stopping measures while training. We split the training data into train data (to train the model) and validation data (10 percent of actual training data to validate the model and employ early stopping). We also have worked with pre-trained Transformer based BERT (bert-base-uncased) model and XLNet. The following steps are used to fine-tune the pre-trained BERT model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "5"
},
{
"text": "1. Tokenize the text (BERT requires the text to be in a predefined format with separators and class labels)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "5"
},
{
"text": "2. Create attention masks 3. Fine-tune the pre-trained BERT model so that the model parameters will conform to the input training data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "5"
},
{
"text": "In our model, training stops when F1-score on validation data goes below the earlier epoch's F1-score and the prediction is done on the earlier model for which validation F1-score is highest. Similar steps are performed to fine-tune XLNet model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "5"
},
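{
"text": "A compressed sketch of these fine-tuning steps, assuming the HuggingFace transformers library (hyperparameters are illustrative; the full training loop, batching and the F1-based early stopping described above are omitted):\n\nimport torch\nfrom transformers import BertTokenizer, BertForSequenceClassification\n\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\nmodel = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2)\n\ntexts = [\"didn't think it was tailored enough it looked messy did kelly just call someone else messy\"]\nlabels = torch.tensor([1])  # 1 = SARCASM, 0 = NOT SARCASM (our own encoding)\n\n# Steps 1 and 2: the tokenizer adds the [CLS]/[SEP] format and returns attention masks.\nbatch = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors=\"pt\")\n\n# Step 3: one optimization step of fine-tuning.\noptimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)\nmodel.train()\nloss = model(**batch, labels=labels).loss\nloss.backward()\noptimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "5"
},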
{
"text": "The LSTM model variants -LSTM, BiLSTM, Stacked-LSTM and Conv-LSTM models are applied to Twitter dataset and the F1-scores on test data are 0.67, 0.66, 0.66 and 0.67 respectively. The F1-scores of variants of BERT models considering different lengths of conversation sentences and XL-Net are depicted in Table 6 . We experimented by considering the last 3, 5 and 8 sentences for Reddit dataset and found that model that used 5 sentences outperformed the other two, probably because the model which used 3 sentences captured the context well while training but failed to apply it as the range of sentences' length in the test set is large compared to the train set. Similarly model with 8 samples had a lot of padded zeros as 99 percent of samples have five or fewer sentences which resulted in poor performance. The results of the experiments on Reddit dataset are depicted in Table 6 . Since LSTM variants did not perform well compared to BERT-based models, we focused more on data preparation part of our research work for Reddit dataset. It can be inferred from the results table that our hypothesis of taking seven latest sentences, for Twitter dataset, falls in-line with the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 876,
"end": 883,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Sarcasm detection in conversational context is an important research area which infuses more enthusiasm and encourages the researchers across the globe. We build models that outperformed the baseline results. Though the results in the Shared Task leaderboard shows that the top model achieved Fmeasure of 0.93 for the Twitter dataset and 0.83 for the Reddit dataset, there is a lot to work on the problem and find ways to improve the performance with a larger dataset. Use of a larger dataset might help in adding more context and help in improving accuracy. Currently, the models that are built are not generalised across datasets. Further research can focus on building a generalized model for multiple datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Putting Sarcasm Detection into Context: The Effects of Class Imbalance and Manual Labelling on Supervised Machine Classification of Twitter Conversations",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Abercrombie",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the ACL 2016 Student Research Workshop",
"volume": "",
"issue": "",
"pages": "107--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavin Abercrombie and Dirk Hovy. 2016. Putting Sarcasm Detection into Context: The Effects of Class Imbalance and Manual Labelling on Super- vised Machine Classification of Twitter Conversa- tions. In Proceedings of the ACL 2016 Student Re- search Workshop, pages 107-113.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modelling context with user embeddings for sarcasm detection in social media",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "M\u00e1rio",
"middle": [
"J"
],
"last": "Silva",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "167--177",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Silvio Amir, Byron C. Wallace, Hao Lyu, Paula Car- valho, and M\u00e1rio J. Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. In Proceedings of The 20th SIGNLL Con- ference on Computational Natural Language Learn- ing, pages 167-177, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextualized Sarcasm Detection on Twitter",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Ninth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A Smith. 2015. Contextual- ized Sarcasm Detection on Twitter. In Ninth Interna- tional AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards Multimodal Sarcasm Detection (An Obviously Perfect Paper)",
"authors": [
{
"first": "Santiago",
"middle": [],
"last": "Castro",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Ver\u00f3nica",
"middle": [],
"last": "P\u00e9rez-Rosas",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01815"
]
},
"num": null,
"urls": [],
"raw_text": "Santiago Castro, Devamanyu Hazarika, Ver\u00f3nica P\u00e9rez- Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards Multimodal Sar- casm Detection (An Obviously Perfect Paper). arXiv preprint arXiv:1906.01815.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Literature survey of sarcasm detection",
"authors": [
{
"first": "P",
"middle": [],
"last": "Chaudhari",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Chandankhede",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET)",
"volume": "",
"issue": "",
"pages": "2041--2046",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Chaudhari and C. Chandankhede. 2017. Litera- ture survey of sarcasm detection. In 2017 Interna- tional Conference on Wireless Communications, Sig- nal Processing and Networking (WiSPNET), pages 2041-2046.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semi-Supervised Recognition of Sarcastic Sentences in Twitter and Amazon",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the fourteenth conference on computational natural language learning",
"volume": "",
"issue": "",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-Supervised Recognition of Sarcastic Sentences in Twitter and Amazon. In Proceedings of the fourteenth conference on computational natu- ral language learning, pages 107-116. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic identification of AltLexes using monolingual parallel corpora",
"authors": [
{
"first": "Elnaz",
"middle": [],
"last": "Davoodi",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "195--200",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_027"
]
},
"num": null,
"urls": [],
"raw_text": "Elnaz Davoodi and Leila Kosseim. 2017. Automatic identification of AltLexes using monolingual paral- lel corpora. In Proceedings of the International Con- ference Recent Advances in Natural Language Pro- cessing, RANLP 2017, pages 195-200, Varna, Bul- garia. INCOMA Ltd.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyad",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.00524"
]
},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sarcasm analysis using conversation context",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "4",
"pages": "755--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversa- tion context. Computational Linguistics, 44(4):755- 792.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Role of Conversation Context for Sarcasm Detection in Online Interactions",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"Richard"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.06226"
]
},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Alexander Richard Fabbri, and Smaranda Muresan. 2017. The Role of Conversa- tion Context for Sarcasm Detection in Online Inter- actions. arXiv preprint arXiv:1707.06226.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic Sarcasm Detection: A Survey",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J"
],
"last": "Car",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "5",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J Car- man. 2017. Automatic Sarcasm Detection: A Sur- vey. ACM Computing Surveys (CSUR), 50(5):1-22.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "De-mixing sentiment from code-mixed text",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Kumar Lal",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Mrinal",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "371--377",
"other_ids": {
"DOI": [
"10.18653/v1/P19-2052"
]
},
"num": null,
"urls": [],
"raw_text": "Yash Kumar Lal, Vaibhav Kumar, Mrinal Dhar, Manish Shrivastava, and Philipp Koehn. 2019. De-mixing sentiment from code-mixed text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Work- shop, pages 371-377, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "NULI at SemEval-2019 task 6: Transfer learning for offensive language detection using bidirectional transformers",
"authors": [
{
"first": "Ping",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "87--91",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2011"
]
},
"num": null,
"urls": [],
"raw_text": "Ping Liu, Wen Li, and Liang Zou. 2019. NULI at SemEval-2019 task 6: Transfer learning for offen- sive language detection using bidirectional trans- formers. In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 87- 91, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sarcasm Detection on Twitter: A Behavioral Modeling Approach",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Rajadesingan",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the eighth ACM international conference on web search and data mining",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm Detection on Twitter: A Behavioral Modeling Approach. In Proceedings of the eighth ACM international conference on web search and data mining, pages 97-106.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A literature review on question answering techniques, paradigms and systems",
"authors": [
{
"first": "Marco Antonio",
"middle": [],
"last": "Calijorne Soares",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Silva Parreiras",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Antonio Calijorne Soares and Fernando Silva Parreiras. 2018. A literature review on question an- swering techniques, paradigms and systems. Jour- nal of King Saud University-Computer and Informa- tion Sciences.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Humans require context to infer ironic intent (so computers probably do, too)",
"authors": [
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Kertz",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "512--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byron C Wallace, Laura Kertz, Eugene Charniak, et al. 2014. Humans require context to infer ironic in- tent (so computers probably do, too). In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 512-516.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in neural in- formation processing systems, pages 5754-5764.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Tweet sarcasm detection using deep neural network",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2449--2460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet sarcasm detection using deep neural network. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 2449-2460, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Analysis of Twitter data: Number of sentences Vs Percentage of samples 1. Replacing consecutive instances of punctuation marks with only one instance of it.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Analysis of Reddit data: Number of sentences Vs Percentage of samples",
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "Fields used in the training data",
"content": "<table><tr><td>different ways by the research community. Sar-</td></tr><tr><td>casm detection is not wholly a linguistic prob-</td></tr><tr><td>lem but extra-lingual features like author and au-</td></tr><tr><td>dience information, communication environment</td></tr><tr><td>etc., also play a significant role in sarcasm identifi-</td></tr><tr><td>cation</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Fields used in the testing data",
"content": "<table><tr><td colspan=\"2\">Datasets Label</td><td colspan=\"2\">No. of Samples Train Test</td></tr><tr><td>Twitter</td><td>S NS</td><td>2500 2500</td><td>1800</td></tr><tr><td>Reddit</td><td>S NS</td><td>2200 2200</td><td>1800</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"3\">: Twitter Data -Percentage of samples having</td></tr><tr><td colspan=\"3\">certain number of sentences in a conversation</td></tr><tr><td>No of sentences in a conversation</td><td colspan=\"2\">Training Testing</td></tr><tr><td>5 or less</td><td>99.4%</td><td>70%</td></tr><tr><td>7 or less</td><td>99.9%</td><td>93.8%</td></tr><tr><td>10 or less</td><td>100%</td><td>99%</td></tr></table>",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Reddit Data -Percentage of samples having</td></tr><tr><td>certain number of sentences in a conversation</td></tr><tr><td>4. Some punctuation marks like exclamation (!)</td></tr><tr><td>have special significance in English text and</td></tr><tr><td>are generally used to express emotions such as</td></tr><tr><td>sudden surprises, praises, excitement or even</td></tr><tr><td>pain. So, we decided to not remove punctua-</td></tr><tr><td>tion marks.</td></tr></table>",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"text": "Comparison of results for various models for Twitter and Reddit datasets * indicates that the BERT-7 model is not trained as the number of samples in BERT-all model is just one sample more than that in BERT-7 model.",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}