{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:15:53.841884Z"
},
"title": "IIITG-ADBU at SemEval-2020 Task 8: A Multimodal Approach to Detect Offensive, Sarcastic and Humorous Memes",
"authors": [
{
"first": "Arup",
"middle": [],
"last": "Baruah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Guwahati",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Kaushik",
"middle": [
"Amar"
],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Guwahati",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Ferdous",
"middle": [
"Ahmed"
],
"last": "Barbhuiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Guwahati",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": "",
"affiliation": {
"laboratory": "Accenture Technology Labs",
"institution": "",
"location": {
"settlement": "Bangalore"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a multimodal architecture to determine the emotion expressed in a meme. This architecture utilizes both textual and visual information present in a meme. To extract image features we experimented with pre-trained VGG-16 and Inception-V3 classifiers and to extract text features we used LSTM and BERT classifiers. Both FastText and GloVe embeddings were experimented with for the LSTM classifier. The best F1 scores our classifier obtained on the official analysis results are 0.3309, 0.4752, and 0.2897 for Task A, B, and C respectively in the Memotion Analysis task (Task 8) organized as part of International Workshop on Semantic Evaluation 2020 (SemEval 2020). In our study, we found that combining both textual and visual information expressed in a meme improves the performance of the classifier as opposed to using standalone classifiers that use only text or visual data. *",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a multimodal architecture to determine the emotion expressed in a meme. This architecture utilizes both textual and visual information present in a meme. To extract image features we experimented with pre-trained VGG-16 and Inception-V3 classifiers and to extract text features we used LSTM and BERT classifiers. Both FastText and GloVe embeddings were experimented with for the LSTM classifier. The best F1 scores our classifier obtained on the official analysis results are 0.3309, 0.4752, and 0.2897 for Task A, B, and C respectively in the Memotion Analysis task (Task 8) organized as part of International Workshop on Semantic Evaluation 2020 (SemEval 2020). In our study, we found that combining both textual and visual information expressed in a meme improves the performance of the classifier as opposed to using standalone classifiers that use only text or visual data. *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The word meme was first coined by Richard Dawkins to refer \"an idea, behavior, or style that spreads from person to person within a culture\" (Dawkins, 1976) . Internet memes are the result of deliberate modification of an original idea using one's own creativity 1 . It is a type of meme that spreads via the Internet. Memes have become a new way of communication on the Internet. People mostly use it as a way to share jokes. But there are also other classes of memes that are offensive in nature. They spread hatred and racism. This second class of memes is harmful to society. They need to be detected and removed from the Internet. But the scale involved makes manual monitoring difficult.",
"cite_spans": [
{
"start": 141,
"end": 156,
"text": "(Dawkins, 1976)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automated systems can help detect offensive memes. However, the automatic classification of memes has a lot of challenges. It is not textual data or images in isolation. The information provided by both the image and the text needs to utilized to correctly understand the emotion expressed by the meme. The text itself may be very short having only a few words. But the image provides a context and the expressions exhibited by the images supplement the textual information and it is the combinations of both text and image that enable one to understand the message conveyed by the meme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Memotion Analysis (Task 8) organized as part of the International Workshop on Semantic Evaluation 2020 (SemEval 2020) required detecting the emotion expressed by a given meme (Sharma et al., 2020) . This task consisted of three subtasks: Subtask A -Detect the sentiment of a given meme. It was a three-way classification problem with the labels being positive, negative, or neutral, Subtask B -This was a multi-label classification problem where it was required to determine if a given meme is humorous, motivational, offensive or sarcastic. A meme can belong to multiple class, Subtask C: This was a multiclass multi-label classification problem where each class mentioned in subtask B is further sub-divided into four levels. For example, the humorous class is sub-divided as not funny, funny, very funny, and hilarious.",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "(Sharma et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in all the three subtasks. We used the pre-trained convolutional neural networks VGG-16 (Simonyan and Zisserman, 2015) and Inception-V3 (Szegedy et al., 2016) as our image classifiers. For processing text, we used LSTM (Hochreiter and Schmidhuber, 1997) and the pre-trained BERT classifier (Devlin et al., 2019) . The rest of the paper is structured as follows: Section 2 discusses the related works that have used multimodal approaches for detecting emotions expressed by memes, Section 3 describes the dataset used in this shared task, Section 3 discusses the methodology and the architecture of the classifier used by us, Section 5 describes the different experiments we have performed in this task including the questions we are trying to answer through the experiments, and Section 6 discusses the results our classifier obtained on the development set and the test set.",
"cite_spans": [
{
"start": 104,
"end": 134,
"text": "(Simonyan and Zisserman, 2015)",
"ref_id": "BIBREF9"
},
{
"start": 152,
"end": 174,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 235,
"end": 269,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 306,
"end": 327,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extracting the text from the meme is an important step in the multimodal processing of meme. Borisyuk et al. (2018) describes Facebook's OCR system for extracting text from images in realtime. Bruni et al. (2014) talks about integrating both text and image based distributional information to create multimodal distributional semantic vectors. Kumar et al. (2020) used a multimodal approach to determine the sentiment of a meme. Google Lens was used to separate the text from the meme. The sentiment expressed by the text was determined using the combination of a convolutional neural network and the sentiment scores of words from the VADER sentiment lexicon. When determining the sentiment scores of the words, the context in which the word appears was taken into consideration. The image was processed using an SVM classifier trained with Bag of Visual Words features. A Boolean decision system was then used to combine the text and image scores to make the final classification. Hu and Flaxman (2018) also used a multimodal approach to determine the emotion expressed in the Tumblr posts. Inception and LSTM were used to process the image and text respectively. Sabat et al. 2019used a multimodal approach to detect hate memes. BERT was used to process the text and VGG-16 was used to process the image. Both the information was concatenated and the final classification was done using an MLP classifier.",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "Borisyuk et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 193,
"end": 212,
"text": "Bruni et al. (2014)",
"ref_id": "BIBREF1"
},
{
"start": 344,
"end": 363,
"text": "Kumar et al. (2020)",
"ref_id": "BIBREF6"
},
{
"start": 983,
"end": 1004,
"text": "Hu and Flaxman (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The train and test data set provided as part of this task consisted of 6,992 and 1,878 memes respectively. The memes in the train data set were annotated for the following categories: sentiment, humour, motivational, offensive, and sarcasm. Table 1 to 5 shows the statistics of the labels used for each category. The labels used for the sentiment category were very positive (L1), positive (L2), neutral (L3), negative (L4), and very negative (L5). The labels used for the humour category were hilarious (L1), not funny (L2), very funny (L3), and funny (L4). The labels used for the motivational category were not motivational (L1), and motivational (L2). The labels used for the offensive category were not offensive (L1), very offensive (L2), slight (L3), and hateful offensive (L4). The labels used for the sarcasm category were general (L1), not sarcastic (L2), twisted meaning (L3), and very twisted (L4). As can be seen from the tables, the data set was, in general, imbalanced.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "3"
},
{
"text": "In this multimodal approach of detecting the emotion expressed by a meme, we combined both the visual and textual information to perform the classification. The pre-trained image classifiers VGG-16 (Simonyan and Zisserman, 2015) and Inception-v3 (Szegedy et al., 2016) were used to process the image. The textual data was processed using LSTM (Hochreiter and Schmidhuber, 1997) and the pre-trained BERT (Devlin et al., 2019) classifier.",
"cite_spans": [
{
"start": 198,
"end": 228,
"text": "(Simonyan and Zisserman, 2015)",
"ref_id": "BIBREF9"
},
{
"start": 246,
"end": 268,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 343,
"end": 377,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 403,
"end": 424,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "VGG-16 is a convolutional neural network (CNN) based architecture for image classification. It was ranked second in the ImageNet image classification task in 2014 (ILSVRC 2014). It has 13 convolutional layers, 5 Max pooling layers, and 3 Dense layers. Out of these 21 layers, only 16 are weight layers. Inception-V3 is the third version of Google's Inception network. It too is a CNN based architecture. It was the 1 st runner-up in the ImageNet image classification task in 2015. It consists of 42 layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
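The following sketch is not from the paper; it assumes a Keras/TensorFlow implementation, which the authors do not specify. It shows how the two pre-trained backbones described above can be loaded and how the second-last layer sizes used later as image representations can be inspected.

```python
# Hedged sketch: load the two pre-trained image backbones (Keras/TensorFlow assumed)
# and inspect the second-last layer sizes used later as image representations.
from tensorflow.keras.applications import VGG16, InceptionV3
from tensorflow.keras.models import Model

vgg16 = VGG16(weights="imagenet", include_top=True)
inception_v3 = InceptionV3(weights="imagenet", include_top=True)

# Second-last layers: "fc2" for VGG-16, "avg_pool" for Inception-V3.
print(Model(vgg16.input, vgg16.get_layer("fc2").output).output_shape)                     # (None, 4096)
print(Model(inception_v3.input, inception_v3.get_layer("avg_pool").output).output_shape)  # (None, 2048)
```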
{
"text": "3,507 (RNN) . It handles the vanishing and the exploding gradient problem through the use of an input gate, output gate, and forget gate. It is thus able to handle long range dependencies. BERT is a bi-directional model based on the transformer architecture. The transformer architecture is an architecture based solely on attention mechanism (Vaswani et al., 2017) . Figure 1 shows the architecture of the classifier used by us. This architecture is inspired by a work performed for automatic image captioning 2 . As shown in the figure, the text first needs to be extracted from the meme. In this SemEval task, the text was already extracted from the meme by the organizers of the task. The extracted text was provided in the data set along with the memes. The text and the image are then processed independently by our classifier. We used the pre-trained convolutional neural network based classifiers, VGG-16, and Inception-v3, to process the image. For both these pre-trained classifiers, the output of the second-last layer is used as the representation of the image. For VGG-16, the size of the vector produced by the second-last layer is 4096 and in the case of Inception-v3 the size of this vector is 2048. These vectors were then fed to a Dense layer having 256 units. The text was processed using LSTM and the pre-trained BERT classifier. For LSTM, the words in the text were represented using pre-trained GloVe 3 and fastText 4 embeddings. An LSTM of size 256 units was used in our experiment. The hidden states of the intermediate time steps were not used. Only the hidden state of the final step was used as the representation of the text. This vector was merged with the image vector directly. Thus, the intermediate Dense layer that is shown in the diagram is not used in the case of LSTM. The intermediate Dense layer is used only in the case of BERT. For BERT, we used the uncased large version of it 5 . We used the 1024 dimensional vector produced by the Extract layer of BERT as the representation of the text. This vector was then fed to a Dense layer having 256 units. The maximum sequence lengths of 30 and 50 were used for LSTM and BERT respectively.",
"cite_spans": [
{
"start": 343,
"end": 365,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "L1",
"sec_num": null
},
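As an illustration of the two branches described above, here is a minimal sketch assuming a Keras/TensorFlow implementation; the framework, layer names, vocabulary size, and embedding dimension are our assumptions, not stated in the paper. The second-last layer of VGG-16 provides a 4096-dimensional image representation projected to 256 dimensions, and an LSTM with 256 units over frozen GloVe/fastText embeddings provides the text representation.

```python
# Minimal sketch of the image and text branches (Keras/TensorFlow assumed; vocabulary
# size and embedding dimension below are illustrative, not taken from the paper).
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

# Image branch: frozen VGG-16, features taken from the second-last ("fc2") layer (4096-d),
# then projected to 256 dimensions with a Dense layer.
vgg = VGG16(weights="imagenet", include_top=True)
vgg.trainable = False  # pre-trained layers are not retrained
image_encoder = Model(vgg.input, vgg.get_layer("fc2").output)
image_input = Input(shape=(224, 224, 3), name="meme_image")
image_vector = Dense(256, activation="relu")(image_encoder(image_input))

# Text branch: LSTM(256) over pre-trained word embeddings (max sequence length 30);
# only the final hidden state is kept, and it is merged with the image vector directly.
MAX_LEN, VOCAB_SIZE, EMB_DIM = 30, 20000, 300  # illustrative sizes
text_input = Input(shape=(MAX_LEN,), dtype="int32", name="meme_text")
embedded = Embedding(VOCAB_SIZE, EMB_DIM, trainable=False)(text_input)  # load GloVe/fastText weights here
text_vector = LSTM(256)(embedded)
```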
{
"text": "(50%) L2 1,544 (22%) L3 1,547 (22%) L4 394 (6%) Total 6,992",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L1",
"sec_num": null
},
{
"text": "The 256-dimensional vectors produced for the image and text were combined together in a Merge layer. The output of the merge layer is then fed to a Dense layer having 256 units. The output of this Dense layer is then fed to a classification layer. The classification layer consisted of three units for subtask A, a single unit for subtask B and four units for subtask C. During training, we fine-tuned only the Dense and LSTM layers. The pre-trained layers of VGG-16, Inception-v3, and BERT were not retrained. The adam optimizer with the default learning rate of 0.001 was used to train the classifier. The loss function categorical crossentropy was used for subtask A and C and binary crossentropy was used for subtask B. The relu activation function was used for all the Dense layers except the final classification layer. The final classification layer used the softmax activation function for subtask A and C, and the sigmoid activation function for subtask B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Architecture of our Classifier",
"sec_num": "4.1"
},
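A minimal sketch of the fusion and classification head described above, again assuming Keras/TensorFlow; build_head is our own hypothetical helper name. It takes the 256-dimensional image and text vectors as inputs and uses 3 output units for subtask A, 1 for each subtask B class, and 4 for each subtask C class.

```python
# Hedged sketch of the fusion head and training setup (Keras/TensorFlow assumed).
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

def build_head(num_outputs: int) -> Model:
    """num_outputs: 3 for subtask A, 1 per subtask-B class, 4 per subtask-C class."""
    image_vector = Input(shape=(256,), name="image_vector")
    text_vector = Input(shape=(256,), name="text_vector")

    # Merge the two 256-d vectors and pass them through a Dense(256, relu) layer.
    hidden = Dense(256, activation="relu")(Concatenate()([image_vector, text_vector]))

    if num_outputs == 1:  # subtask B: single sigmoid unit with binary cross-entropy
        output = Dense(1, activation="sigmoid")(hidden)
        loss = "binary_crossentropy"
    else:                 # subtasks A and C: softmax with categorical cross-entropy
        output = Dense(num_outputs, activation="softmax")(hidden)
        loss = "categorical_crossentropy"

    model = Model([image_vector, text_vector], output)
    model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])  # Adam, default lr 0.001
    return model
```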
{
"text": "Subtask B and C were multi-label classification problems. For these subtasks, we used the binary relevance approach for classification. We trained a separate classifier for each class (humorous, motivational, offensive, and sarcasm). For subtask B, the classifiers were binary classifiers and for subtask C the classifiers were multi-class classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Architecture of our Classifier",
"sec_num": "4.1"
},
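The binary relevance setup can then be sketched as one independent classifier per class, reusing the hypothetical build_head helper from the previous sketch.

```python
# Hedged sketch of binary relevance: one independent classifier per class,
# reusing the hypothetical build_head helper from the previous sketch.
CLASSES = ["humorous", "motivational", "offensive", "sarcastic"]

subtask_b_models = {cls: build_head(num_outputs=1) for cls in CLASSES}  # binary per class
subtask_c_models = {cls: build_head(num_outputs=4) for cls in CLASSES}  # four levels per class
```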
{
"text": "We performed the following nine types of experiments for this task: (1) Only VGG-16, (2) Only Inception-v3, (3) Only LSTM with fastText embeddings, (4) Only BERT, (5) VGG-16 for processing image and LSTM with GloVe embeddings for processing text, (6) VGG-16 for processing image and LSTM with fastText embeddings for processing text, (7) Inception-v3 for processing image and LSTM with GloVe embeddings for processing text, (8) Inception-v3 for processing image and LSTM with fastText embeddings for processing text, and (9) Inception-v3 for processing image and BERT for processing text. The first four experiments used standalone classifiers (meaning classifiers that used only textual or visual data). The next five experiments used multimodal classifiers (classifiers that combined both textual and visual data). The reason for conducting these experiments was to find an answer to the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "1. Does combining both textual and visual data improve performance compared to using only textual or visual data?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "2. Among standalone classifiers, is image based classifier better than text based classifier?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "3. Among multimodal classifiers, which combination provides the best performance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "6 Results Table 6 shows the results obtained by our classifiers on the development set. The development set was created by doing a stratified split on the train data set. 20% of the train data set was used as the development set. The table shows the F1 score obtained by the classifiers for the subtasks Task A (A), Task B Humour (B-H), Task B Motivational (B-M), Task B Offensive (B-O), Task B Sarcasm (B-S), Task C Humour (C-H), Task C Motivational (C-M), Task C Offensive (C-O), and Task C Sarcasm (C-S). The Motivational class had only two unique labels (motivational, not-motivational) as opposed to the other three classes (Humour, Offensive, and Sarcasm) which had four unique labels. Thus, Task B Motivational and Task C Motivational reduced to be the same task. For this reason, only one column is shown in the table for these two subtasks. As can be seen from the table, except for Task B Offensive subtask, all the other subtasks have benefited from combining both the image and text data. In the case of Task B Offensive subtask, the best F1 score was obtained when using only the fastText based LSTM classifier. Compared to the best performing standalone classifier, the best performing multimodal classifiers obtained gain of 1.44%, 4.41%, 3.87%, 4.91%, 2.18%, 2.18%, 0.89%, and 1.62% in the F1 score for subtask A, B-H, B-M, B-S, C-H, C-O, and C-S respectively. We can thus say that detecting emotion in memes benefits from combining both textual and visual data as compared to using standalone classifiers.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
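For reference, the 80/20 stratified development split described above could be reproduced along these lines. This is a sketch assuming a pandas DataFrame loaded from a hypothetical file with a "sentiment" label column; the file name and column name are ours, not the task's.

```python
# Hedged sketch of the stratified 80/20 train/development split (file and column
# names are hypothetical placeholders, not the official ones).
import pandas as pd
from sklearn.model_selection import train_test_split

train_df = pd.read_csv("memotion_train.csv")  # hypothetical path/format
train_part, dev_part = train_test_split(
    train_df,
    test_size=0.20,                    # 20% held out as the development set
    stratify=train_df["sentiment"],    # stratify on the label column
    random_state=42,
)
```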
{
"text": "Among the standalone classifiers, the classifier that used only visual data performed better for subtasks A, B-M, C-O, and C-S. Whereas, the classifier that used only text data performed better in subtasks B-H, B-O, and C-H. Thus, there was no clear winner among the standalone classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Among the multimodal classifiers, VGG16+LSTM(FastText) performed the best for subtasks A, B-O, and C-H; InceptionV3+LSTM(FastText) performed the best for subtasks B-H, and B-M; Incep-tionV3+LSTM(GloVe) performed the best for subtasks B-S and C-S; and VGG16+LSTM(GloVe) performed the best for subtask C-O. Table 7 shows the official results on the test set. The five runs that were submitted for analysis are InceptionV3+LSTM(GloVe) (Run1), VGG16+LSTM(GloVe) (Run2), VGG16+LSTM(FastText) (Run3), InceptionV3+LSTM(FastText) (Run4), and InceptionV3+BERT (Run5). As can be seen from table 7, InceptionV3+BERT which did not perform well on the development set, produced the best F1 scores for Task A and Task B. VGG16+LSTM(FastText) produced the best result for Task C. The predictions from InceptionV3+LSTM(GloVe) was our final submission for Task A and this submission was ranked 30. The predictions from InceptionV3+LSTM(FastText) were our final submission for Task B and C, and these submissions obtained the rank of 26 and 23 for the two tasks respectively. ",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Memes are becoming a very popular means of communication in social media. Some of these memes are offensive in nature and they are used to spread hatred. Thus, it is very important to develop systems to automatically determine the emotion expressed by memes. Developing such systems requires combining both textual and visual information present in the meme. In this paper, we presented a multimodal architecture that combines both textual and visual data to determine the emotion expressed by the meme. This classifier obtained F1 scores of 0.3309, 0.4752, and 0.2897 for task A, B, and C respectively. In our study, we found that combining both textual and visual data improves the performance of the classifiers than using standalone classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://towardsdatascience.com/image-captioning-with-keras-teaching-computers-to-describe-pictures-c88a46a311b8 3 https://nlp.stanford.edu/projects/glove/ 4 https://fasttext.cc/docs/en/english-vectors.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google-research/bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rosetta: Large scale system for text detection and recognition in images",
"authors": [
{
"first": "Fedor",
"middle": [],
"last": "Borisyuk",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gordo",
"suffix": ""
},
{
"first": "Viswanath",
"middle": [],
"last": "Sivakumar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fedor Borisyuk, Albert Gordo, and Viswanath Sivakumar. 2018. Rosetta: Large scale system for text detection and recognition in images. In Yike Guo and Faisal Farooq, editors, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 71-79. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Artif. Intell. Res",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res., 49:1-47.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Selfish Gene",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Dawkins",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Dawkins. 1976. The Selfish Gene. Oxford University Press, Oxford, UK.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multimodal sentiment analysis to explore the structure of emotions",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Seth",
"middle": [
"R"
],
"last": "Flaxman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "350--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Hu and Seth R. Flaxman. 2018. Multimodal sentiment analysis to explore the structure of emotions. In Yike Guo and Faisal Farooq, editors, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 350-358. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data",
"authors": [
{
"first": "Akshi",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Kathiravan",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Wen-Huang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Albert",
"middle": [
"Y"
],
"last": "Zomaya",
"suffix": ""
}
],
"year": 2020,
"venue": "Inf. Process. Manag",
"volume": "57",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akshi Kumar, Kathiravan Srinivasan, Wen-Huang Cheng, and Albert Y. Zomaya. 2020. Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data. Inf. Process. Manag., 57(1).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hate speech in pixels: Detection of offensive memes towards automatic moderation",
"authors": [
{
"first": "Benet",
"middle": [
"Oriol"
],
"last": "Sabat",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Canton-Ferrer",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Gir\u00f3-i-Nieto",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benet Oriol Sabat, Cristian Canton-Ferrer, and Xavier Gir\u00f3-i-Nieto. 2019. Hate speech in pixels: Detection of offensive memes towards automatic moderation. CoRR, abs/1910.02334.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 Task 8: Memotion Analysis-The Visuo-Lingual Metaphor! In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"authors": [
{
"first": "Chhavi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Deepesh",
"middle": [],
"last": "Bhageria",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Scott Paka",
"suffix": ""
},
{
"first": "P Y K L",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Viswanath",
"middle": [],
"last": "Pulabaigari",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chhavi Sharma, Deepesh Bhageria, William Scott Paka, Srinivas P Y K L, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 Task 8: Memotion Analysis-The Visuo- Lingual Metaphor! In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, Dec. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recogni- tion. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Rethinking the inception architecture for computer vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "2818--2826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2818-2826. IEEE Computer Society.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan. N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Architecture of our classifier",
"num": null,
"uris": null
},
"TABREF0": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>L1</td><td>2,713</td></tr><tr><td/><td>(39%)</td></tr><tr><td>L2</td><td>1,466</td></tr><tr><td/><td>(21%)</td></tr><tr><td>L3</td><td>2,592</td></tr><tr><td/><td>(37%)</td></tr><tr><td>L4</td><td>221</td></tr><tr><td/><td>(3%)</td></tr><tr><td colspan=\"2\">Total 6,992</td></tr><tr><td>: Sarcasm</td><td/></tr></table>"
},
"TABREF1": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>L1</td><td>1,033</td></tr><tr><td/><td>(15%)</td></tr><tr><td>L2</td><td>3,127</td></tr><tr><td/><td>(45%)</td></tr><tr><td>L3</td><td>2,201</td></tr><tr><td/><td>(31%)</td></tr><tr><td>L4</td><td>480</td></tr><tr><td/><td>(7%)</td></tr><tr><td>L5</td><td>151</td></tr><tr><td/><td>(2%)</td></tr><tr><td colspan=\"2\">Total 6,992</td></tr><tr><td>: Offensive</td><td/></tr></table>"
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>L1</td><td>651</td></tr><tr><td/><td>(9%)</td></tr><tr><td>L2</td><td>1,651</td></tr><tr><td/><td>(24%)</td></tr><tr><td>L3</td><td>2,238</td></tr><tr><td/><td>(32%)</td></tr><tr><td>L4</td><td>2,452</td></tr><tr><td/><td>(35%)</td></tr><tr><td colspan=\"2\">Total 6,992</td></tr><tr><td>: Sentiment</td><td/></tr></table>"
},
"TABREF3": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>L1</td><td>4,525</td></tr><tr><td/><td>(65%)</td></tr><tr><td>L2</td><td>2,467</td></tr><tr><td/><td>(35%)</td></tr><tr><td colspan=\"2\">Total 6,992</td></tr><tr><td>: Humour</td><td/></tr></table>"
},
"TABREF4": {
"text": "MotivationalLSTM is a type of recurrent neural network",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"text": "Macro F1 scores on development set",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Task</td><td colspan=\"2\">Run 1 Run 2 Run 3 Run 4 Run 5 Baseline Best</td><td>Rank</td></tr><tr><td colspan=\"2\">Task A 0.3078 0.2690 0.2759 0.2471 0.3309 0.2176</td><td colspan=\"2\">0.3546 30 (for Run 1)</td></tr><tr><td colspan=\"2\">Task B 0.4646 0.4619 0.4666 0.4650 0.4752 0.5002</td><td colspan=\"2\">0.5183 26 (for Run 4)</td></tr><tr><td colspan=\"2\">Task C 0.2616 0.2763 0.2897 0.2850 0.2839 0.3008</td><td colspan=\"2\">0.3224 23 (for Run 4)</td></tr></table>"
},
"TABREF6": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}