{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:12:40.979963Z"
},
"title": "Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance",
"authors": [
{
"first": "Syrielle",
"middle": [],
"last": "Montariol",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "INRIA Paris",
"location": {
"postCode": "F-75012",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
},
{
"first": "\u00c9tienne",
"middle": [],
"last": "Simon",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Arij",
"middle": [],
"last": "Riabi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "INRIA Paris",
"location": {
"postCode": "F-75012",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "INRIA Paris",
"location": {
"postCode": "F-75012",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose our solution to the multimodal semantic role labeling task from the CON-STRAINT'22 workshop. The task aims at classifying entities in memes into classes such as \"hero\" and \"villain\". We use several pre-trained multi-modal models to jointly encode the text and image of the memes, and implement three systems to classify the role of the entities. We propose dynamic sampling strategies to tackle the issue of class imbalance. Finally, we perform qualitative analysis on the representations of the entities.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose our solution to the multimodal semantic role labeling task from the CON-STRAINT'22 workshop. The task aims at classifying entities in memes into classes such as \"hero\" and \"villain\". We use several pre-trained multi-modal models to jointly encode the text and image of the memes, and implement three systems to classify the role of the entities. We propose dynamic sampling strategies to tackle the issue of class imbalance. Finally, we perform qualitative analysis on the representations of the entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media memes can be defined as \"pieces of culture, typically jokes, which gain influence through online transmission\" (Davison, 2012) . More specifically, memes are visual templates usually associated with a textual caption. Analysing memes involves many unique challenges that differ from classical multimodal tasks such as image captioning and visual question answering. While unimodal models can often perform well on multimodal datasets (Agrawal et al., 2018) , memes involve a lot of entanglement -stylistic or semantic -between the two modalities, such as the caption contradicting the image. This makes memes intrinsically multimodal. Furthermore, pragmatics -the context's contribution to meaning -plays a key role in the interpretation of memes. In particular, phenomenons such as irony are challenging to detect. Even human annotators have difficulties in interpreting a meme correctly without knowledge of the community in which the meme was shared.",
"cite_spans": [
{
"start": 124,
"end": 139,
"text": "(Davison, 2012)",
"ref_id": "BIBREF8"
},
{
"start": 447,
"end": 469,
"text": "(Agrawal et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we tackle the shared task on multimodal semantic role labeling of the workshop (Sharma et al., 2022) . Given a (meme, entity) pair, 1 the goal is to classify the entity's role in the meme into one of four classes (hero, villain, victim or other) from the perspective of the author of the meme. The multimodality of the problem stems from the meme, which is given as an (image, OCR) pair, where OCR (for Optical Character Recognition) is the caption extracted from the image. The dataset covers one language, English, and two domains, COVID-19 and US politics. Figure 1 shows a sample from the training set.",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Sharma et al., 2022)",
"ref_id": "BIBREF27"
},
{
"start": 228,
"end": 260,
"text": "(hero, villain, victim or other)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 575,
"end": 583,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Understanding memes involves a lot of commonsense and cultural knowledge on the political stance of the entities. Thus, it requires models pre-trained on a large amount of data, capable of recognising key entities such as political figures in both modalities, and of inferring their relationship, their role and the public opinion of a community on them. To evaluate the task's difficulty, we manually annotate a set of samples. With 5 annotators, we reach an average Macro-F 1 of 0.65 (see details in Appendix A), less than 10 points above the best system submitted to the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose systems relying on several multimodal (vision-language) pre-trained models: One For All (OFA, Wang et al., 2022) , CLIP (Radford et al., 2021) and VisualBERT (Li et al., 2019) . We use these models as encoders to extract multimodal meme representations. These encoders are introduced in Section 3. We then design several neural network classifiers to handle these representations in a task-specific fashion. These classifiers are presented in Section 4.1.",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "(OFA, Wang et al., 2022)",
"ref_id": null
},
{
"start": 131,
"end": 153,
"text": "(Radford et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 169,
"end": 186,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The CONSTRAINT'22 dataset is characterised by a large class imbalance, with the most frequent class gathering 78% of the samples in the train set, while the least frequent one is conveyed by less than 3% of the samples. However, the challenge is evaluated using a Macro-F 1 metric and calls for balanced performances across all classes. To handle this discrepancy, we developed several sub- Figure 1 : In this meme, the OCR is: \"WEARS A MASK THE SAME WAY\\nEXIT\\nHE HANDLES THE\\nPANDEMIC \\nmakeameme.org\\n\". There are two entities, \"Donald trump\" labeled as villain and \"mask\" labeled as other.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our best results are obtained by ensembling predictions from all of our models, using various ensembling methods. The details of the ensembling methods are given in Section 4.3. Finally, we present our performance in Section 5 along with a qualitative analysis of our models. We highlight the limitations of the dataset, task and methods in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarise, our whole architecture is built on freely available pre-trained models. We only fine-tune these models for the multimodal semantic role labeling task. This makes computational training cost particularly low. Our system can be characterised by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Simple classifier design on top of deep pretrained model. \u2022 Handling of class imbalance through carefully-designed sampling strategies. Our code is available at: https://github. com/smontariol/mmsrl_constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multimodal semantic role detection in memes is a relatively unique task, compared to other languageimage multimodal task such as object classification and entity action detection, it requires a lot more contextual and cultural background. In this section, we list some related problems before introducing tools to tackle the task at hand in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In recent years, social media platforms have seen a wave of multimodal data in diverse media types. This attracted the interest of researchers to combine modalities to solve various tasks with joint representations, where the model's encoder takes all the modalities as input, or separated representations, where all modalities are encoded separately (Baltru\u0161aitis et al., 2018) .",
"cite_spans": [
{
"start": 351,
"end": 378,
"text": "(Baltru\u0161aitis et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the CONSTRAINT'22 challenge, we tackle multimodal semantic role labeling (SRL). SRL is originally a Natural Language Processing (NLP) task which consists in labeling words in a sentence with different semantics roles to determine Who did What to Whom, When and Where (Gildea and Jurafsky, 2002; Carreras and M\u00e0rquez, 2005) ; these roles are also known as thematic relations. It was extended to the computer vision domain through Visual SRL. Visual SRL benchmarks focus on situation recognition in images (Silberer and Pinkal, 2018; Pratt et al., 2020) ; these tasks heavily rely on object detection systems for visual groundings (Yang et al., 2019) . This differs from the methods we need to implement for the shared task, where the entities do not necessarily appear in the image. Moreover, in our case, the semantic role is taken from the point of view of a political argumentative: the perception of the entity by the author of the meme. This involves completely different features compared to labeling the thematic relations of the entity; in particular, cultural and contextual knowledge on the background of the meme.",
"cite_spans": [
{
"start": 270,
"end": 297,
"text": "(Gildea and Jurafsky, 2002;",
"ref_id": "BIBREF13"
},
{
"start": 298,
"end": 325,
"text": "Carreras and M\u00e0rquez, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 507,
"end": 534,
"text": "(Silberer and Pinkal, 2018;",
"ref_id": "BIBREF28"
},
{
"start": 535,
"end": 554,
"text": "Pratt et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 632,
"end": 651,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another similar task is multimodal named entity recognition, which aims at identifying and classifying named entities in texts and images. It requires more in-domain knowledge compared to multimodal SRL; but most multimodal NER datasets are text-centric, with the image being an additional feature for the text-based prediction (Arshad et al., 2019; Chen et al., 2021) , while our task is more symmetrical or even image-centric. Finally, many shared task on memes have been proposed in recent years, with a large variety of tasks: emotion classification (e.g. MEMOTION task at SemEval 2020 Sharma et al., 2020) ; hateful meme detection (e.g. the Hateful Meme Challenge Kiela et al., 2020 ) event clustering (e.g. DANKMEMES at EVALITA 2020 (Miliani et al., 2020) ); more fine-grained hateful content analysis (Fine-Grained Hateful Memes Detection Mathias et al., 2021, aiming at classifying the target attacked by the meme and the type of attack); or and detection of persuasion techniques (e.g. Semeval 2021 Task 6, .",
"cite_spans": [
{
"start": 328,
"end": 349,
"text": "(Arshad et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 350,
"end": 368,
"text": "Chen et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 590,
"end": 610,
"text": "Sharma et al., 2020)",
"ref_id": null
},
{
"start": 669,
"end": 687,
"text": "Kiela et al., 2020",
"ref_id": null
},
{
"start": 739,
"end": 761,
"text": "(Miliani et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since we experiment with deep neural networks, we need to obtain distributed representations of our inputs. To this end, we use pre-trained mod-els with good performances on popular datasets. These models are multimodal transformers, that we use to encode image and caption's OCR into a common latent space. While transformers were originally developed for natural language processing (Vaswani et al., 2017; Devlin et al., 2019) , they subsequently became ubiquitous in computer vision models as well (Dosovitskiy et al., 2021) . To process an image, it is first cut into a sequence of P \u00d7 P \u00d7 C patches. These patches are then projected into the transformer input dimension, either using a single linear layer, or using a full-fledged CNN architecture.",
"cite_spans": [
{
"start": 385,
"end": 407,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF31"
},
{
"start": 408,
"end": 428,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 501,
"end": 527,
"text": "(Dosovitskiy et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Encoding",
"sec_num": "3"
},
{
"text": "The output of a transformer has the same length as its input. We call this length N ; it is the number of patches in the image, the number of tokens in the OCR, or the sum of the two for multimodal transformers. Thereafter, we refer to an encoded meme image i and OCR o as enc full (o, i) \u2208 R N \u00d7d . This output can be further pooled into a fixed-size representation enc pool (o, i) \u2208 R d . We now describe what models are behind these encoder functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Encoding",
"sec_num": "3"
},
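The dimensions above can be made concrete with a small sketch. The wrapper below is illustrative only (the function names are our own, not from the shared-task code) and assumes a PyTorch encoder returning a sequence of d-dimensional vectors.

```python
import torch

def enc_full(sequence_output: torch.Tensor) -> torch.Tensor:
    # sequence_output: (N, d) -- one vector per image patch or OCR token.
    return sequence_output

def enc_pool(sequence_output: torch.Tensor) -> torch.Tensor:
    # Mean-pool along the sequence axis to get a fixed-size (d,) representation.
    # (Real models may instead use a [CLS] token or a projection head.)
    return sequence_output.mean(dim=0)

# Toy example: a meme encoded as N = 60 patches/tokens of dimension d = 768.
x = torch.randn(60, 768)
print(enc_full(x).shape)   # torch.Size([60, 768])  -> R^(N x d)
print(enc_pool(x).shape)   # torch.Size([768])      -> R^d
```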
{
"text": "The multi-modal features are extracted from the caption's OCR and the meme image using two vision-language models, CLIP and VisualBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIP and VisualBERT",
"sec_num": "3.1"
},
{
"text": "CLIP (Contrastive Language-Image Pretraining, Radford et al., 2021) is trained using text as supervision to encode images, with 400 million image-text pairs available on the internet. The training task is to predict which text is associated with an image, from all text snippets of the batch, using a contrastive objective instead of a predictive one for computational efficiency. CLIP trains an image encoder and a text encoder jointly, maximizing the cosine similarity of the image and text embeddings in the joint representation space for positive pairs, and minimizing similarity of negative pairs. The strength of this task is to offer large robustness and zero-shot capability to the model, to transfer to many classification tasks. Image encoding is done using a variation of the Vision Transformer (ViT, Dosovitskiy et al., 2021) . Text encoding is done using a GPT-like language model (Radford et al., 2019) . 2 Similar to CLIP, we use a VisualBERT model (Li et al., 2019) trained on visual commonsense reasoning and image captioning. VisualBERT uses self-attention to align parts of the text with regions of the image and build a joint representation. It mostly differs from CLIP in its training procedure in three phases: task-agnostic pre-training, taskspecific pre-training, and task-specific fine-tuning. Moreover, VisualBERT does not include an image encoder; the patch features are extracted beforehand with pre-trained image classification and segmentation models. We extract features using FasterRCNN (Ren et al., 2015) , EfficientNet (Tan and Le, 2019) and VGG (Simonyan and Zisserman, 2015) . Bucur et al. (2022) showed that EfficientNet features prove useful for sentiment and emotion analyses of meme, while Pramanick et al. (2021) prove the efficiency of VGG for detecting harmful memes and identifying their target.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "Radford et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 812,
"end": 837,
"text": "Dosovitskiy et al., 2021)",
"ref_id": null
},
{
"start": 894,
"end": 916,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 919,
"end": 920,
"text": "2",
"ref_id": null
},
{
"start": 964,
"end": 981,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 1508,
"end": 1537,
"text": "FasterRCNN (Ren et al., 2015)",
"ref_id": null
},
{
"start": 1553,
"end": 1571,
"text": "(Tan and Le, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 1580,
"end": 1610,
"text": "(Simonyan and Zisserman, 2015)",
"ref_id": "BIBREF29"
},
{
"start": 1613,
"end": 1632,
"text": "Bucur et al. (2022)",
"ref_id": "BIBREF3"
},
{
"start": 1730,
"end": 1753,
"text": "Pramanick et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIP and VisualBERT",
"sec_num": "3.1"
},
{
"text": "The output of both CLIP and VisualBERT can either be pooled (enc pool ) or be used as-is (enc full ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIP and VisualBERT",
"sec_num": "3.1"
},
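As a minimal sketch, pooled CLIP features could be obtained with the Hugging Face transformers library as below; the checkpoint name, the image path and the use of the projected text/image embeddings as enc_pool are our assumptions, not necessarily the authors' exact setup.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; the paper uses the largest L/14 CLIP-ViT variant.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
model.eval()

image = Image.open("meme.png")          # hypothetical meme image path
ocr = "WEARS A MASK THE SAME WAY EXIT HE HANDLES THE PANDEMIC"

with torch.no_grad():
    inputs = processor(text=[ocr], images=image, return_tensors="pt", padding=True)
    # Projected (pooled) embeddings, one vector per modality: they play the
    # role of enc_pool in our notation.
    text_pool = model.get_text_features(input_ids=inputs["input_ids"],
                                        attention_mask=inputs["attention_mask"])
    image_pool = model.get_image_features(pixel_values=inputs["pixel_values"])

meme_pool = torch.cat([text_pool, image_pool], dim=-1)  # fed to the classifier
```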
{
"text": "A second method we experiment with to obtain a distributed representation of text and images is OFA (One For All, Wang et al., 2022) . OFA is based on an encoder-decoder architecture pretrained on several visual, textual and cross-modal tasks. A key point of OFA is to leverage a diverse set of training tasks to obtain good zero-shot performances. Despite this claim, we did not obtain satisfactory zero-shot results. We hypothesize that this is due to the noisy OCR and to the nature of meme role labeling which is radically different from what OFA was pre-trained on.",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "Wang et al., 2022)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OFA",
"sec_num": "3.2"
},
{
"text": "All tasks are expressed as sequence-to-sequence problems, such that a single OFA model can be used without the need of task-specific layers. For example, one of the pretraining task is image captioning; for this task, the model is trained to predict the caption given the image and the text \"What does the image describe?\" as inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OFA",
"sec_num": "3.2"
},
{
"text": "The input image and text are fed jointly to the encoding transformer using modality-specific positional embeddings. The image representation is built from 16 \u00d7 16 patches embedded by a ResNet (He et al., 2016) . The decoding transformer is trained as a causal language model conditioned on the encoder's output with a standard cross-entropy loss. When the output is constrained on a small number of classes, the model is trained and evaluated on the task's output domain, not on the whole output vocabulary.",
"cite_spans": [
{
"start": 192,
"end": 209,
"text": "(He et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OFA",
"sec_num": "3.2"
},
{
"text": "For the meme role labeling task, we feed OFA with the image as well as the following instruction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OFA",
"sec_num": "3.2"
},
{
"text": "\"What is the category of ENTITY between hero, villain and victim? OCR\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OFA",
"sec_num": "3.2"
},
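A minimal sketch of how such an instruction could be assembled for each (meme, entity) pair is shown below; the helper name and the exact string formatting are illustrative assumptions, while the template itself follows the instruction quoted above.

```python
def build_ofa_prompt(entity: str, ocr: str) -> str:
    # Illustrative helper (not from the official code): plug the entity and the
    # meme's OCR into the instruction fed to OFA together with the image.
    return (f"What is the category of {entity} between hero, villain and victim? "
            f"{ocr}")

prompt = build_ofa_prompt("Donald trump",
                          "WEARS A MASK THE SAME WAY EXIT HE HANDLES THE PANDEMIC")
print(prompt)
```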
{
"text": "As we detail in the next Section 4, we train OFA either as a sequence to sequence problem (resulting in a pair of models enc OFA -dec OFA ) or by adding a classification head on top of the decoder (which can be used as a standard enc pool ). 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OFA",
"sec_num": "3.2"
},
{
"text": "We now describe how we use the encoded text and images for semantic role labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We experiment with three different methods to classify a (meme, entity) pair, depending on what kind of representation we get from the encoder. The representation of the meme is composed of the image's representation along with the encoded caption's OCR, and any extra features such as the list of entities related to the meme. For ease of notation, we group under \"OCR\" all extra features which were extracted from the meme, and we refer to them using a single variable o = (OCR, caption, . . . ). Image features are referred to by i and the encoded list of entities by e. All classifiers are illustrated in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 609,
"end": 617,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "Multilayer perceptron (MLP) When the output of the encoder is of fixed size, we use a 2-layers MLP classifier. The input of the classifier is made from the encoding of the OCR, image and entity. The representation of the entity is obtained using the same transformer used to process the OCR. The output of the model is a softmax on the four possible roles:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "P (r | o, i, e) \u221d exp MLP enc pool (o, i) enc pool (e) r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "This model is trained using a standard crossentropy loss. Depending on the encoder, we either train the MLP alone, or the MLP and the encoder jointly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
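As an illustration, here is a minimal PyTorch sketch of such a 2-layer MLP head over concatenated pooled features, trained with cross-entropy; the dimensions and names are our own assumptions, not the shared-task implementation.

```python
import torch
import torch.nn as nn

class MLPRoleClassifier(nn.Module):
    # Hypothetical head: takes [enc_pool(o, i); enc_pool(e)] and outputs 4 role logits.
    def __init__(self, meme_dim: int, entity_dim: int, hidden: int = 512, n_roles: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(meme_dim + entity_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_roles),
        )

    def forward(self, meme_pool: torch.Tensor, entity_pool: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([meme_pool, entity_pool], dim=-1))  # logits

# Toy training step with frozen encoder outputs.
clf = MLPRoleClassifier(meme_dim=1536, entity_dim=768)
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(clf.parameters(), lr=1e-4)

meme_pool = torch.randn(8, 1536)     # batch of pooled meme representations
entity_pool = torch.randn(8, 768)    # pooled entity representations
labels = torch.randint(0, 4, (8,))   # hero / villain / victim / other

logits = clf(meme_pool, entity_pool)
loss = loss_fn(logits, labels)       # softmax + cross-entropy over the 4 roles
loss.backward()
optim.step()
```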
{
"text": "Attention When the representations of the OCR and image are not pooled along the sequence's length, we use an attention mechanism. In this case, the query of the attention is the entity we wish to classify, while the memory is built from a concatenation of the image and OCR encoded by CLIP or VisualBERT:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "\u03b1 j \u221d exp enc pool (e) T W k enc full (o, i) j , a = ReLU \uf8eb \uf8ed j \u03b1 j W v enc full (o, i) j \uf8f6 \uf8f8 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "where W k and W v are parameters used to project the encoded meme for use as attention key and value. We classify the attention output a, using a softmax layer P (r | o, i, e) \u221d exp(W p a) r . Since the encoders already use positional embeddings, we do not add this information to our classifier's attention. However, we do use segment embeddings to distinguish the vectors encoding the image, OCR or entity list in the encoder's output. We use different MLP layers depending on whether a vector correspond to an input image, OCR or entity list. This model is also trained by minimizing the cross-entropy with gold labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
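A minimal sketch of this entity-conditioned attention head in PyTorch, following the equations above; the dimensions and the module name are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityAttentionClassifier(nn.Module):
    # Hypothetical head: the pooled entity is the query, the unpooled meme
    # representation enc_full(o, i) provides keys and values.
    def __init__(self, d: int = 768, n_roles: int = 4):
        super().__init__()
        self.w_k = nn.Linear(d, d, bias=False)   # W_k
        self.w_v = nn.Linear(d, d, bias=False)   # W_v
        self.w_p = nn.Linear(d, n_roles)         # W_p (softmax projection)

    def forward(self, entity_pool: torch.Tensor, meme_full: torch.Tensor) -> torch.Tensor:
        # entity_pool: (d,), meme_full: (N, d)
        scores = self.w_k(meme_full) @ entity_pool            # (N,) attention scores
        alpha = F.softmax(scores, dim=0)                       # weights alpha_j
        a = F.relu((alpha.unsqueeze(-1) * self.w_v(meme_full)).sum(dim=0))
        return self.w_p(a)                                     # role logits

clf = EntityAttentionClassifier()
logits = clf(torch.randn(768), torch.randn(60, 768))
print(logits.shape)  # torch.Size([4])
```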
{
"text": "Seq2seq When using an OFA encoder, we also attempt to stay in the sequence to sequence framework and train the model to generate the class labels. In this case, if we denote the label's tokens by \u2113, the model is trained to maximize the likelihood that the meme (o, i) has the gold target \u2113:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "P (\u2113 k | \u2113 <k , o, i) \u221d dec ofa (enc ofa (o, i), \u2113 <k ) \u2113 k ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "where \u2113 <k = [\u2113 1 , \u2113 2 , . . . , \u2113 k\u22121 ] T refers to the list of previous tokens. To evaluate this model, the loglikelihood of the possible labels are summed along sequence length:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "r = arg max r P (r | o, i) \u221d k P (\u2113 (r) k | \u2113 (r) <k , o, i),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "where \u2113 (r) designates the list of tokens for the label r, such as [vil, lain] T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
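A minimal sketch of this label-scoring step: each candidate role is tokenized, the decoder's log-probabilities for its tokens are summed, and the highest-scoring role is kept. The tokenizations and the data layout below stand in for the OFA decoder output and are assumptions, not the official implementation.

```python
ROLE_TOKENS = {                      # hypothetical tokenizations of the labels
    "hero": ["he", "ro"],
    "villain": ["vil", "lain"],
    "victim": ["vic", "tim"],
    "other": ["other"],
}

def sequence_log_likelihood(step_log_probs, tokens):
    # step_log_probs: one dict per decoding step mapping token -> log P(token | previous tokens, o, i),
    # as produced by a teacher-forced pass of dec_OFA on this label's tokens.
    return sum(step[tok] for step, tok in zip(step_log_probs, tokens))

def predict_role(decoder_log_probs):
    # decoder_log_probs: role -> list of per-step token log-probabilities.
    scores = {role: sequence_log_likelihood(decoder_log_probs[role], toks)
              for role, toks in ROLE_TOKENS.items()}
    return max(scores, key=scores.get)

# Toy example with made-up numbers: "villain" gets the highest summed score.
fake_log_probs = {
    "hero": [{"he": -2.0}, {"ro": -1.5}],
    "villain": [{"vil": -0.3}, {"lain": -0.2}],
    "victim": [{"vic": -1.1}, {"tim": -0.9}],
    "other": [{"other": -1.0}],
}
print(predict_role(fake_log_probs))  # villain
```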
{
"text": "Additional features As explained in Section 2, our task is quite different from most multimodal tasks on which the encoders were trained; it is much more abstract and requires a lot of additional background knowledge. Thus, when using CLIP and VisualBERT, we add supplementary features as input to the classification model (MLP and attention).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "We add as textual features the list of entities associated with the meme, this list is directly available in the dataset. We encode the entities' names using the same encoder as the system (CLIP or Vi-sualBERT). 4 We also add to the system the image features that were extracted using VGG, Efficient-NET and FRCNN.",
"cite_spans": [
{
"start": 212,
"end": 213,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "enc OFA dec OFA \u2113 1 dec OFA \u2113 2 dec OFA \u2113 3 dec OFA \u2113 i \u2022 \u2022 \u2022 \u2022 \u2022 \u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.1"
},
{
"text": "The dataset faces a large class imbalance, with the class other being over-represented (78% in the train set) and classes hero and victim consisting of only 2.7% and 5.2% of the train set respectively. Thus, training on the raw dataset might lead to overfitting and over-predicting the majority class. Moreover, recall that the evaluation metric is Macro-F 1 , which weighs each class equally; hence the importance of solving the class imbalance issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "Our first solution was to weight labels in the loss. This loss penalisation led to poor performances; we suspect this is due to the working of the optimization algorithm we used. Adam and its variants estimate the distribution of the gradients using exponential moving averages; these estimates are faulty when the magnitude of the loss changes often.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
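For reference, a minimal sketch of the class-weighted cross-entropy baseline described above, with weights inversely proportional to class frequency; the exact weighting scheme used by the authors is not specified, so this is only an assumption, and the class proportions are approximated from the figures reported in the paper.

```python
import torch
import torch.nn as nn

# Approximate train-set class proportions from the paper (other 78%, victim 5.2%,
# hero 2.7%; the remainder is assumed to be villain). Order: hero, villain, victim, other.
freq = torch.tensor([0.027, 0.141, 0.052, 0.780])
weights = 1.0 / freq
weights = weights / weights.sum()          # normalise the per-class weights

loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)                 # classifier outputs for a batch
labels = torch.randint(0, 4, (8,))
loss = loss_fn(logits, labels)             # rare classes contribute more to the loss
```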
{
"text": "A common strategy is over-sampling the lowfrequency classes and under-sampling the highfrequency ones. Each (meme, entity) pair is dropped with a pre-defined probability, following various class sampling strategies. We evaluated 6 different sampling strategies illustrated in Figure 3 : 4 We also experiment with adding generated captions as features. We generate them using an OFA model trained for automatic caption generation. However, the captions are very generic and descriptive; for example the entities names are not captured by the model. This features does not improve the systems, hence we do not further develop it in the results section.",
"cite_spans": [
{
"start": 287,
"end": 288,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 276,
"end": 284,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "Micro does not subsample. This optimize the Micro-F 1 , which puts more weight on samples labeled other due to their sheer number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "Macro subsamples memes such that the label distribution is uniform. This implies dropping a large amount of other samples in order to lower their frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "In-between is a compromise between micro and macro, balancing between matching the evaluation loss and seeing a more diverse set of samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "Interpolate drifts from micro to macro during training. For the first epoch, the memes are sampled according to the empirical distribution (micro); while the last epoch is sampled to have a uniform label distribution (macro).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "Cycle alternates between micro and macro (2epoch short cycle) or between micro, macro and two different in-between (4-epoch long cycle).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
{
"text": "For the last two strategies, the sampling rates are updated at the end of each epoch during training. In general, these dynamic sampling strategies performed better than sampling strategies with a fixed rate for the whole training duration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Class Imbalance",
"sec_num": "4.2"
},
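A minimal sketch of how such dynamic per-class sampling probabilities could be computed, interpolating between the empirical distribution (micro) and a uniform label distribution (macro) as training progresses; the function names, the schedule and the approximate class counts (derived from the reported proportions of the 17 514 training pairs) are our assumptions, not the authors' implementation.

```python
import random

def keep_probabilities(class_counts, mix):
    # mix = 0.0 -> micro (keep everything), mix = 1.0 -> macro (uniform labels).
    # Down-sample each class so its expected count moves towards that of the rarest class.
    min_count = min(class_counts.values())
    probs = {}
    for label, count in class_counts.items():
        macro_keep = min_count / count          # keep rate yielding a uniform distribution
        probs[label] = (1.0 - mix) * 1.0 + mix * macro_keep
    return probs

def subsample(dataset, class_counts, mix, rng=random):
    # dataset: list of (features, label) pairs; drop each pair with probability 1 - keep.
    keep = keep_probabilities(class_counts, mix)
    return [ex for ex in dataset if rng.random() < keep[ex[1]]]

# Approximate counts derived from the reported proportions (illustrative only).
counts = {"other": 13660, "villain": 2470, "victim": 910, "hero": 473}

# Interpolate strategy: the first epoch samples with the empirical distribution,
# the last epoch with a uniform label distribution.
n_epochs = 10
for epoch in range(n_epochs):
    mix = epoch / (n_epochs - 1)
    print(epoch, round(keep_probabilities(counts, mix)["other"], 3))
```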
{
"text": "In order to further improve our results, we build several ensemble of our models. We filter-out models with a low validation macro-F 1 and experiment with several ensembling techniques. Due to the small size of the dataset, we did not create an additional split to evaluate our ensembling approach. In this context, overfitting the validation set is a risk. Two of the ensembling methods we evaluate are therefore non-parametric. These non-parametric strategies take the average or the median probability assigned to each class by all models. Preliminary results indicate that training a linear model to weight the output of our various models is tedious and does not improve over nonparametric strategies. We therefore turn towards gradient boosted trees (Friedman, 2001) trained by XGBoost (Chen and Guestrin, 2016) . XGBoost builds an ensemble of decision trees, whose internal nodes correspond to conditions on our models' output, and whose leaves correspond to a predicted semantic role. Boosted trees have the potential to outperform non-parametric methods by better capturing the scale of various models' output, however it has the downside of being very prone to overfitting.",
"cite_spans": [
{
"start": 756,
"end": 772,
"text": "(Friedman, 2001)",
"ref_id": "BIBREF12"
},
{
"start": 792,
"end": 817,
"text": "(Chen and Guestrin, 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembling",
"sec_num": "4.3"
},
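A minimal sketch of the two non-parametric ensembling strategies (average and median of the per-class probabilities predicted by each model); shapes and names are illustrative.

```python
import numpy as np

def ensemble_probs(all_probs: np.ndarray, method: str = "mean") -> np.ndarray:
    # all_probs: (n_models, n_samples, n_classes) class probabilities.
    if method == "mean":
        combined = all_probs.mean(axis=0)
    elif method == "median":
        combined = np.median(all_probs, axis=0)
    else:
        raise ValueError(method)
    return combined.argmax(axis=-1)          # predicted role index per sample

# Toy example: 5 models, 3 samples, 4 roles (hero, villain, victim, other).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=(5, 3))
print(ensemble_probs(probs, "mean"))
print(ensemble_probs(probs, "median"))
```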
{
"text": "The train set consists of 17 514 (meme, entity) pairs, the validation set 2 069 pairs and the test set 2 433 pairs. We did all the training on the datasets from the two domains, COVID-19 and US politics jointly. The test set contains examples from both domains. The evaluation is done with Macro-F 1 score; the OCR and the list of entities are provided along with the image of the meme. We run all experiments 5 times to check for the robustness of results and perform statistical testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental process",
"sec_num": "5.1"
},
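For clarity, a minimal sketch of the evaluation metric using scikit-learn's macro-averaged F1, which weighs the four roles equally regardless of their frequency; the integer label encoding is an assumption.

```python
from sklearn.metrics import f1_score

# 0 = hero, 1 = villain, 2 = victim, 3 = other (assumed encoding); toy predictions.
gold = [3, 3, 1, 2, 0, 3, 1, 3]
pred = [3, 1, 1, 2, 3, 3, 1, 3]

print(f1_score(gold, pred, average="macro"))   # unweighted mean of per-class F1
print(f1_score(gold, pred, average="micro"))   # dominated by the frequent class
```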
{
"text": "For CLIP, we use the biggest L/14 CLIP-ViT model built on the Vision Transformers (Dosovitskiy et al., 2021) . Both preliminary self-supervised fine-tuning and fine-tuning while doing the classification failed. This is probably due to the size and the format of the shared task dataset, much smaller and quite different from the training data of the pretrained model; any fine-tuning leads the model to forget the knowledge it learned during pre-training. Consequently, we freeze all layers and tune only the classifier, with the architectures described in Section 4.",
"cite_spans": [
{
"start": 82,
"end": 108,
"text": "(Dosovitskiy et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental process",
"sec_num": "5.1"
},
{
"text": "For VisualBERT, we fine-tune the visualbert-vcr-coco-pre model trained on caption generation and visual commonsense reasoning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental process",
"sec_num": "5.1"
},
{
"text": "For OFA enc pool with an MLP classifier, we obtained better results by fine-tuning the whole model from the vqa_large_best checkpoint 5 using a small 0.1 label smoothing and feeding the OCR and entity both to the encoder -along with the image -and to the decoder. Our OFA seq2seq model follows the same setup using the ofa_base checkpoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental process",
"sec_num": "5.1"
},
{
"text": "In the dataset, several entities are associated with more that one label. As this situation is infrequent, we consider the small amount of samples with multiple labels does not warrant a full-fledged multilabel classification setup. Thus, our models output a single categorical distribution. When multiple labels ought to be predicted for an entity (the entity appears twice in the list of entities associated with the meme), we predict them in order of likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental process",
"sec_num": "5.1"
},
{
"text": "Classifier results. Table 1 compares our main models on the CONSTRAINT'22 test set. We measure the statistical significance of our results using a one-sided Welch's unequal variances t-test (Welch, 1947) under the null hypothesis that the macro-F 1 are equals. Some hyperparameters are optimized on a per-model basis. In particular, using the list of entities as additional feature improves the performance for VisualBERT and CLIP-attention but not for our best CLIP-MLP model.",
"cite_spans": [
{
"start": 190,
"end": 203,
"text": "(Welch, 1947)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantitative results",
"sec_num": "5.2"
},
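A minimal sketch of this significance test with SciPy, comparing the Macro-F1 scores of two systems across the 5 runs; the scores below are made-up placeholders, not results from the paper.

```python
from scipy import stats

# Hypothetical Macro-F1 scores over 5 runs for two systems (placeholder values).
clip_mlp = [0.552, 0.548, 0.559, 0.545, 0.551]
ofa_mlp = [0.531, 0.538, 0.527, 0.533, 0.529]

# One-sided Welch's t-test (unequal variances): is the first system better than the second?
t_stat, p_value = stats.ttest_ind(clip_mlp, ofa_mlp,
                                  equal_var=False, alternative="greater")
print(t_stat, p_value)
```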
{
"text": "A CLIP enc pool together with an MLP classifier reached the best performances among our nonensembling model pool, significantly (p < 0.0004) improving over the OFA MLP combination. Using the unpooled features of the transformers (enc full ) with an attention classifier underperform compared to the enc pool +MLP approach. However this difference is not significant in the case of Visual-BERT (p < 0.3). In particular, attention-based approaches have more variance than their MLP counterpart. The OFA seq2seq model reaches performances within the error margin of the OFA MLP model (p < 0.14), which is not surprising since the two models are relatively close. The gap between VisualBERT and OFA is somewhat significant with p-values between 0.001 and 0.07 depending on the pairwise comparison. As expected, ensembling leads to the best result, regardless of the ensembling strategy; human annotators far exceed current model performances. We further develop human annotation in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative results",
"sec_num": "5.2"
},
{
"text": "Sampling results. Table 2 compares the different sampling strategies represented in Figure 3 for training a CLIP encoder with MLP model. As expected, using the empirical class distribution (micro strategy) leads to the worse score. While the macro strategy is in theory what we should maximise to improve the Macro-F 1 , it is second worst among all strategies. The dynamic strategies, which use evolving sampling frequencies during training clearly outperform static strategies. In particular, for training CLIP, the short cycle strategy outperforms the other ones, but the difference with long cycle and interpolate is not statistically significant (p-values > 0.05). We observe similar tendencies with systems based on OFA and VisualBERT, with a slight advantage to the interpolate strategy over the cycling ones for the former. Despite the different subsampling strategies, the per-class performances vary widely, see for example the results for the CLIP MLP model with a short cycling subsampling strategy: We observe similar results with all hyperparameter combination. These performances somewhat follow the empirical distribution of the classes, with the rarest class hero having the worst performance, and victim being not much better. This makes us consider sub-sampling other even below 25%. However, this observation-inspired \"super-macro\" strategy did not prove successful, reaching an average Macro-F 1 or 40.0, higher than the micro strategy but lower than the macro one.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 84,
"end": 92,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Quantitative results",
"sec_num": "5.2"
},
{
"text": "We extract the embeddings of all entities in the train set as their are embedded by the CLIP model, right before being fed into the MLP or being used as query for the attention mechanism. Keeping only the ones occurring more than 30 times, we perform a PCA on their embeddings and represent the first two components in Figure 4 . Each point represents an entity, its colour depends on the distribution of labels that are attributed to the entity, normalised by the global frequency of each label in the full dataset. We keep only the two most frequent labels associated with the entity for colouring. We can see that inanimate objects tend to be labeled as other.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "5.3"
},
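A minimal sketch of this analysis with scikit-learn: project the frequent entities' CLIP embeddings onto their first two principal components; the data layout is an assumption and random vectors stand in for the real embeddings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed layout: dict mapping entity name -> CLIP embedding, already restricted
# to entities occurring more than 30 times in the train set.
rng = np.random.default_rng(0)
entity_embeddings = {name: rng.normal(size=768) for name in
                     ["covid19", "donald trump", "joe biden", "mask", "america"]}

names = list(entity_embeddings)
X = np.stack([entity_embeddings[n] for n in names])

pca = PCA(n_components=2)
coords = pca.fit_transform(X)                    # (n_entities, 2) points to plot
print(pca.explained_variance_ratio_)             # share of variance per component
for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```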
{
"text": "On the other hand, large political parties are nearly always portrayed as villain with America as a victim. The somewhat unexpected heroic status of the libertarian party can be explained by the pres- Figure 4 : PCA of entity embeddings from CLIP. The explained variance is 33%+18%. The entities appearing more than 30 times, with labels attached to the 16 most frequent ones. The color of the embeddings reflect the role attached to the entity in the train set ( hero, villain, victim, other) . When the entity is assigned different roles, the color are mixed together; e.g. covid19 appears twice as often as other as it does as villain.",
"cite_spans": [
{
"start": 462,
"end": 493,
"text": "( hero, villain, victim, other)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "5.3"
},
{
"text": "ence of advertisements in the form of memes in the dataset. We can see that CLIP was able to separate the entities according to their probable class even before processing the meme. Still, the model can't clearly distinguish between most heroes and villains without seeing the meme, which is to be expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "5.3"
},
{
"text": "The multimodal aspect is crucial in this task. When looking at entity names, only 15% have an exact surface form match in the caption's OCR; moreover, the OCR is often incomplete or noisy (see example in Figure 1 with the \"Exit\" sign popping in the middle of the caption). Thus, using only the text is far from sufficient. On the other hand, recognising the entities in the image of the meme is not an easy task. As stated in the introduction, the image and the text are often not directly related. Moreover, the image often contains elements not seen in common image datasets; for example, meme creators often perform montages like swapping faces and objects. Overall, a lot of commonsense and cultural knowledge is needed for the model to understand what the meme is about.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The absence of contextual information also makes the task difficult for humans. To evaluate the difficulty of the task, we performed human annotation of a sample of 100 (image, entity) pairs with five annotators. Details of annotation process can be found in Appendix A. The average pairwise Cohen's \u03ba (Cohen, 1960) , used to measure the inter-annotator agreement, is 0.47. It indicates a \"moderate\" agreement according to Cohen (1960) . However, it also shows that less than one third of the annotations are reliable (McHugh, 2012) . Moreover, the macro-F 1 scores are relatively low: the average is 0.65 and the maximum 0.69. Having metadata such as source website and date of publication of the meme would help human and algorithmic annotators alike.",
"cite_spans": [
{
"start": 302,
"end": 315,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF7"
},
{
"start": 423,
"end": 435,
"text": "Cohen (1960)",
"ref_id": "BIBREF7"
},
{
"start": 518,
"end": 532,
"text": "(McHugh, 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
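For reference, a minimal sketch of the average pairwise Cohen's kappa computation with scikit-learn; the annotation arrays below are placeholders, the real annotations are available in the authors' repository.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Placeholder annotations: 5 annotators x 10 samples, labels in {hero, villain, victim, other}.
annotations = [
    ["other", "villain", "hero", "victim", "other", "other", "villain", "victim", "hero", "other"],
    ["other", "villain", "hero", "victim", "villain", "other", "villain", "other", "hero", "other"],
    ["victim", "villain", "hero", "victim", "other", "other", "hero", "victim", "hero", "other"],
    ["other", "villain", "other", "victim", "other", "villain", "villain", "victim", "hero", "other"],
    ["other", "other", "hero", "victim", "other", "other", "villain", "victim", "villain", "other"],
]

kappas = [cohen_kappa_score(a, b) for a, b in combinations(annotations, 2)]
print(sum(kappas) / len(kappas))   # average pairwise agreement, chance-corrected
```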
{
"text": "Finally, from a real-world point of view, this task is not entirely complete: the OCR and the list of entities are already provided in the dataset, and we only have to perform the classification. In a reallife setting, we would create a multi-task system jointly extracting the caption, detecting entities and classifying them; the three tasks complementing each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this work, we propose several systems to solve the task of classifying entity roles in memes. We focus on comparing classification models -MLP, Attention and Seq2seq systems -on top of pre-trained multimodal encoder: CLIP, VisualBERT and OFA. Our best standalone system uses the CLIP encoder with MLP classifier, but our best score is obtained using ensembling of a large number of models. We also compare several sampling strategies to deal with the class imbalance issue, proposing dynamic sampling methods that outperform the standard uniform (\"macro\") sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "As a preliminary future work, more or less straightforward processing can be performed on the dataset, at the entity-level (using an entity linker to resolve surface forms to entity identifiers, e.g. merging entities \"US\" and \"United States\" together); at the OCR-level (performing lexical normalization (Samuel and Straka, 2021) to deal with OCR errors and meme-specific syntax); and at the image-level (removing the text from the image, for a less noisy image embedding).",
"cite_spans": [
{
"start": 304,
"end": 329,
"text": "(Samuel and Straka, 2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "To improve the model, entity representation is key. We wish to train global entity embedding, shared across the whole dataset, and contextualised entity embeddings, aligning the entity's vector representation in the image and in the OCR of the meme (when there is an explicit mention of it).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "ples, thus considering all entities of a meme independently during training and inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The sequence length is limited to 76 byte-pairs. In the CONSTRAINT task corpus, 76 byte-pairs corresponds to the 95th quantile of OCR text length in the test set, and slightly more in the train set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the OFA model, encpool refers to the output of the penultimate layer of OFA's decoder, while we use encOFA to reference only the OFA's encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This refers to an OFA model pre-trained on 8 tasks then fine-tuned on VQA from the official OFA repository.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We want to express our strong gratitude to Matt Post for the time he took providing manual annotation for our validation process. We also warmly thank the reviewers for their very valuable feedback. This work received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101021607 and the last author acknowledges the support of the French Research Agency via the ANR ParSiTi project (ANR16-CE33-0021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
},
{
"text": "To assess the quality of the dataset and put our results into perspective, we hand labeled part of the datasets. The team of five annotators is composed of researchers in Natural Language Processing. One of them is American native and the other 4 are European. Two of them are in the 40-50s age range and three of them are in the 20-30s. The annotators were all given the same 100 samples to label. To have a better estimate of the macro-F 1 , we sampled 25 memes for each gold role. The annotator were given the class definitions and were informed that the labels had a uniform distribution. The annotation script as well as the answers of the annotators are available with the remainder of our code at https://github. com/smontariol/mmsrl_constraint.We compute the macro-F 1 score of each annotator, resulting in an average score of 0.65. The minimum score was 0.57 and the maximum 0.69. These scores show the difficulty of the task for a human. For comparison, the best score during the challenge was 0.58, still considerably lower than the human best score.To measure the inter-annotator agreement, we compute the average pair-wise Cohen's \u03ba (Cohen, 1960) . It is similar to measuring the percentage of agreement, but taking into account the possibility of the agreement between two annotators to occur by chance for each annotated sample. The average Cohen's \u03ba is 0.47, indicating a \"moderate\" agreement according to Cohen (1960) . However, it also indicates that less than one third of the annotations are reliable (McHugh, 2012) .",
"cite_spans": [
{
"start": 1146,
"end": 1159,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF7"
},
{
"start": 1422,
"end": 1434,
"text": "Cohen (1960)",
"ref_id": "BIBREF7"
},
{
"start": 1521,
"end": 1535,
"text": "(McHugh, 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Human Annotations",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Don't just assume; look and answer: Overcoming priors for visual question answering",
"authors": [
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual ques- tion answering. In Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aiding intra-text representations with visual context for multimodal named entity recognition",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Arshad",
"suffix": ""
},
{
"first": "Ignazio",
"middle": [],
"last": "Gallo",
"suffix": ""
},
{
"first": "Shah",
"middle": [],
"last": "Nawaz",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Calefati",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "337--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Arshad, Ignazio Gallo, Shah Nawaz, and Alessan- dro Calefati. 2019. Aiding intra-text representations with visual context for multimodal named entity recognition. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 337-342. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multimodal machine learning: A survey and taxonomy",
"authors": [
{
"first": "Tadas",
"middle": [],
"last": "Baltru\u0161aitis",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "41",
"issue": "",
"pages": "423--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadas Baltru\u0161aitis, Chaitanya Ahuja, and Louis-Philippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE transactions on pattern analysis and machine intelligence, 41(2):423-443.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Blue at memotion 2.0 2022: You have my image, my text and my transformer",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Bucur",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Cosma",
"suffix": ""
},
{
"first": "Ioan-Bogdan",
"middle": [],
"last": "Iordache",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2202.07543"
]
},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Bucur, Adrian Cosma, and Ioan-Bogdan Iordache. 2022. Blue at memotion 2.0 2022: You have my image, my text and my transformer. arXiv preprint arXiv:2202.07543.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Introduction to the conll-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ninth conference on computational natural language learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "152--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In Proceedings of the ninth conference on compu- tational natural language learning (CoNLL-2005), pages 152-164.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Can images help recognize entities? a study of the role of images for multimodal NER",
"authors": [
{
"first": "Shuguang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
"volume": "",
"issue": "",
"pages": "87--96",
"other_ids": {
"DOI": [
"10.18653/v1/2021.wnut-1.11"
]
},
"num": null,
"urls": [],
"raw_text": "Shuguang Chen, Gustavo Aguilar, Leonardo Neves, and Thamar Solorio. 2021. Can images help recognize entities? a study of the role of images for multimodal NER. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 87- 96, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Xgboost: A scalable tree boosting system",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pages 785- 794.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {
"DOI": [
"10.1177/001316446002000104"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Mea- surement, 20(1):37-46.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Language of Internet Memes",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Davison",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "9",
"issue": "",
"pages": "120--134",
"other_ids": {
"DOI": [
"10.18574/9780814763025-011"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Davison. 2012. 9. The Language of Internet Memes, pages 120-134. New York University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186, Minneapolis, Minnesota. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2021 task 6: Detection of persuasion techniques in texts and images",
"authors": [
{
"first": "Dimitar",
"middle": [],
"last": "Dimitrov",
"suffix": ""
},
{
"first": "Shaden",
"middle": [],
"last": "Bishr Bin Ali",
"suffix": ""
},
{
"first": "Firoj",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Silvestri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Giovanni Da San",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martino",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
"volume": "",
"issue": "",
"pages": "70--98",
"other_ids": {
"DOI": [
"10.18653/v1/2021.semeval-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "Dimitar Dimitrov, Bishr Bin Ali, Shaden Shaar, Firoj Alam, Fabrizio Silvestri, Hamed Firooz, Preslav Nakov, and Giovanni Da San Martino. 2021. SemEval-2021 task 6: Detection of persuasion tech- niques in texts and images. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 70-98, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Dosovitskiy",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Beyer",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Kolesnikov",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Minderer",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Greedy function approximation: a gradient boosting machine",
"authors": [
{
"first": "Jerome",
"middle": [
"H"
],
"last": "Friedman",
"suffix": ""
}
],
"year": 2001,
"venue": "Annals of statistics",
"volume": "",
"issue": "",
"pages": "1189--1232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome H Friedman. 2001. Greedy function approx- imation: a gradient boosting machine. Annals of statistics, pages 1189-1232.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational linguistics, 28(3):245-288.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.90"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recogni- tion. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes challenge: Detecting hate speech in multimodal memes",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Ringshia",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Testuggine",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "2611--2624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes chal- lenge: Detecting hate speech in multimodal memes. In Advances in Neural Information Processing Sys- tems, volume 33, pages 2611-2624. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Visualbert: A simple and performant baseline for vision and language",
"authors": [
{
"first": "Liunian Harold",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.03557"
]
},
"num": null,
"urls": [],
"raw_text": "Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A sim- ple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Vinodkumar Prabhakaran, Bertie Vidgen, and Zeerak Waseem. 2021. Findings of the WOAH 5 shared task on fine grained hateful memes detection",
"authors": [
{
"first": "Lambert",
"middle": [],
"last": "Mathias",
"suffix": ""
},
{
"first": "Shaoliang",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Aida",
"middle": [
"Mostafazadeh"
],
"last": "Davani",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "201--206",
"other_ids": {
"DOI": [
"10.18653/v1/2021.woah-1.21"
]
},
"num": null,
"urls": [],
"raw_text": "Lambert Mathias, Shaoliang Nie, Aida Mostafazadeh Davani, Douwe Kiela, Vinodku- mar Prabhakaran, Bertie Vidgen, and Zeerak Waseem. 2021. Findings of the WOAH 5 shared task on fine grained hateful memes detection. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 201-206, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Interrater reliability: the kappa statistic",
"authors": [
{
"first": "Mary",
"middle": [
"L"
],
"last": "McHugh",
"suffix": ""
}
],
"year": 2012,
"venue": "Biochemia medica",
"volume": "22",
"issue": "3",
"pages": "276--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary L McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia medica, 22(3):276-282.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dankmemes@ evalita 2020: The memeing of life: Memes, multimodality and politics",
"authors": [
{
"first": "Martina",
"middle": [],
"last": "Miliani",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Giorgi",
"suffix": ""
},
{
"first": "Ilir",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Anselmi",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [
"E"
],
"last": "Lebani",
"suffix": ""
}
],
"year": 2020,
"venue": "EVALITA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martina Miliani, Giulia Giorgi, Ilir Rama, Guido Anselmi, and Gianluca E Lebani. 2020. Dankmemes@ evalita 2020: The memeing of life: Memes, multimodality and politics. In EVALITA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "MOMENTA: A multimodal framework for detecting harmful memes and their targets",
"authors": [
{
"first": "Shraman",
"middle": [],
"last": "Pramanick",
"suffix": ""
},
{
"first": "Shivam",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Dimitrov",
"suffix": ""
},
{
"first": "Md",
"middle": [
"Shad"
],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "4439--4455",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-emnlp.379"
]
},
"num": null,
"urls": [],
"raw_text": "Shraman Pramanick, Shivam Sharma, Dimitar Dim- itrov, Md. Shad Akhtar, Preslav Nakov, and Tan- moy Chakraborty. 2021. MOMENTA: A multimodal framework for detecting harmful memes and their targets. In Findings of the Association for Computa- tional Linguistics: EMNLP 2021, pages 4439-4455, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Grounded situation recognition",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Pratt",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Weihs",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
}
],
"year": 2020,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "314--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Pratt, Mark Yatskar, Luca Weihs, Ali Farhadi, and Aniruddha Kembhavi. 2020. Grounded situation recognition. In European Conference on Computer Vision, pages 314-332. Springer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning transferable visual models from natural language supervision",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jong",
"middle": [
"Wook"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hallacy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Sandhini",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
},
{
"first": "Pamela",
"middle": [],
"last": "Mishkin",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "8748--8763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas- try, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Faster r-cnn: Towards real-time object detection with region proposal networks",
"authors": [
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "\u00daFAL at Multi-LexNorm 2021: Improving multilingual lexical normalization by fine-tuning ByT5",
"authors": [
{
"first": "David",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
"volume": "",
"issue": "",
"pages": "483--492",
"other_ids": {
"DOI": [
"10.18653/v1/2021.wnut-1.54"
]
},
"num": null,
"urls": [],
"raw_text": "David Samuel and Milan Straka. 2021. \u00daFAL at Multi- LexNorm 2021: Improving multilingual lexical nor- malization by fine-tuning ByT5. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 483-492, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 task 8: Memotion analysis-the visuolingual metaphor! In Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"authors": [
{
"first": "Chhavi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Deepesh",
"middle": [],
"last": "Bhageria",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "PYKL",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Viswanath",
"middle": [],
"last": "Pulabaigari",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "759--773",
"other_ids": {
"DOI": [
"10.18653/v1/2020.semeval-1.99"
]
},
"num": null,
"urls": [],
"raw_text": "Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas PYKL, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 task 8: Memotion analysis-the visuo- lingual metaphor! In Proceedings of the Four- teenth Workshop on Semantic Evaluation, pages 759- 773, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Findings of the constraint 2022 shared task on detecting the hero, the villain, and the victim in memes",
"authors": [
{
"first": "Shivam",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Tharun",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Atharva",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Himanshi",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Md",
"middle": [
"Shad"
],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations -CONSTRAINT 2022",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivam Sharma, Tharun Suresh, Atharva Kulkarni, Hi- manshi Mathur, Preslav Nakov, Md. Shad Akhtar, and Tanmoy Chakraborty. 2022. Findings of the con- straint 2022 shared task on detecting the hero, the villain, and the victim in memes. In Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations - CONSTRAINT 2022, Collocated with ACL 2022.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Grounding semantic roles in images",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2616--2626",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Manfred Pinkal. 2018. Ground- ing semantic roles in images. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2616-2626, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceed- ings.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Efficientnet: Rethinking model scaling for convolutional neural networks",
"authors": [
{
"first": "Mingxing",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "6105--6114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingxing Tan and Quoc Le. 2019. Efficientnet: Re- thinking model scaling for convolutional neural net- works. In International conference on machine learn- ing, pages 6105-6114. PMLR.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Unifying architectures, tasks, and modalities through a simple sequence-tosequence learning framework",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "An",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Men",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shuai",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Zhikang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jingren",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. Unifying architectures, tasks, and modalities through a simple sequence-to- sequence learning framework.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The generalization of 'student's' problem when several different population variances are involved",
"authors": [
{
"first": "Bernard",
"middle": [
"Lewis"
],
"last": "Welch",
"suffix": ""
}
],
"year": 1947,
"venue": "Biometrika",
"volume": "34",
"issue": "1-2",
"pages": "28--35",
"other_ids": {
"DOI": [
"10.1093/biomet/34.1-2.28"
]
},
"num": null,
"urls": [],
"raw_text": "Bernard Lewis Welch. 1947. The generalization of 'student's' problem when several different population variances are involved. Biometrika, 34(1-2):28-35.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Detecting 11k classes: Large scale object detection without fine-grained bounding boxes",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "9805--9813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Yang, Hao Wu, and Hao Chen. 2019. Detecting 11k classes: Large scale object detection without fine-grained bounding boxes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9805-9813.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Our three classifiers. Note that each classifier uses a different combination of encoders. MLP is used with enc pool , Attention requires enc full , while Seq2seq requires an enc OFA -dec OFA pair."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Target frequencies of the various strategies during training. The micro strategy corresponds to using the empirical class distribution in the dataset, that is hero 2.7%, villain 13.9%, victim 5.2% and other 78.2%."
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>role</td><td>fc</td><td>role</td></tr><tr><td/><td colspan=\"2\">weighted sum</td></tr><tr><td>image OCR entity enc pool enc pool MLP</td><td colspan=\"2\">softmax enc pool inner product Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value fc Value Value image OCR enc full fc fc entity</td><td>image OCR entity</td></tr><tr><td>MLP</td><td>Attention</td><td/><td>Seq2seq</td></tr></table>",
"text": "Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value Value",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Sampling results with the CLIP model and MLP classifier, with 500 batch per epoch.",
"num": null
}
}
}
}