{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:32:13.688136Z"
},
"title": "A Multi-Modal Method for Satire Detection using Textual and Visual Cues",
"authors": [
{
"first": "Lily",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jericho Senior High School",
"location": {
"region": "New York",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Or",
"middle": [],
"last": "Levi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AdVerifai",
"location": {
"settlement": "Amsterdam",
"country": "Netherlands"
}
},
"email": ""
},
{
"first": "Pedram",
"middle": [],
"last": "Hosseini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {
"settlement": "Washington",
"region": "D.C",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "David",
"middle": [
"A"
],
"last": "Broniatowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {
"settlement": "Washington",
"region": "D.C",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Satire is a form of humorous critique, but it is sometimes misinterpreted by readers as legitimate news, which can lead to harmful consequences. We observe that the images used in satirical news articles often contain absurd or ridiculous content and that image manipulation is used to create fictional scenarios. While previous work have studied text-based methods, in this work we propose a multi-modal approach based on state-of-the-art visiolinguistic model ViLBERT. To this end, we create a new dataset consisting of images and headlines of regular and satirical news for the task of satire detection. We fine-tune ViLBERT on the dataset and train a convolutional neural network that uses an image forensics technique. Evaluation on the dataset shows that our proposed multi-modal approach outperforms image-only, text-only, and simple fusion baselines.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Satire is a form of humorous critique, but it is sometimes misinterpreted by readers as legitimate news, which can lead to harmful consequences. We observe that the images used in satirical news articles often contain absurd or ridiculous content and that image manipulation is used to create fictional scenarios. While previous work have studied text-based methods, in this work we propose a multi-modal approach based on state-of-the-art visiolinguistic model ViLBERT. To this end, we create a new dataset consisting of images and headlines of regular and satirical news for the task of satire detection. We fine-tune ViLBERT on the dataset and train a convolutional neural network that uses an image forensics technique. Evaluation on the dataset shows that our proposed multi-modal approach outperforms image-only, text-only, and simple fusion baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Satire is a literary device that writers employ to mock or ridicule a person, group, or ideology by passing judgment on them for a cultural transgression or poor social behavior. Satirical news utilizes humor and irony by placing the target of the criticism into a ridiculous, fictional situation that the reader must suspend their disbelief and go along with (Maslo, 2019) . However, despite what absurd content satirical news may contain, it is often mistaken by readers as real, legitimate news, which may then lead to the unintentional spread of misinformation. In a recent survey conducted by The Conversation (Garrett et al., 2019) , up to 28% of Republican respondents and 14% of Democratic respondents reported that they believed stories fabricated by the Babylon Bee, a satirical news website, to be \"definitely true\". In these instances, the consequences of satire are indistinguishable from those of fake news.",
"cite_spans": [
{
"start": 360,
"end": 373,
"text": "(Maslo, 2019)",
"ref_id": "BIBREF12"
},
{
"start": 615,
"end": 637,
"text": "(Garrett et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To reduce the spread of misinformation, social media platforms have partnered with third-party factcheckers to flag false news articles and tag articles from known satirical websites as satire for users (Facebook, nd; Google, nd) . However, due to the high cost and relative inefficiency of employing experts to manually annotate articles, many researchers have tackled the challenge of automated satire detection. Existing models for satirical news detection have yet to explore the visual domain of satire, even though image thumbnails of news articles may convey information that reveals or disproves the satirical nature of the articles. In the field of cognitive-linguistics, Maslo (2019) observed the use of altered images showing imaginary scenarios on the satirical news show The Daily Show. This phenomenon also extends to satirical news articles, as seen in Figure 1 . For example, Figure 1 (A) depicts the Marvel Cinematic Universe character Hulk from the film Avengers: Infinity War and the United States President Donald Trump spliced together. Alone, each of the two images is serious and not satirical, but, since they come from drastically different contexts, combining the two images creates a clearly ridiculous thumbnail that complements the headline of the article.",
"cite_spans": [
{
"start": 203,
"end": 229,
"text": "(Facebook, nd; Google, nd)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 868,
"end": 876,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 892,
"end": 900,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our work, we propose a multi-modal method for detecting satirical news articles. We hypothesize that 1) the content of news thumbnail images when combined with text, and 2) detecting the presence of manipulated or added characters and objects, can aid in the identification of satirical articles. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work proposed methods for satirical news detection using textual content (Levi et al., 2019) . Some works utilize classical machine learning algorithms such as SVM with handcrafted features from factual and satirical news headlines and body text, including bag-of-words, n-grams, and lexical features (Burfoot and Baldwin, 2009; Rubin et al., 2016) . More recent works use deep learning to extract learned features for satire detection. Yang et al. (2017) proposed a hierarchical model with attention mechanism and handcrafted linguistic features to understand satire at a paragraph and article-level.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "(Levi et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 310,
"end": 337,
"text": "(Burfoot and Baldwin, 2009;",
"ref_id": "BIBREF0"
},
{
"start": 338,
"end": 357,
"text": "Rubin et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 446,
"end": 464,
"text": "Yang et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While previous work utilize visiolinguistic data for similar tasks, there is no related work that employs multi-modal data to classify articles into satirical and factual news. Nakamura et al. (2019) created a dataset containing images and text for fake news detection in posts on the social media website Reddit. While they include a category for satire/parody in their 6-way dataset, since they use only content that has been submitted by Reddit users, it is not representative of mainstream news media. Multi-modal approaches have also been tried in sarcasm detection; Castro et al. 2019 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We create a new multi-modal dataset of satirical and regular news articles. The satirical news is collected from four websites that explicitly declare themselves to be satire, and the regular news is collected from six mainstream news websites 1 . Specifically, the satirical news websites we collect articles from are The Babylon Bee, Clickhole, Waterford Whisper News, and The DailyER. The regular news websites are Reuters, The Hill, Politico, New York Post, Huffington Post, and Vice News. We collect the headlines and the thumbnail images of the latest 1000 articles for each of the publications. The dataset contains a total of 4000 satirical and 6000 regular news articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Multi-Modal Learning. We use Vision & Language BERT (ViLBERT), a multi-modal model proposed by Lu et al. (2019) that processes images and text in two separate streams. Each stream consists of transformer blocks based on BERT (Devlin et al., 2018) and co-attentive layers that facilitate interaction between the visual and textual modalities. In each co-attentive transformer layer, multi-head attention is computed the same as a standard transformer block except the visual modality attends to the textual modality and vice-versa. To learn representations for vision-and-language tasks, ViLBERT is pre-trained using the masked multi-model modeling and multi-modal alignment prediction tasks on the Conceptual Captions dataset (Sharma et al., 2018) . We choose to use ViLBERT because of its high performance on a variety of visiolinguistic tasks, including Visual Question Answering, Image Retrieval, and Visual Commonsense Reasoning. We fine-tune ViLBERT on the satire detection dataset by passing the elementwise product of the final image and text representations into a learned classification layer.",
"cite_spans": [
{
"start": 95,
"end": 111,
"text": "Lu et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 225,
"end": 246,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 726,
"end": 747,
"text": "(Sharma et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},
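{
"text": "As an illustration of the classification step above, the following is a minimal PyTorch sketch of a learned classification layer over the element-wise product of pooled image and text representations; the hidden size and the pooled vectors are placeholders rather than the actual MMF implementation of ViLBERT:\n\n# Sketch of the fine-tuning head described above. The hidden size and the\n# pooled representations are placeholders; the actual experiments use the\n# MMF implementation of ViLBERT.\nimport torch\nimport torch.nn as nn\n\nhidden_size, num_classes = 1024, 2\nclassifier = nn.Linear(hidden_size, num_classes)\n\n# h_img and h_txt stand in for ViLBERT's final pooled image and text\n# representations for a batch of eight articles.\nh_img = torch.randn(8, hidden_size)\nh_txt = torch.randn(8, hidden_size)\nlogits = classifier(h_img * h_txt)  # element-wise product, then classify",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},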
{
"text": "Image Forgery Detection. Since satirical news images are often forged from two or more images (known as image splicing), we implement an additional model that uses error level analysis (ELA). ELA is an image forensics technique that takes advantage of lossy JPEG compression for image tampering detection (Krawetz, 2007) . In ELA, each JPEG image is resaved at a known compression rate, and the absolute pixel-by-pixel differences between the original and the resaved images are compared. ELA can be used to identify image manipulations where a lower quality image was spliced into a higher quality image or vice-versa. To detect image forgeries as an indicator of satirical news, we preprocess the images using ELA with a compression rate of 90% and use them as input into a CNN.",
"cite_spans": [
{
"start": 305,
"end": 320,
"text": "(Krawetz, 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},
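{
"text": "The following is a minimal sketch of the ELA preprocessing step described above, implemented with the Pillow library; the 90% resave quality matches the compression rate we use, while the file paths and the lack of brightness scaling are simplifying assumptions:\n\n# Minimal ELA sketch using Pillow; file paths are placeholders.\nimport io\nfrom PIL import Image, ImageChops\n\ndef error_level_analysis(image_path, quality=90):\n    # Resave the image as JPEG at a known compression rate.\n    original = Image.open(image_path).convert(\"RGB\")\n    buffer = io.BytesIO()\n    original.save(buffer, format=\"JPEG\", quality=quality)\n    buffer.seek(0)\n    resaved = Image.open(buffer)\n    # Regions spliced in at a different quality tend to stand out in the\n    # pixel-by-pixel absolute difference between the two images.\n    return ImageChops.difference(original, resaved)\n\nela_image = error_level_analysis(\"thumbnail.jpg\")\nela_image.save(\"thumbnail_ela.png\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},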
{
"text": "For the CNN, we use two convolutional layers with 32 kernels and a filter width of 5, each followed by a max-pooling layer. The output features from the CNN are fed into a MLP with a hidden size of 256 and a classification layer. We pretrain the model on the CASIA 2.0 image tampering detection dataset (Dong et al., 2013) before fine-tuning on the images of the satire detection dataset.",
"cite_spans": [
{
"start": 303,
"end": 322,
"text": "(Dong et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},
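{
"text": "A minimal PyTorch sketch of a CNN with this shape is given below; the input resolution and pooling sizes are assumptions, since the text above specifies only the kernel counts, the filter width, and the MLP hidden size:\n\n# Sketch of the ELA+CNN classifier. Input resolution and pooling sizes\n# are assumptions; kernel counts, filter width, and MLP hidden size\n# follow the description above.\nimport torch\nimport torch.nn as nn\n\nclass ELACNN(nn.Module):\n    def __init__(self, num_classes=2, input_size=128):\n        super().__init__()\n        self.features = nn.Sequential(\n            nn.Conv2d(3, 32, kernel_size=5),   # 32 kernels, filter width 5\n            nn.ReLU(),\n            nn.MaxPool2d(2),\n            nn.Conv2d(32, 32, kernel_size=5),  # second conv + pooling block\n            nn.ReLU(),\n            nn.MaxPool2d(2),\n        )\n        with torch.no_grad():  # infer the flattened feature size\n            n_feats = self.features(torch.zeros(1, 3, input_size, input_size)).numel()\n        self.classifier = nn.Sequential(\n            nn.Flatten(),\n            nn.Linear(n_feats, 256),  # MLP with a hidden size of 256\n            nn.ReLU(),\n            nn.Linear(256, num_classes),\n        )\n\n    def forward(self, x):\n        return self.classifier(self.features(x))\n\nmodel = ELACNN()\nlogits = model(torch.randn(4, 3, 128, 128))  # a batch of 4 ELA images",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},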
{
"text": "Implemention. We divide the data into training and test sets with a ratio of 80%:20%. We train all our models with a batch size of 32 and Adam optimizer. We use the MMF (Singh et al., 2020) implementation of ViLBERT and fine-tune it for 12 epochs with a learning rate of 5e-6. We extract Mask RCNN (He et al., 2017) features from the images in the dataset as visual input. The ViLBERT model has 6 transformer blocks in the visual stream and 12 transformer blocks in the textual stream. Our ELA+CNN model is trained with a learning rate of 1e-5 for 7 epochs. 2",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Singh et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 298,
"end": 315,
"text": "(He et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},
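{
"text": "For concreteness, the sketch below shows the optimization setup described above (batch size 32, the Adam optimizer, and the stated learning rate and epoch count for the ELA+CNN model); the model and dataset are stand-ins, not our actual training code:\n\n# Sketch of the training loop: Adam, batch size 32, learning rate 1e-5,\n# 7 epochs (the ELA+CNN settings). The model and data are stand-ins.\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\ndataset = TensorDataset(torch.randn(100, 3, 128, 128),\n                        torch.randint(0, 2, (100,)))\nloader = DataLoader(dataset, batch_size=32, shuffle=True)\n\n# A trivial stand-in model; a real run would use the ELA+CNN instead.\nmodel = torch.nn.Sequential(torch.nn.Flatten(),\n                            torch.nn.Linear(3 * 128 * 128, 2))\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-5)\ncriterion = torch.nn.CrossEntropyLoss()\n\nfor epoch in range(7):\n    for images, labels in loader:\n        optimizer.zero_grad()\n        loss = criterion(model(images), labels)\n        loss.backward()\n        optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "3.2"
},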
{
"text": "To create fair baselines for our fine-tuned ViLBERT model, we train multi-modal models that use simple fusion. In the model denoted as Concatenation, ResNet-101 (He et al., 2016) and BERT features are concatenated and a MLP is trained on top. In the model denoted as Average fusion, the output of ResNet-101 and BERT are averaged. We choose these two models as our baselines to evaluate the effects of ViLBERT's early fusion of visual and textual representations and multi-modal pre-training on Conceptual Captions (Sharma et al., 2018) . We also fine-tune uni-modal ResNet-101 and BERT BASE models to compare the performance of the multi-modal models to. 4 Results and Discussion",
"cite_spans": [
{
"start": 161,
"end": 178,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 515,
"end": 536,
"text": "(Sharma et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
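{
"text": "A minimal sketch of the two simple fusion baselines is shown below, assuming pre-extracted ResNet-101 and BERT feature vectors; the feature dimensions (2048 and 768) are the standard output sizes of those models, and the MLP hidden size and the projection used to align dimensions for averaging are assumptions:\n\n# Sketch of the simple fusion baselines over pre-extracted ResNet-101\n# (2048-d) and BERT (768-d) features; the hidden size and the projection\n# used for averaging are assumptions.\nimport torch\nimport torch.nn as nn\n\nimg_feats = torch.randn(8, 2048)  # stand-ins for ResNet-101 features\ntxt_feats = torch.randn(8, 768)   # stand-ins for BERT features\n\n# Concatenation: concatenate the features and train an MLP on top.\nconcat_mlp = nn.Sequential(\n    nn.Linear(2048 + 768, 256), nn.ReLU(), nn.Linear(256, 2))\nconcat_logits = concat_mlp(torch.cat([img_feats, txt_feats], dim=-1))\n\n# Average fusion: project the image features to the text dimension,\n# average the two modalities, and classify.\nimg_proj = nn.Linear(2048, 768)\navg_classifier = nn.Linear(768, 2)\navg_logits = avg_classifier((img_proj(img_feats) + txt_feats) / 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},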
{
"text": "We measure the performance of the proposed and baseline models using Accuracy, F1 score, and AUC-ROC metrics. The results are shown in Table 1 . The models using only the visual modality (ResNet-101 and CNN+ELA) do not perform as well as the model that uses only the text modality (BERT BASE ). The simple fusion models (Average fusion, Concatenation) perform marginally better than BERT BASE .",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1"
},
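{
"text": "The three metrics above can be computed with scikit-learn as in the following sketch; the label and score arrays are placeholders:\n\n# Sketch of the evaluation metrics (Accuracy, F1, AUC-ROC) with\n# scikit-learn; y_true, y_pred, and y_score are placeholders.\nfrom sklearn.metrics import accuracy_score, f1_score, roc_auc_score\n\ny_true = [0, 1, 1, 0, 1]             # gold labels (1 = satire)\ny_pred = [0, 1, 0, 0, 1]             # predicted labels\ny_score = [0.2, 0.9, 0.4, 0.1, 0.8]  # predicted probability of satire\n\nprint(\"Accuracy:\", accuracy_score(y_true, y_pred))\nprint(\"F1:\", f1_score(y_true, y_pred))\nprint(\"AUC-ROC:\", roc_auc_score(y_true, y_score))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1"
},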
{
"text": "ViLBERT outperforms the simple fusion multi-modal models because it uses early, deep fusion and has undergone multi-modal pre-training rather than only separate uni-modal visual and text pre-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1"
},
{
"text": "ViLBERT also performs almost 3.5 F1 points above the uni-modal BERT BASE model. Surprisingly, the performance of the ELA+CNN model was very poor, achieving an accuracy worse than random chance. While this is not in line with our initial hypothesis, there might be several reasons for these results: Firstly, ELA is not able to detect image manipulations if the images have been resaved multiple times since after they have been compressed at a high rate there is little visible change in error levels (Krawetz, 2007) . This makes it especially difficult to identify manipulation in images taken from the Internet, as they have usually undergone multiple resaves and are not camera originals. Additionally, although ELA can be used as a method to detect and localize the region of an image that has been potentially altered, it does not allow for the identification of what kind of image manipulation technique was used. This is important because even reputable news publications, such as Reuters and The Associated Press use Photoshop and other software to perform minor adjustments to photos, for example, to alter the coloring or lighting, or to blur the background (Schlesinger, 2007; The Associated Press, 2014) . Figure 2 shows examples from the satire detection dataset that illustrate the inconsistency of error level analysis in highlighting image manipulations. Both Figure 2 ",
"cite_spans": [
{
"start": 501,
"end": 516,
"text": "(Krawetz, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 1168,
"end": 1187,
"text": "(Schlesinger, 2007;",
"ref_id": "BIBREF15"
},
{
"start": 1188,
"end": 1215,
"text": "The Associated Press, 2014)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1218,
"end": 1226,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 1376,
"end": 1384,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1"
},
{
"text": "After classification, we randomly select 20% of the test set samples misclassified by ViLBERT and observed them for patterns across multiple samples. Figure 3 shows examples of misclassified samples. We observed three main reasons that may have been the cause of the incorrectly classified articles: The model misinterpreted the headline (Figure 3(A) ), the model lacks knowledge of current events (Figure 3(B) ), and the article covered a bizarre but true story (Figure 3(C) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 3",
"ref_id": "FIGREF4"
},
{
"start": 338,
"end": 350,
"text": "(Figure 3(A)",
"ref_id": "FIGREF4"
},
{
"start": 398,
"end": 411,
"text": "(Figure 3(B)",
"ref_id": "FIGREF4"
},
{
"start": 464,
"end": 476,
"text": "(Figure 3(C)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Model Misclassification Study",
"sec_num": "4.2"
},
{
"text": "Figure 3(A) shows an article from Politico that has been classified as satire. The image does not portray anything strange or out of the ordinary. However, the headline uses the word \"bursts\", which the model might be incorrectly interpreting in the literal sense even though it is being used metaphorically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Misclassification Study",
"sec_num": "4.2"
},
{
"text": "If \"bursts\" was intended to be literal, it would drastically change the meaning of the text, which may be why the model failed to classify the article as factual. Figure 3(B) shows a satirical article from Babylon Bee that has been misclassified as factual. Its image has also not been heavily altered or faked; in fact, it is the same image that was used as the original thumbnail of the Joe Rogan podcast episode that is the subject of the article. However, the model fails to recognize the ridiculousness of the text, since it does not have the political knowledge to spot the contrast between the \"alt-right\" and the American politician Bernie Sanders. In Figure 3(C) , an article is from the factual publication The New York Post is misclassified as satirical. Although both the headline and the image seem very ridiculous, the story and the image were, in fact, not fabricated. Thus, identifying text/images as absurd might not always aid in satire detection, since ViLBERT fails in classifying this article as factual because it is unable to tell that the image has not been forged. ",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 174,
"text": "Figure 3(B)",
"ref_id": "FIGREF4"
},
{
"start": 660,
"end": 671,
"text": "Figure 3(C)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Model Misclassification Study",
"sec_num": "4.2"
},
{
"text": "In this paper, we create a multi-modal satire detection dataset and propose two models for the task based on the characteristics of satirical images and their relationships with the headlines. While our model based on image tampering detection performed significantly worse than the baselines, empirical evaluation showed the efficacy of our proposed multi-modal approach compared to simple fusion and uni-modal models. In future work on satire detection, we will incorporate image forensics methods to identify image splicing in satirical images, body text of articles instead of just headlines, as well as knowledge about politics and other current issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Investigations",
"sec_num": "5"
},
{
"text": "The regular news websites we use are listed by Media Bias/Fact Check https://mediabiasfactcheck.com/, a volunteer-run and nonpartisan organization dedicated to fact-checking and determining the bias of news publications",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Scripts for our experiments are available at: https://github.com/lilyli2004/satire",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic satire detection: Are you having a laugh?",
"authors": [
{
"first": "Clint",
"middle": [],
"last": "Burfoot",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "161--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clint Burfoot and Timothy Baldwin. 2009. Automatic satire detection: Are you having a laugh? In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 161-164, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multi-modal sarcasm detection in twitter with hierarchical fusion model",
"authors": [
{
"first": "Yitao",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Huiyu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2506--2515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multi-modal sarcasm detection in twitter with hierarchical fusion model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506-2515, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Towards multimodal sarcasm detection (an Obviously perfect paper)",
"authors": [
{
"first": "Santiago",
"middle": [],
"last": "Castro",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Ver\u00f3nica",
"middle": [],
"last": "P\u00e9rez-Rosas",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4619--4629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santiago Castro, Devamanyu Hazarika, Ver\u00f3nica P\u00e9rez-Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards multimodal sarcasm detection (an Obviously perfect paper). In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4619-4629, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirec- tional transformers for language understanding.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "CASIA image tampering detection evaluation database",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tieniu",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE China Summit and International Conference on Signal and Information Processing",
"volume": "",
"issue": "",
"pages": "422--426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Dong, Wei Wang, and Tieniu Tan. 2013. CASIA image tampering detection evaluation database. In 2013 IEEE China Summit and International Conference on Signal and Information Processing, pages 422-426. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fact-checking on Facebook: What publishers should know",
"authors": [
{
"first": "",
"middle": [
"N"
],
"last": "Facebook",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Facebook. n.d. Fact-checking on Facebook: What publishers should know. Business Help Center.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Too many people think satirical news is real. Google. n.d. What does each label mean?",
"authors": [
{
"first": "R",
"middle": [
"Kelly"
],
"last": "Garrett",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Shannon",
"middle": [],
"last": "Poulsen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Kelly Garrett, Robert Bond, and Shannon Poulsen, 2019. Too many people think satirical news is real. Google. n.d. What does each label mean? Publisher Help Center.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mask r-cnn",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Georgia",
"middle": [],
"last": "Gkioxari",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Dollar",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. 2017. Mask r-cnn. 2017 IEEE International Conference on Computer Vision (ICCV), Oct.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A picture's worth: Digital image analysis and forensics",
"authors": [
{
"first": "Neal",
"middle": [],
"last": "Krawetz",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neal Krawetz. 2007. A picture's worth: Digital image analysis and forensics. Black Hat Briefings.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying nuances in fake news vs. satire: Using semantic and linguistic cues",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Levi",
"suffix": ""
},
{
"first": "Pedram",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Broniatowski",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01160"
]
},
"num": null,
"urls": [],
"raw_text": "Or Levi, Pedram Hosseini, Mona Diab, and David A. Broniatowski. 2019. Identifying nuances in fake news vs. satire: Using semantic and linguistic cues. arXiv preprint arXiv:1910.01160.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parsing satirical humor: a model of cognitive-linguistic satire analysis. Knji\u017eevni jezik",
"authors": [
{
"first": "Adi",
"middle": [],
"last": "Maslo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "231--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adi Maslo. 2019. Parsing satirical humor: a model of cognitive-linguistic satire analysis. Knji\u017eevni jezik, (30):231-253.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "r/fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Nakamura, Sharon Levy, and William Yang Wang. 2019. r/fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fake news or truth? using satirical cues to detect potentially misleading news",
"authors": [
{
"first": "Victoria",
"middle": [],
"last": "Rubin",
"suffix": ""
},
{
"first": "Niall",
"middle": [],
"last": "Conroy",
"suffix": ""
},
{
"first": "Yimin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Cornwell",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Deception Detection",
"volume": "",
"issue": "",
"pages": "7--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake news or truth? using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 7-17, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The use of Photoshop",
"authors": [
{
"first": "David",
"middle": [],
"last": "Schlesinger",
"suffix": ""
}
],
"year": 2007,
"venue": "Reuters Blogs Dashboard",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Schlesinger. 2007. The use of Photoshop. Reuters Blogs Dashboard.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning",
"authors": [
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2556--2565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mmf: A multimodal framework for vision and language research",
"authors": [
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Natarajan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Meet",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanpreet Singh, Vedanuj Goswami, Vivek Natarajan, Yu Jiang, Xinlei Chen, Meet Shah, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. 2020. Mmf: A multimodal framework for vision and language research. https://github.com/facebookresearch/mmf.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "AP News Values and Principals",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Associated Press. 2014. AP News Values and Principals.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Satirical news detection and analysis using attention mechanism and linguistic features",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Dragut",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1979--1989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fan Yang, Arjun Mukherjee, and Eduard Dragut. 2017. Satirical news detection and analysis using attention mechanism and linguistic features. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1979-1989, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Examples of satirical news images created by altering existing images.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "compiled a dataset of scenes from popular TV shows and Cai et al. (2019) used tweets comprising of text and images from Twitter.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "(A) and Figure 2(B) are thumbnails from satirical articles that have clearly been fabricated. However, it is clear from the difference in ELA values that Figure 2(A) is a composite, while the ELA of Figure 2(B) is relatively uniform so the splicing can go undetected. Similarly, Figure 2(C) and Figure 2(D) are both thumbnails from factual articles, yet the drastic difference in ELA values of the building in Figure 2(C) indicates that it has undergone heavy editing while the ELA in Figure 2(D) does not.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "Examples of images in the satire detection dataset and their ELA.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"text": "Examples of articles misclassified by ViLBERT",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": ""
}
}
}
}