{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:14.025025Z"
},
"title": "Automatic Fake News Detection: Are current models \"fact-checking\" or \"gut-checking\"?",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Kelk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": ""
},
{
"first": "Benjamin",
"middle": [
"Basseri"
],
"last": "Wee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": ""
},
{
"first": "Yi",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": ""
},
{
"first": "Richard",
"middle": [],
"last": "Qiu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": "rqiu@college"
},
{
"first": "Chris",
"middle": [],
"last": "Tanner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic fake news detection models are ostensibly based on logic, where the truth of a claim made in a headline can be determined by supporting or refuting evidence found in a resulting web query. These models are believed to be reasoning in some way; however, it has been shown that these same results, or better, can be achieved without considering the claim at all-only the evidence. This implies that other signals are contained within the examined evidence, and could be based on manipulable factors such as emotion, sentiment, or part-of-speech (POS) frequencies, which are vulnerable to adversarial inputs. We neutralize some of these signals through multiple forms of both neural and non-neural pre-processing and style transfer, and find that this flattening of extraneous indicators can induce the models to actually require both claims and evidence to perform well. We conclude with the construction of a model using emotion vectors built off a lexicon and passed through an \"emotional attention\" mechanism to appropriately weight certain emotions. We provide quantifiable results that prove our hypothesis that manipulable features are being used for fact-checking.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic fake news detection models are ostensibly based on logic, where the truth of a claim made in a headline can be determined by supporting or refuting evidence found in a resulting web query. These models are believed to be reasoning in some way; however, it has been shown that these same results, or better, can be achieved without considering the claim at all-only the evidence. This implies that other signals are contained within the examined evidence, and could be based on manipulable factors such as emotion, sentiment, or part-of-speech (POS) frequencies, which are vulnerable to adversarial inputs. We neutralize some of these signals through multiple forms of both neural and non-neural pre-processing and style transfer, and find that this flattening of extraneous indicators can induce the models to actually require both claims and evidence to perform well. We conclude with the construction of a model using emotion vectors built off a lexicon and passed through an \"emotional attention\" mechanism to appropriately weight certain emotions. We provide quantifiable results that prove our hypothesis that manipulable features are being used for fact-checking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent events such as the last two U.S. presidential elections have been greatly affected by fake news, defined as \"fabricated information that disseminates deceptive content, or grossly distort actual news reports, shared on social media platforms\" (Allcott and Gentzkow, 2017) . In fact, the World Economic Forum 2013 report designates massive digital misinformation as a major technological and geopolitical risk (Bovet and Makse, 2019) . As daily social media usage increases (Statista Research Department, 2021), manual fact-checking cannot keep up with this deluge of information.",
"cite_spans": [
{
"start": 250,
"end": 278,
"text": "(Allcott and Gentzkow, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 416,
"end": 439,
"text": "(Bovet and Makse, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic fact-checking models are therefore a necessity, and most of them function using a system of claims and evidence (Hassan et al., 2017) .",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Hassan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a specific claim, the models use external knowledge as evidence. Typically, a web search query is treated as the claim, and a subset of the top search results is treated as the evidence. There is an implicit assumption that the fact-checking models are reasoning in some way, using the evidence to confirm or refute the claim. Recent research (Hansen et al., 2021) found this conclusion may be premature; current models can show improved performance when considering evidence alone, essentially fact-checking an unasked question. While this might seem reasonable given that the evidence is conditioned on the claims by the search engine, this can be exploited as illustrated in Figure 1 , which shows that evidence returned using a ridiculous claim can still appear reasonable if we view the evidence alone without the claim. Furthermore, textual entailment requires both a text and a hypothesis; if we have a result without a hypothesis, we are performing a different, unknown task.",
"cite_spans": [
{
"start": 349,
"end": 370,
"text": "(Hansen et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 684,
"end": 692,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This finding indicates a problem with current automatic fake news detection, signaling that the models rely on features in the evidence typical to fake news, rather than using entailment. Since most automated fact-checking research is primarily concerned with the accuracy of the results, rather than addressing how the results are achieved, we propose a novel investigation into these models and their evidence. We use a variety of pre-processing steps, including neural and non-neural ones, to attempt to reduce the affectations common in evidence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Stemming, stopword removal, negation, and POS-filtering (Babanejad et al., 2020) .",
"cite_spans": [
{
"start": 58,
"end": 82,
"text": "(Babanejad et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Style transfer neural models using the Styleformer model to perform informal-to-formal and formal-to-informal paraphrasing methods (Li et al., 2018; Schmidt, 2020) .",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "(Li et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 151,
"end": 165,
"text": "Schmidt, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also develop our own BERT-based model as an extension of the EmoCred system (Giachanou Figure 1 : An example of why evidence alone does not suffice in identifying fake news, despite the evidence being conditioned on the claim as a search-engine query. Although the returned evidence appearing reputable, it is clear that it has little relevance to deciding the veracity of the claim that \"all Canadians have eaten at least one bear.\" et al., 2019), adding an \"emotional attention\" layer to weight the most relevant emotional signals in a given evidence snippet. We make our code publicly available. 1 With each of these methods, we focus on scores where the models perform better using both the claims and the evidence combined, S C&E , rather than with the evidence alone, S E . Going forward, we will refer to the difference between these dataset combinations as the delta of the pre-processing step, where delta = S C&E \u2212 S E . A positive delta score indicates that the claim was useful and helped yield an increase in performance. Since we are removing indicators that the current models rely on, some of the models perform worse at the task than they did previously. However, a surprising result is that many improved, and the need to consider the claim and the evidence together is a sign of using reasoning rather than manipulable indicators.",
"cite_spans": [
{
"start": 602,
"end": 603,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
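The delta metric reduces to a subtraction of two F1 Macro scores per pre-processing configuration; a minimal sketch follows (the configuration names and score values are hypothetical placeholders, not results from the paper):

```python
# delta = S_{C&E} - S_E: the F1 Macro gain from adding claims to evidence.
# All names and numbers below are hypothetical placeholders.
scores = {
    "baseline": {"claims_and_evidence": 0.49, "evidence_only": 0.50},
    "pos_stop": {"claims_and_evidence": 0.52, "evidence_only": 0.48},
}

def delta(config: dict) -> float:
    """Positive delta means the claim contributed useful signal."""
    return config["claims_and_evidence"] - config["evidence_only"]

for name, config in scores.items():
    print(f"{name}: delta = {delta(config):+.2f}")
```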
{
"text": "Under current fact-checking models, adversarial data can subvert these detectors. Paraphrasing can be performed by inserting fictitious statements into otherwise truthful evidence with little effect on the model's output. For example, an article titled \"Is the GOP losing Walmart?\", could have \"Walmart\" substituted with \"Apple,\" and the predictions are nearly identical despite the news now being fictitious (Zhou et al., 2019 ",
"cite_spans": [
{
"start": 409,
"end": 427,
"text": "(Zhou et al., 2019",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been significant work with automatic fact-checking models using RNNs and Transformers (Shaar et al., 2020a; Alam et al., 2020; Shaar et al., 2020b) as well as non-neural machine learning using TF-IDF vectors (Reddy et al., 2018) .",
"cite_spans": [
{
"start": 96,
"end": 117,
"text": "(Shaar et al., 2020a;",
"ref_id": null
},
{
"start": 118,
"end": 136,
"text": "Alam et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 137,
"end": 157,
"text": "Shaar et al., 2020b)",
"ref_id": "BIBREF24"
},
{
"start": 203,
"end": 238,
"text": "TF-IDF vectors (Reddy et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Current fake news detection models that use a claim's search engine results as evidence may unintentionally use hidden signals that are not attributed to the claim (Hansen et al., 2021) . Additionally, models may in fact simply memorize biases within data (Gururangan et al., 2018) . Improvements can be made when using human-identified justifications for fact-checking (Alhindi et al., 2018; Vo and Lee, 2020), and making use of textual entailment can offer improvements (Saikh et al., 2019) .",
"cite_spans": [
{
"start": 164,
"end": 185,
"text": "(Hansen et al., 2021)",
"ref_id": "BIBREF10"
},
{
"start": 256,
"end": 281,
"text": "(Gururangan et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 472,
"end": 492,
"text": "(Saikh et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Emotional text can signal low credibility (Rashkin et al., 2017), characterizing fake news as a task where pre-processing can be used effectively to diminish bias (Giachanou et al., 2019; Babanejad et al., 2020) . A framework to both categorize fake news and to identify features that differentiate fake news from real news has been described by Molina et al. (2021) , and debiasing inappropriate subjectivity in text can be accomplished by replacing a single biased word in each sentence (Pryzant et al., 2020) . : Ablation studies where evidence was sequentially removed for training and evaluation of models. On the far left, we show the most effective non-neural pre-processing compared to the baseline of none. Performance generally worsens as the ablation increases.",
"cite_spans": [
{
"start": 163,
"end": 187,
"text": "(Giachanou et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 188,
"end": 211,
"text": "Babanejad et al., 2020)",
"ref_id": null
},
{
"start": 346,
"end": 366,
"text": "Molina et al. (2021)",
"ref_id": "BIBREF16"
},
{
"start": 489,
"end": 511,
"text": "(Pryzant et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Using the claim as a query, the top ten results from Google News (\"snippets\") constitute the evidence (Hansen et al., 2021) . PolitiFact and Snopes use five labels (False, Mostly False, Mixture, Mostly True, True), which we collapse to True, Mixture, and False.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Hansen et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
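The five-to-three label collapse can be expressed as a simple mapping; a sketch under the assumption that the intermediate labels fold into their nearest pole (the exact source label strings and the fate of the "Mostly" labels are our assumptions, not stated in the paper):

```python
# Hypothetical label normalization: collapse the five PolitiFact/Snopes
# labels into {True, Mixture, False}. The source strings and the mapping
# of the "Mostly" labels are assumptions.
LABEL_MAP = {
    "true": "True",
    "mostly true": "True",
    "mixture": "Mixture",
    "mostly false": "False",
    "false": "False",
}

def collapse(label: str) -> str:
    return LABEL_MAP[label.strip().lower()]

assert collapse("Mostly False") == "False"
```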
{
"text": "To construct the emotion vectors for our EmoAttention system, we use the NRC Affect Intensity Lexicon, which maps approximately 6,000 terms to values between 0 and 1, representing the term's intensity along 8 different emotions (Mohammad, 2017). For example, \"interrupt\" and \"rage\" are both categorized as anger words, but with the respective intensity values of 0.333 and 0.911.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The most common automatic fact-checking NLP models are based on term frequency, word embeddings, and contextualized word embeddings, using Random Forests, LSTMs, and BERT (Hassan et al., 2017). We limit our experimentation to the BERT model, as it is the highest performing state-of-the-art model and was thoroughly tested in (Hansen et al., 2021 ). This BERT model with no pre-processing is our baseline model.",
"cite_spans": [
{
"start": 326,
"end": 346,
"text": "(Hansen et al., 2021",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
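The baseline can be approximated as a standard BERT sentence-pair classifier, with the claim as the first segment and the concatenated snippets as the second; a sketch using Hugging Face Transformers (the checkpoint, truncation strategy, and label count are illustrative assumptions, not the authors' exact configuration):

```python
# Sketch of a claim+evidence BERT classifier; illustrative, not the
# authors' exact setup. Claim and concatenated snippets form BERT's
# two input segments.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # True / Mixture / False

claim = "All Canadians have eaten at least one bear."
snippets = ["Bear meat is legal to sell in parts of Canada.",
            "A survey of Canadian dietary habits found ..."]
inputs = tokenizer(claim, " ".join(snippets), truncation=True,
                   max_length=512, return_tensors="pt")

with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
```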
{
"text": "For the style transfer model we use the Styleformer model (Li et al., 2018; Schmidt, 2020) , a Transformer-based seq2seq model.",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "(Li et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 76,
"end": 90,
"text": "Schmidt, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We also develop our own BERT-based model using the EmoLexi and EmoInt implementation of the EmoCred system by adding an emotional attention layer to emphasize certain emotion representations for a given claim and its evidence (Giachanou et al., 2019) . There is also a snippet attention layer at-tending to which evidence itself should be weighted most heavily for the given claim. Our goal is to separate affect-based properties from factual content of the text. Toward this, we run a large number of permutations of the following four simple pre-processing steps (see Figure 4 in Appendix B for results). These steps were chosen as they have been shown to facilitate affective tasks such as sentiment analysis, emotion classification, and sarcasm detection (Babanejad et al., 2020) . In some cases we used a modified form -such as removing adverbs for POS pre-processing.",
"cite_spans": [
{
"start": 226,
"end": 250,
"text": "(Giachanou et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 759,
"end": 783,
"text": "(Babanejad et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 570,
"end": 578,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
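Before turning to the list of steps, a minimal sketch of the emotional- and snippet-attention layers just described (the hidden sizes, linear scoring functions, and fusion by concatenation are our assumptions; the authors' exact architecture may differ):

```python
import torch
import torch.nn as nn

class EmoAttention(nn.Module):
    """Sketch: attend separately over snippet encodings and emotion
    vectors, then fuse the two context vectors for classification.
    Dimensions and scoring functions are illustrative assumptions."""

    def __init__(self, snip_dim=768, emo_dim=8, n_labels=3):
        super().__init__()
        self.snip_score = nn.Linear(snip_dim, 1)  # snippet attention
        self.emo_score = nn.Linear(emo_dim, 1)    # emotional attention
        self.classifier = nn.Linear(snip_dim + emo_dim, n_labels)

    def forward(self, snippets, emotions):
        # snippets: (batch, 10, snip_dim); emotions: (batch, 10, emo_dim)
        a_snip = torch.softmax(self.snip_score(snippets), dim=1)
        a_emo = torch.softmax(self.emo_score(emotions), dim=1)
        context = torch.cat([(a_snip * snippets).sum(dim=1),
                             (a_emo * emotions).sum(dim=1)], dim=-1)
        return self.classifier(context)
```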
{
"text": "\u2022 Negation (NEG): A mechanism that transforms a negated statement into its inverse (Benamara et al., 2012) . An example, \"I am not happy\" would have \"not\" removed and \"happy\" replaced by its antonym, forming the sentence \"I am sad.\"",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "(Benamara et al., 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
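A rough sketch of such a negation transform using WordNet antonyms (the scope handling is deliberately naive, and the antonym chosen may differ from a human's, e.g. "unhappy" rather than "sad"):

```python
# Naive negation transform: drop "not" and swap the following word for a
# WordNet antonym when one exists. Real negation scope handling is harder.
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)

def antonym(word):
    for syn in wordnet.synsets(word):
        for lemma in syn.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

def negate(tokens):
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] == "not" and i + 1 < len(tokens) and antonym(tokens[i + 1]):
            out.append(antonym(tokens[i + 1]))
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(" ".join(negate("i am not happy".split())))  # e.g. "i am unhappy"
```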
{
"text": "\u2022 Parts-of-Speech (POS): We keep only three parts of speech: nouns, verbs, and adjectives. We initially included adverbs but found removing them improved results. This could be due to some adverbs being emotionally charged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "\u2022 Stopwords (STOP): These are generally the most common words in a language, such as function words and prepositions. We use the NLTK library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "\u2022 Stemming (STEM): Reducing a word to its root form. We use the NLTK Snowball Stemmer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
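A combined sketch of the POS, STOP, and STEM steps above using NLTK, which the text names for stopwords and stemming (the tag prefixes used to keep nouns, verbs, and adjectives are an implementation assumption):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

for pkg in ("punkt", "stopwords", "averaged_perceptron_tagger"):
    nltk.download(pkg, quiet=True)

STOP = set(stopwords.words("english"))
STEMMER = SnowballStemmer("english")
KEEP_TAGS = ("NN", "VB", "JJ")  # noun/verb/adjective tag prefixes

def preprocess(text, pos=True, stop=True, stem=True):
    tokens = nltk.word_tokenize(text.lower())
    if pos:  # keep only nouns, verbs, and adjectives
        tokens = [w for w, t in nltk.pos_tag(tokens)
                  if t.startswith(KEEP_TAGS)]
    if stop:  # drop stopwords
        tokens = [w for w in tokens if w not in STOP]
    if stem:  # reduce each word to its root form
        tokens = [STEMMER.stem(w) for w in tokens]
    return " ".join(tokens)
```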
{
"text": "We use the adversarial technique of generating paraphrases for all the claims and evidence through style transfer. The neural Transformer-based seq2seq model Styleformer changes the formality of the text, and it frequently changes the ordering of the sentence itself, too. For example, the formal-to-informal model changes \"A photograph shows William Harley and Arthur Davidson unveiling their first motorcycle in 1914\" to \"In a 1914 photograph William Harley and Arthur Davidson unveil their first motorcycle.\" As well, it removes punctuation and alters phrasing that might be understood as sarcasm, such as \"Melania Trump said that Native Americans upset about the Dakota Access Pipeline should 'go back to India\"' to \"Melania Trump told Native Americans that was upset by the Dakota Access Pipeline, that they should travel to India.\" The informalto-formal model lowercases everything and also changes the text significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural formality style transfer",
"sec_num": "5.2"
},
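Styleformer wraps Hugging Face seq2seq checkpoints; a hedged sketch of equivalent formality transfer (the checkpoint name and whether a task prefix is required are assumptions; consult the Styleformer repository for the exact usage):

```python
# Hedged sketch of formal-to-informal transfer with a seq2seq paraphraser.
# The checkpoint name is an assumption; some checkpoints expect a task
# prefix, so check the model card before use.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "prithivida/formal_to_informal_styletransfer"  # assumed name
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

def transfer(text: str) -> str:
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=128, num_beams=5)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(transfer("A photograph shows William Harley and Arthur Davidson "
               "unveiling their first motorcycle in 1914."))
```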
{
"text": "We chose this paraphrasing model based on the idea that fake news -especially that which is frequently posted on social media -has a certain polarizing style that might be neutralized by altering the formality of the text. Rather surprisingly, we received better results transforming the style from formal-to-informal than we did with informal-toformal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural formality style transfer",
"sec_num": "5.2"
},
{
"text": "The EmoCred systems of EmoLexi and EmoInt use a lexicon to determine emotional word counts and intensities, respectively (Giachanou et al., 2019) . We use the NRC Affect Intensity Lexicon, a \"highcoverage lexicons that captures word-affect intensities\" for eight basic emotions, which were created using a technique called best-worst scaling (Mohammad, 2017). These eight emotions can be used to create an emotion vector for a sentence, where each index corresponds to a score: [anger, anticipation, disgust, fear, joy, sadness, surprise, trust].",
"cite_spans": [
{
"start": 121,
"end": 145,
"text": "(Giachanou et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EmoCred emotion representations with emotional attention",
"sec_num": "5.3"
},
{
"text": "As an example, a sentence that contains the word \"suffering\" conveys sadness with an NRC Affect Intensity Lexicon intensity of 0.844, whereas the word \"affection\" indicates joy with an intensity of 0.647. We create the vector of length eight, and for each word associated with an emotion, the emotion's indexed value is either: (1) incremented by one for EmoLexi; or, (2) incremented by its intensity for EmoInt. Thus, the sentence \"He had an affection for suffering\" would have an EmoLexi emotion vector of [0, 0, 0, 0, 1, 1, 0, 0] and an EmoInt emotion vector of [0, 0, 0, 0, 0.647, 0.844, 0, 0] We build on this EmoCred framework, adding an attention system for emotion that gives a weight to each emotion vector, just as the attention layer for each snippet gives a weight to each snippet. The end result is that two independent attention layers attend to the ten snippets and ten emotional representations independently, and we call the resulting system Emotional Attention (see Figure 3 ).",
"cite_spans": [
{
"start": 565,
"end": 597,
"text": "[0, 0, 0, 0, 0.647, 0.844, 0, 0]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 984,
"end": 992,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "EmoCred emotion representations with emotional attention",
"sec_num": "5.3"
},
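The worked example above maps directly to code; a sketch that assumes the lexicon is loaded as a word-to-(emotion, intensity) dictionary and uses only the two entries quoted in the text:

```python
# Build EmoLexi (count) and EmoInt (intensity) vectors over the eight
# NRC emotions. The two lexicon entries are the ones quoted in the text.
EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]
LEXICON = {"affection": ("joy", 0.647), "suffering": ("sadness", 0.844)}

def emotion_vectors(sentence):
    lexi = [0.0] * 8
    intensity = [0.0] * 8
    for word in sentence.lower().split():
        if word in LEXICON:
            emotion, score = LEXICON[word]
            i = EMOTIONS.index(emotion)
            lexi[i] += 1            # EmoLexi: count occurrences
            intensity[i] += score   # EmoInt: sum intensities
    return lexi, intensity

print(emotion_vectors("He had an affection for suffering"))
# -> ([0, 0, 0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 0.647, 0.844, 0, 0])
```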
{
"text": "Surprisingly, the four top-performing models with the Snopes dataset include two non-neural models and two neural models. All four achieve greater F1 Macro scores than the baseline BERT model without pre-processing (see Figure 2 ). POS and STOP yield the biggest delta between S C&E vs. S E , followed by EmoInt and Informal Style Transfer. However, EmoInt yields the highest F1 Macro, followed by POS, Informal, and STOP.",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 228,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In PolitiFact, none of the pre-processing steps achieve a delta greater than zero for S C&E versus S E . The combination of POS+STOP steps come closest to parity, followed by EmoInt, then POS and STOP. For the best F1 Macro scores overall, EmoAttention's two forms (i.e., EmoInt and EmoLexi) were the two best, followed by STOP and POS. All of these pre-processing steps achieve higher F1 Macro scores than the baseline BERT model. Further, they yield better deltas for S C&E versus S E , implying that the model now requires the claims to reason.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Many pre-processing steps increase both the model's F1 scores and its need for claims and evidence, validating our hypothesis that signals in style and tone have become a crutch for factchecking models. Rather than doing entailment, they are leveraging other signals -perhaps similar to sentiment analysis -and relying on a \"gut feeling\". EmoAttention generates our best predictions and deltas, confirming our suspicion that the models rely on emotionally charged style as a predictive feature. This is further narrowed to emotional intensity: the EmoInt intensity score-based model performs much better than its count-based counterpart EmoLexi. Thus, evidence containing emotions associated with fake news will be considered more when scoring the claim. One surprising result is the effectiveness of the simple POS and STOP pre-processing steps. POS only included nouns, verbs, and adjectives (i.e., a superset of STOP). This could explain why it has the best delta between S C&E vs. S E . Future research could investigate if stopwords, which are often discarded, actually contain signals such as anaphora: a repetitive rhetoric style which can affect NLP analyses (Liddy, 1990) .",
"cite_spans": [
{
"start": 1167,
"end": 1180,
"text": "(Liddy, 1990)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "As an example, Donald Trump makes heavy use of anaphora in his 2017 inauguration speech: \"Together, we will make America strong again. We will make America wealthy again. We will make America proud again. We will make America safe again. And, yes, together, we will make america great again.\" (Trump Inauguration Address, 2017) By removing stopwords \"we\", \"will\" and \"again\", the model relies less on the text's rhetoric style and more on the entailment we are seeking. We propose further study on the effects of STOP and POS, as well as experimenting with different emotional vectors and EmoAttention to make factchecking models more robust. Automatic Fake News detection remains a challenging problem, and unfortunately, current fact-checking models can be subverted by adversarial techniques that exploit emotionally charged writing.",
"cite_spans": [
{
"start": 293,
"end": 327,
"text": "(Trump Inauguration Address, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Disinformation is much more than just a mild inconvenience for society; it has resulted in needless deaths in the COVID-19 pandemic, and has fomented violence and political instability all over the globe (van der Linden et al., 2020) . Our goal in this paper is to discover exploitable weaknesses in current fact-checking models and recommend that such models not be relied upon in their current form. We point out how the models are dependent on emotional signals in the texts instead of exclusively performing textual entailment, and that additional research needs to be done to ensure they are performing the proper task.",
"cite_spans": [
{
"start": 204,
"end": 233,
"text": "(van der Linden et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Impact Statement",
"sec_num": null
},
{
"text": "Harm Minimization Our quantifying of the effects of pre-processing on fact-checking models does not cause any harm to real-world users or companies. Research has demonstrated that adversarial attacks could result in disinformation being labeled as factual news. Disinformation has become increasingly present in global politics, as some nation-states with significant resources have disseminated propaganda to create political dissent in other countries (Zhou et al., 2019) . Our research here has demonstrated potential risks: emotional writing could be used as an exploit to circumvent fact-checking models. Thus, we urge others to further illuminate such vulnerabilities, to minimize potential harms, and to encourage improvements with new models.",
"cite_spans": [
{
"start": 454,
"end": 473,
"text": "(Zhou et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Impact Statement",
"sec_num": null
},
{
"text": "Deployment Social media companies often deal with fake news by placing highly visible labels. However, simply tagging stories as false can make readers more willing to believe and share other false, untagged stories. This unintended consequence -in which the selective labeling of false news makes other news stories seem more legitimate -has been called the \"implied-truth effect\" (Pennycook et al., 2019) . Thus, unless these models become so accurate that they catch all fake news presented to them, the entire basis of their use is called into question.",
"cite_spans": [
{
"start": 382,
"end": 406,
"text": "(Pennycook et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Impact Statement",
"sec_num": null
},
{
"text": "Despite the significant progress in developing models to correctly identify fake news, the real elephant in the room is that many people simply ignore the labels (Molina et al., 2021) . There is, however, prior work supporting the idea that if people are warned that a headline is false, they will be less likely to believe it (Ecker et al., 2010; Lewandowsky et al., 2012) . Because of this, we believe this research represents a net benefit for humanity.",
"cite_spans": [
{
"start": 162,
"end": 183,
"text": "(Molina et al., 2021)",
"ref_id": "BIBREF16"
},
{
"start": 327,
"end": 347,
"text": "(Ecker et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 348,
"end": 373,
"text": "Lewandowsky et al., 2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Impact Statement",
"sec_num": null
},
{
"text": "Warning labels are just one way of dealing with properly identified fake news, and publishers can choose to simply not allow it on their platforms. Of course, this issue leads to questions of censorship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Impact Statement",
"sec_num": null
},
{
"text": "In Figure 4 , we report all results for each preprocessing step. Figure 4 : The full table of results for all pre-processing steps for the Snopes (SNES) and PolitiFact (POMT) datasets. Due to the high compute requirements of the formal and informal style transfer models, these datasets were only prepared for the Snopes dataset. The darkest green colors indicate the best results, while the red indicates the worst. Multiple pre-processing steps such as (pos, stop) were performed in the order written.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": null
},
{
"start": 65,
"end": 73,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Extended Results",
"sec_num": null
},
{
"text": "DatasetsWe use the MultiFC dataset(Augenstein et al., 2019), which consists of political claims and associated truth labels from PolitiFact and Snopes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fighting the COVID-19 infodemic: Modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society",
"authors": [
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Shaden",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Martino",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Firoj Alam, Shaden Shaar, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Ab- delali, Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Kareem Darwish, and Preslav Nakov. 2020. Fight- ing the COVID-19 infodemic: Modeling the per- spective of journalists, fact-checkers, social media platforms, policy makers, and the society. CoRR, abs/2005.00033.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Where is your evidence: Improving factchecking by justification modeling",
"authors": [
{
"first": "Savvas",
"middle": [],
"last": "Tariq Alhindi",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Petridis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "85--90",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5513"
]
},
"num": null,
"urls": [],
"raw_text": "Tariq Alhindi, Savvas Petridis, and Smaranda Mure- san. 2018. Where is your evidence: Improving fact- checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERi- fication (FEVER), pages 85-90, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Social media and fake news in the 2016 election",
"authors": [
{
"first": "Hunt",
"middle": [],
"last": "Allcott",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gentzkow",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Economic Perspectives",
"volume": "31",
"issue": "2",
"pages": "211--247",
"other_ids": {
"DOI": [
"10.1257/jep.31.2.211"
]
},
"num": null,
"urls": [],
"raw_text": "Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2):211-36.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Dongsheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"Chaves"
],
"last": "Lima",
"suffix": ""
},
{
"first": "Casper",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Aijun An, and Manos Papagelis. 2020. A comprehensive analysis of preprocessing for word representation learning in affective tasks",
"authors": [
{
"first": "Nastaran",
"middle": [],
"last": "Babanejad",
"suffix": ""
},
{
"first": "Ameeta",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5799--5810",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.514"
]
},
"num": null,
"urls": [],
"raw_text": "Nastaran Babanejad, Ameeta Agrawal, Aijun An, and Manos Papagelis. 2020. A comprehensive analysis of preprocessing for word representation learning in affective tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 5799-5810, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How do negation and modality impact on opinions",
"authors": [
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Baptiste",
"middle": [],
"last": "Chardon",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Mathieu",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics",
"volume": "12",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farah Benamara, Baptiste Chardon, Yannick Mathieu, Vladimir Popescu, and Nicholas Asher. 2012. How do negation and modality impact on opinions? In Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics, Ex- ProM '12, page 10-18, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Influence of fake news in twitter during the 2016 us presidential election",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Bovet",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hern\u00e1n",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Makse",
"suffix": ""
}
],
"year": 2019,
"venue": "JournalNature Communications",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41467-018-07761-2"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre Bovet and Hern\u00e1n A. Makse. 2019. Influence of fake news in twitter during the 2016 us presidential election. JournalNature Communications, 10(1).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Explicit warnings reduce but do not eliminate the continued influence of misinformation",
"authors": [
{
"first": "Ullrich",
"middle": [],
"last": "Ecker",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Lewandowsky",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2010,
"venue": "Memory and cognition",
"volume": "38",
"issue": "",
"pages": "1087--100",
"other_ids": {
"DOI": [
"10.3758/MC.38.8.1087"
]
},
"num": null,
"urls": [],
"raw_text": "Ullrich Ecker, Stephan Lewandowsky, and David Tang. 2010. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory and cognition, 38:1087-100.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Leveraging emotional signals for credibility detection",
"authors": [
{
"first": "Anastasia",
"middle": [],
"last": "Giachanou",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Crestani",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19",
"volume": "",
"issue": "",
"pages": "877--880",
"other_ids": {
"DOI": [
"10.1145/3331184.3331285"
]
},
"num": null,
"urls": [],
"raw_text": "Anastasia Giachanou, Paolo Rosso, and Fabio Crestani. 2019. Leveraging emotional signals for credibil- ity detection. In Proceedings of the 42nd Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR'19, page 877-880, New York, NY, USA. Association for Com- puting Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "107--112",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2017"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language infer- ence data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic fake news detection: Are models learning to reason?",
"authors": [
{
"first": "Casper",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Lucas Chaves",
"middle": [],
"last": "Lima",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casper Hansen, Christian Hansen, and Lucas Chaves Lima. 2021. Automatic fake news detection: Are models learning to reason?",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster",
"authors": [
{
"first": "Naeemul",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Fatma",
"middle": [],
"last": "Arslan",
"suffix": ""
},
{
"first": "Chengkai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Tremayne",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17",
"volume": "",
"issue": "",
"pages": "1803--1812",
"other_ids": {
"DOI": [
"10.1145/3097983.3098131"
]
},
"num": null,
"urls": [],
"raw_text": "Naeemul Hassan, Fatma Arslan, Chengkai Li, and Mark Tremayne. 2017. Toward automated fact-checking: Detecting check-worthy factual claims by claim- buster. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, page 1803-1812, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Misinformation and its correction: Continued influence and successful debiasing",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Lewandowsky",
"suffix": ""
},
{
"first": "K",
"middle": [
"H"
],
"last": "Ullrich",
"suffix": ""
},
{
"first": "Colleen",
"middle": [
"M"
],
"last": "Ecker",
"suffix": ""
},
{
"first": "Norbert",
"middle": [],
"last": "Seifert",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2012,
"venue": "Psychological Science in the Public Interest",
"volume": "13",
"issue": "",
"pages": "106--131",
"other_ids": {
"DOI": [
"10.1177/1529100612451018"
],
"PMID": [
"26173286"
]
},
"num": null,
"urls": [],
"raw_text": "Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook. 2012. Misinformation and its correction: Continued influ- ence and successful debiasing. Psychological Sci- ence in the Public Interest, 13(3):106-131. PMID: 26173286.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Delete, retrieve, generate: A simple approach to sentiment and style transfer",
"authors": [
{
"first": "Juncen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple approach to sen- timent and style transfer. CoRR, abs/1804.06437.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Anaphora in natural language processing and information retrieval",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Duross",
"suffix": ""
},
{
"first": "Liddy",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1990,
"venue": "Special Issue: Natural Language Processing and Information Retrieval",
"volume": "26",
"issue": "",
"pages": "39--52",
"other_ids": {
"DOI": [
"10.1016/0306-4573(90)90008-P"
]
},
"num": null,
"urls": [],
"raw_text": "Elizabeth DuRoss Liddy. 1990. Anaphora in natural language processing and information retrieval. In- formation Processing & Management, 26(1):39-52. Special Issue: Natural Language Processing and In- formation Retrieval.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "fake news\" is not simply false information: A concept explication and taxonomy of online content",
"authors": [
{
"first": "Maria",
"middle": [
"D"
],
"last": "Molina",
"suffix": ""
},
{
"first": "S",
"middle": [
"Shyam"
],
"last": "Sundar",
"suffix": ""
},
{
"first": "Thai",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2021,
"venue": "American Behavioral Scientist",
"volume": "65",
"issue": "2",
"pages": "180--212",
"other_ids": {
"DOI": [
"10.1177/0002764219878224"
]
},
"num": null,
"urls": [],
"raw_text": "Maria D. Molina, S. Shyam Sundar, Thai Le, and Dong- won Lee. 2021. \"fake news\" is not simply false information: A concept explication and taxonomy of online content. American Behavioral Scientist, 65(2):180-212.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings",
"authors": [
{
"first": "Gordon",
"middle": [],
"last": "Pennycook",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "Management Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1287/mnsc.2019.3478"
]
},
"num": null,
"urls": [],
"raw_text": "Gordon Pennycook, Adam Bear, and Evan Collins. 2019. The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived ac- curacy of headlines without warnings. Management Science, page 1.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatically neutralizing subjective bias in text",
"authors": [
{
"first": "Reid",
"middle": [],
"last": "Pryzant",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Dass",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "480--489",
"other_ids": {
"DOI": [
"10.1609/aaai.v34i01.5385"
]
},
"num": null,
"urls": [],
"raw_text": "Reid Pryzant, Richard Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, and Diyi Yang. 2020. Au- tomatically neutralizing subjective bias in text. Pro- ceedings of the AAAI Conference on Artificial Intelli- gence, 34:480-489.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Truth of varying shades: Analyzing language in fake news and political fact-checking",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Hannah Rashkin",
"suffix": ""
},
{
"first": "Jin",
"middle": [
"Yea"
],
"last": "Choi",
"suffix": ""
},
{
"first": "Svitlana",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2931--2937",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1317"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and po- litical fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2931-2937, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Defactonlp: Fact verification using entity recognition, TFIDF vector comparison and decomposable attention",
"authors": [
{
"first": "Gil",
"middle": [],
"last": "Aniketh Janardhan Reddy",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Rocha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Esteves",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniketh Janardhan Reddy, Gil Rocha, and Diego Es- teves. 2018. Defactonlp: Fact verification using en- tity recognition, TFIDF vector comparison and de- composable attention. CoRR, abs/1809.00509.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Asif Ekbal, and Pushpak Bhattacharyya",
"authors": [
{
"first": "Tanik",
"middle": [],
"last": "Saikh",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Anand",
"suffix": ""
}
],
"year": 2019,
"venue": "A Novel Approach Towards Fake News Detection: Deep Learning Augmented with Textual Entailment Features",
"volume": "",
"issue": "",
"pages": "345--358",
"other_ids": {
"DOI": [
"10.1007/978-3-030-23281-8_30"
]
},
"num": null,
"urls": [],
"raw_text": "Tanik Saikh, Amit Anand, Asif Ekbal, and Pushpak Bhattacharyya. 2019. A Novel Approach Towards Fake News Detection: Deep Learning Augmented with Textual Entailment Features, pages 345-358.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generative text style transfer for improved language sophistication",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Schmidt",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Schmidt. 2020. Generative text style transfer for improved language sophistication. Stanford CS230.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Overview of checkthat! 2020 english: Automatic identification and verification of claims in social media",
"authors": [
{
"first": "Shaden",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Babulkov",
"suffix": ""
},
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Tamer",
"middle": [],
"last": "Elsayed",
"suffix": ""
},
{
"first": "Maram",
"middle": [],
"last": "Hasanain",
"suffix": ""
},
{
"first": "Reem",
"middle": [],
"last": "Suwaileh",
"suffix": ""
},
{
"first": "Fatima",
"middle": [],
"last": "Haouari",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Martino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "CLEF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaden Shaar, Alex Nikolov, Nikolay Babulkov, Firoj Alam, Alberto Barr\u00f3n-Cede\u00f1o, Tamer Elsayed, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Giovanni Da San Martino, and Preslav Nakov. 2020b. Overview of checkthat! 2020 english: Automatic identification and verification of claims in social me- dia. In CLEF.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Inoculating against fake news about covid-19",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Sander Van Der Linden",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Roozenbeek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Compton",
"suffix": ""
}
],
"year": 2020,
"venue": "Frontiers in Psychology",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3389/fpsyg.2020.566790"
]
},
"num": null,
"urls": [],
"raw_text": "Sander van der Linden, Jon Roozenbeek, and Josh Compton. 2020. Inoculating against fake news about covid-19. Frontiers in Psychology, 11:2928.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Where are the facts? searching for fact-checked information to alleviate the spread of fake news",
"authors": [
{
"first": "Nguyen",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Kyumin",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7717--7731",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.621"
]
},
"num": null,
"urls": [],
"raw_text": "Nguyen Vo and Kyumin Lee. 2020. Where are the facts? searching for fact-checked information to alle- viate the spread of fake news. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7717-7731, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Fake news detection via NLP is vulnerable to adversarial attacks",
"authors": [
{
"first": "Zhixuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Huankang",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Moorthy",
"middle": [],
"last": "Meghana",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhixuan Zhou, Huankang Guan, Meghana Moorthy Bhat, and Justin Hsu. 2019. Fake news detection via NLP is vulnerable to adversarial attacks. CoRR, abs/1901.09657.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 2: Ablation studies where evidence was sequentially removed for training and evaluation of models. On the far left, we show the most effective non-neural pre-processing compared to the baseline of none. Performance generally worsens as the ablation increases.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "The EmoAttention BERT model architecture using emotional-and snippet attention 5 Experiments 5.1 Non-neural pre-processing",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Top results from various pre-processing steps. The top three steps are highlighted in blue. The lowest F1 Macro scores and deltas are in red. With the exception of EmoLexi tying for the lowest delta, the best pre-processing steps outperform the baseline BERT model fromHansen et al. (2021).",
"content": "<table/>"
}
}
}
}